WorldWideScience

Sample records for cms tier-2 sites

  1. Operational experience with CMS Tier-2 sites

    International Nuclear Information System (INIS)

    Gonzalez Caballero, I

    2010-01-01

    In the CMS computing model, more than one third of the computing resources are located at Tier-2 sites, which are distributed across the countries in the collaboration. These sites are the primary platform for user analyses; they host datasets that are created at Tier-1 sites, and users from all CMS institutes submit analysis jobs that run on those data through grid interfaces. They are also the primary resource for the production of large simulation samples for general use in the experiment. As a result, Tier-2 sites have an interesting mix of organized experiment-controlled activities and chaotic user-controlled activities. CMS currently operates about 40 Tier-2 sites in 22 countries, making the sites a far-flung computational and social network. We describe our operational experience with the sites, touching on our achievements, the lessons learned, and the challenges for the future.

  2. Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS

    International Nuclear Information System (INIS)

    Letts, J; Magini, N

    2011-01-01

    Tier-2 to Tier-2 data transfers have been identified as a necessary extension of the CMS computing model. The Debugging Data Transfers (DDT) Task Force in CMS was charged with commissioning Tier-2 to Tier-2 PhEDEx transfer links beginning in late 2009, originally to serve the needs of physics analysis groups for the transfer of their results between the storage elements of the Tier-2 sites associated with the groups. PhEDEx is the data transfer middleware of the CMS experiment. For analysis jobs using CRAB, the CMS Remote Analysis Builder, the challenges of remote stage out of job output at the end of the analysis jobs led to the introduction of a local fallback stage out, and will eventually require the asynchronous transfer of user data over essentially all of the Tier-2 to Tier-2 network using the same PhEDEx infrastructure. In addition, direct file sharing of physics and Monte Carlo simulated data between Tier-2 sites can relieve the operational load of the Tier-1 sites in the original CMS Computing Model, and already represents an important component of CMS PhEDEx data transfer volume. The experience, challenges and methods used to debug and commission the thousands of data transfer links between CMS Tier-2 sites world-wide are explained and summarized. The resulting operational experience with Tier-2 to Tier-2 transfers is also presented.

  3. Large Scale Commissioning and Operational Experience with Tier-2 to Tier-2 Data Transfer Links in CMS

    CERN Document Server

    Letts, James

    2010-01-01

    Tier-2 to Tier-2 data transfers have been identified as a necessary extension of the CMS computing model. The Debugging Data Transfers (DDT) Task Force in CMS was charged with commissioning Tier-2 to Tier-2 PhEDEx transfer links beginning in late 2009, originally to serve the needs of physics analysis groups for the transfer of their results between the storage elements of the Tier-2 sites associated with the groups. PhEDEx is the data transfer middleware of the CMS experiment. For analysis jobs using CRAB, the CMS Remote Analysis Builder, the challenges of remote stage out of job output at the end of the analysis jobs led to the introduction of a local fallback stage out, and will eventually require the asynchronous transfer of user data over essentially all of the Tier-2 to Tier-2 network using the same PhEDEx infrastructure. In addition, direct file sharing of physics and Monte Carlo simulated data between Tier-2 sites can relieve the operational load of the Tier-1 sites in the original CMS Computing Model...

  4. Exercising CMS dataflows and workflows in computing challenges at the Spanish Tier-1 and Tier-2 sites

    International Nuclear Information System (INIS)

    Caballero, J; Colino, N; Peris, A D; G-Abia, P; Hernandez, J M; R-Calonge, F J; Cabrillo, I; Caballero, I G; Marco, R; Matorras, F; Flix, J; Merino, G

    2008-01-01

    An overview of the data transfer, processing and analysis operations conducted at the Spanish Tier-1 (PIC, Barcelona) and Tier-2 (CIEMAT-Madrid and IFCA-Santander federation) centres during the past CMS CSA06 Computing, Software and Analysis challenge and in preparation for CSA07 is presented

  5. Exercising CMS dataflows and workflows in computing challenges at the Spanish Tier-1 and Tier-2 sites

    Energy Technology Data Exchange (ETDEWEB)

    Caballero, J; Colino, N; Peris, A D; G-Abia, P; Hernandez, J M; R-Calonge, F J [CIEMAT, Madrid (Spain)]; Cabrillo, I; Caballero, I G; Marco, R; Matorras, F [IFCA, Santander (Spain)]; Flix, J; Merino, G [PIC, Barcelona (Spain)], E-mail: jose.hernandez@ciemat.es

    2008-07-15

    An overview of the data transfer, processing and analysis operations conducted at the Spanish Tier-1 (PIC, Barcelona) and Tier-2 (CIEMAT-Madrid and IFCA-Santander federation) centres during the past CMS CSA06 Computing, Software and Analysis challenge and in preparation for CSA07 is presented.

  6. Experience building and operating the CMS Tier-1 computing centres

    Science.gov (United States)

    Albert, M.; Bakken, J.; Bonacorsi, D.; Brew, C.; Charlot, C.; Huang, Chih-Hao; Colling, D.; Dumitrescu, C.; Fagan, D.; Fassi, F.; Fisk, I.; Flix, J.; Giacchetti, L.; Gomez-Ceballos, G.; Gowdy, S.; Grandi, C.; Gutsche, O.; Hahn, K.; Holzman, B.; Jackson, J.; Kreuzer, P.; Kuo, C. M.; Mason, D.; Pukhaeva, N.; Qin, G.; Quast, G.; Rossman, P.; Sartirana, A.; Scheurer, A.; Schott, G.; Shih, J.; Tader, P.; Thompson, R.; Tiradani, A.; Trunov, A.

    2010-04-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, data serving capacity to Tier-2 centres for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s including the stable operations of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high performance data serving. We will also present the operations experience utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  7. Experience building and operating the CMS Tier-1 computing centres

    International Nuclear Information System (INIS)

    Albert, M; Bakken, J; Huang, Chih-Hao; Dumitrescu, C; Fagan, D; Fisk, I; Giacchetti, L; Gutsche, O; Holzman, B; Bonacorsi, D; Grandi, C; Brew, C; Jackson, J; Charlot, C; Colling, D; Fassi, F; Flix, J; Gomez-Ceballos, G; Hahn, K; Gowdy, S

    2010-01-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, data serving capacity to Tier-2 centres for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s including the stable operations of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high performance data serving. We will also present the operations experience utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  8. The US-CMS Tier-1 Center Network Evolving toward 100Gbps

    International Nuclear Information System (INIS)

    Bobyshev, A; DeMar, P

    2011-01-01

    Fermilab hosts the US Tier-1 Center for the LHC's Compact Muon Solenoid (CMS) experiment. The Tier-1s are the central points for the processing and movement of LHC data. They sink raw data from the Tier-0 at CERN, process and store it locally, and then distribute the processed data to Tier-2s for simulation studies and analysis. The Fermilab Tier-1 Center is the largest of the CMS Tier-1s, accounting for roughly 35% of the experiment's Tier-1 computing and storage capacity. Providing capacious, resilient network services, both in terms of local network infrastructure and off-site data movement capabilities, presents significant challenges. This article will describe the current architecture, status, and near term plans for network support of the US-CMS Tier-1 facility.

  9. CMS tier structure and operation of the experiment-specific tasks in Germany

    International Nuclear Information System (INIS)

    Nowack, A

    2008-01-01

    In Germany, several university institutes and research centres take part in the CMS experiment. Concerning the data analysis, a couple of computing centres at different Tier levels, ranging from Tier 1 to Tier 3, exist at these places. The German Tier 1 centre GridKa at the research centre at Karlsruhe serves all four LHC experiments as well as four non-LHC experiments. With respect to the CMS experiment, GridKa is mainly involved in central tasks. The Tier 2 centre in Germany consists of two sites, one at the research centre DESY at Hamburg and one at RWTH Aachen University, forming a federated Tier 2 centre. Both parts cover different aspects of a Tier 2 centre. The German Tier 3 centres are located at the research centre DESY at Hamburg, at RWTH Aachen University, and at the University of Karlsruhe. Furthermore, the building of a German user analysis facility is planned. Since the CMS community in Germany is rather small, a good cooperation between the different sites is essential. This cooperation includes physics topics as well as technical and operational issues. All available communication channels such as email, phone, monthly video conferences, and regular personal meetings are used. For example, the distribution of data sets is coordinated globally within Germany. Also the CMS-specific services such as the data transfer tool PhEDEx or the Monte Carlo production are operated by people from different sites in order to spread the knowledge widely and increase the redundancy in terms of operators.

  10. Tier-1 and Tier-2 real-time analysis experience in CMS Data Challenge 2004

    CERN Document Server

    De Filippis, N; Pierro, A; Silvestris, L; Fanfani, A; Grandi, C; Hernández, J M; Bonacorsi, D; Corvo, M; Fanzago, F

    2005-01-01

    During the CMS Data Challenge 2004 a real-time analysis was attempted at INFN and PIC Tier-1 and Tier-2s in order to test the ability of the instrumented methods to quickly process the data. Several agents and automatic procedures were implemented to perform the analysis at the Tier-1/2 synchronously with the data transfer from Tier-0 at CERN. The system was implemented in the LCG-2 Grid environment and allowed on-the-fly job preparation and subsequent submission to the Resource Broker as new data came along. Running jobs accessed data from the Storage Elements via remote file protocols, whenever possible, or copied them locally with replica manager commands. Details of the procedures adopted to run the analysis jobs and the expected results are described. An evaluation of the ability of the system to maintain an analysis rate at Tier-1 and Tier-2 comparable with the data transfer rate is also presented. The results on the analysis timeline, the statistics of submitted jobs, the overall efficiency of the GRID ...
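
    The on-the-fly job preparation described above amounts to an agent that watches for newly transferred data and submits an analysis job for each new file. Below is a minimal Python sketch of such a polling agent; the directory layout, the file pattern and the submit_job stub are assumptions for illustration only and do not reproduce the actual LCG-2 agents used in the Data Challenge.

      import time
      from pathlib import Path

      def submit_job(data_file: Path) -> None:
          # Stand-in for job preparation and submission to the Resource Broker
          # (hypothetical stub; the real system built and submitted Grid jobs here).
          print(f"submitting analysis job for {data_file.name}")

      def watch_and_submit(incoming: Path, polls: int = 3, interval: float = 1.0) -> None:
          # Poll a landing directory and submit one analysis job per newly arrived file,
          # so the analysis proceeds synchronously with the data transfer.
          seen: set = set()
          for _ in range(polls):
              if incoming.is_dir():
                  for f in sorted(incoming.glob("*.root")):
                      if f.name not in seen:
                          seen.add(f.name)
                          submit_job(f)
              time.sleep(interval)

      if __name__ == "__main__":
          watch_and_submit(Path("/tmp/incoming_data"), polls=1, interval=0.1)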

  11. The CMS experiment workflows on StoRM based storage at Tier-1 and Tier-2 centers

    International Nuclear Information System (INIS)

    Bonacorsi, D; Bartolome, I Cabrillo; Matorras, F; Gonzalez Caballero, I; Sartirana, A

    2010-01-01

    Approaching LHC data taking, the CMS experiment is deploying, commissioning and operating the building tools of its grid-based computing infrastructure. The commissioning program includes testing, deployment and operation of various storage solutions to support the computing workflows of the experiment. Recently, some of the Tier-1 and Tier-2 centers supporting the collaboration have started to deploy StoRM based storage systems. These are POSIX-based disk storage systems on top of which StoRM implements the Storage Resource Manager (SRM) version 2 interface, allowing for standards-based access from the Grid. In this note we briefly describe the experience so far achieved at the CNAF Tier-1 center and at the IFCA Tier-2 center.

  12. WHALE, a management tool for Tier-2 LCG sites

    Science.gov (United States)

    Barone, L. M.; Organtini, G.; Talamo, I. G.

    2012-12-01

    The LCG (Worldwide LHC Computing Grid) is a grid-based hierarchical distributed computing facility, composed of more than 140 computing centers, organized in 4 tiers by size and offer of services. Every site, although independent for many technical choices, has to provide services with a well-defined set of interfaces. For this reason, different LCG sites frequently need to manage very similar situations, like job behaviour on the batch system, dataset transfers between sites, operating system and experiment software installation and configuration, and monitoring of services. In this context we created WHALE (WHALE Handles Administration in an LCG Environment), a software tool currently used at the T2_IT_Rome site, an LCG Tier-2 for the CMS experiment. WHALE is a generic, site-independent tool written in Python: it allows the administrator to interact in a uniform and coherent way with several subsystems using a high-level syntax which hides specific commands. The architecture of WHALE is based on the plugin concept and on the possibility of connecting the output of a plugin to the input of the next one, in a pipe-like system, giving the administrator the possibility of building complex functions by combining simpler ones. The core of WHALE just handles the plugin orchestration, while even the basic functions (e.g. the WHALE activity logging) are performed by plugins, giving the capability to tune and possibly modify every component of the system. WHALE already provides many plugins useful for an LCG site and some more for a Tier-2 of the CMS experiment, especially in the field of job management, dataset transfer and analysis of performance results and availability tests (e.g. Nagios tests, SAM tests). Thanks to its architecture and the provided plugins, WHALE makes it easy to perform tasks that, even if logically simple, are technically complex or tedious, like e.g. closing all the worker nodes with a job-failure rate greater than a given threshold. Finally, thanks to the
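
    To make the pipe-like plugin composition concrete, here is a minimal Python sketch in which each plugin's output becomes the next plugin's input; the plugin names and data are invented for the example and do not reproduce WHALE's actual API.

      from typing import Callable, Iterable

      Plugin = Callable[[list], list]   # every plugin consumes and produces a list

      def list_worker_nodes(_: list) -> list:
          # Toy "source" plugin: emit (node, job_failure_rate) pairs.
          return [("wn01", 0.05), ("wn02", 0.40), ("wn03", 0.10)]

      def failure_rate_above(threshold: float) -> Plugin:
          # Toy "filter" plugin factory: keep only nodes whose failure rate exceeds the threshold.
          def _filter(nodes: list) -> list:
              return [(name, rate) for name, rate in nodes if rate > threshold]
          return _filter

      def close_nodes(nodes: list) -> list:
          # Toy "action" plugin: pretend to drain the selected worker nodes.
          for name, rate in nodes:
              print(f"closing {name} (failure rate {rate:.0%})")
          return nodes

      def run_pipeline(plugins: Iterable[Plugin]) -> list:
          # Core orchestration: pipe each plugin's output into the next plugin's input.
          data: list = []
          for plugin in plugins:
              data = plugin(data)
          return data

      if __name__ == "__main__":
          # e.g. "close all the worker nodes with a job-failure rate greater than 30%"
          run_pipeline([list_worker_nodes, failure_rate_above(0.30), close_nodes])

    The final pipeline mirrors the example quoted in the abstract: select worker nodes whose job-failure rate exceeds a threshold and close them, by simply composing simpler plugins.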

  13. WHALE, a management tool for Tier-2 LCG sites

    International Nuclear Information System (INIS)

    Barone, L M; Organtini, G; Talamo, I G

    2012-01-01

    The LCG (Worldwide LHC Computing Grid) is a grid-based hierarchical distributed computing facility, composed of more than 140 computing centers, organized in 4 tiers by size and offer of services. Every site, although independent for many technical choices, has to provide services with a well-defined set of interfaces. For this reason, different LCG sites frequently need to manage very similar situations, like job behaviour on the batch system, dataset transfers between sites, operating system and experiment software installation and configuration, and monitoring of services. In this context we created WHALE (WHALE Handles Administration in an LCG Environment), a software tool currently used at the T2_IT_Rome site, an LCG Tier-2 for the CMS experiment. WHALE is a generic, site-independent tool written in Python: it allows the administrator to interact in a uniform and coherent way with several subsystems using a high-level syntax which hides specific commands. The architecture of WHALE is based on the plugin concept and on the possibility of connecting the output of a plugin to the input of the next one, in a pipe-like system, giving the administrator the possibility of building complex functions by combining simpler ones. The core of WHALE just handles the plugin orchestration, while even the basic functions (e.g. the WHALE activity logging) are performed by plugins, giving the capability to tune and possibly modify every component of the system. WHALE already provides many plugins useful for an LCG site and some more for a Tier-2 of the CMS experiment, especially in the field of job management, dataset transfer and analysis of performance results and availability tests (e.g. Nagios tests, SAM tests). Thanks to its architecture and the provided plugins, WHALE makes it easy to perform tasks that, even if logically simple, are technically complex or tedious, like e.g. closing all the worker nodes with a job-failure rate greater than a given threshold. Finally, thanks to the

  14. The architecture and operation of the CMS Tier-0

    International Nuclear Information System (INIS)

    Hufnagel, Dirk

    2011-01-01

    The Tier-0 processing system is the initial stage of the multi-tiered computing system of CMS. It takes care of the first processing steps of data at the LHC at CERN. The automated workflows running in the Tier-0 contain both low-latency processing chains for time-critical applications and bulk chains to archive the recorded data off-site from the host laboratory. It is a mix between an online and an offline system, because the data the CMS DAQ writes out initially is of a temporary nature. Most of the complexity in the design of this system comes from this unique combination of online and offline use cases and dependencies. In this talk, we present the software design of the CMS Tier-0 system and an analysis of the 24/7 operation of the system in the 2009/2010 data taking periods.

  15. CMS distributed computing workflow experience

    Science.gov (United States)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D.; Prosper, Harrison B.; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao, Junhui; Pin, Arnaud; Schul, Nicolas; De Lentdecker, Gilles; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey; Barge, Derek; Lahiff, Andrew

    2011-12-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which include re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources and the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation.

  16. CMS distributed computing workflow experience

    International Nuclear Information System (INIS)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D; Prosper, Harrison B; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao Junhui; Pin, Arnaud; Schul, Nicolas; Lentdecker, Gilles De; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey

    2011-01-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which include re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources and the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation.

  17. Scaling up a CMS tier-3 site with campus resources and a 100 Gb/s network connection: what could go wrong?

    Science.gov (United States)

    Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Tovar, Benjamin; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2017-10-01

    The University of Notre Dame (ND) CMS group operates a modest-sized Tier-3 site suitable for local, final-stage analysis of CMS data. However, through the ND Center for Research Computing (CRC), Notre Dame researchers have opportunistic access to roughly 25k CPU cores of computing and a 100 Gb/s WAN network link. To understand the limits of what might be possible in this scenario, we undertook to use these resources for a wide range of CMS computing tasks from user analysis through large-scale Monte Carlo production (including both detector simulation and data reconstruction.) We will discuss the challenges inherent in effectively utilizing CRC resources for these tasks and the solutions deployed to overcome them.

  18. CMS Experiment Data Processing at RDMS CMS Tier 2 Centers

    CERN Document Server

    Gavrilov, V; Korenkov, V; Tikhonenko, E; Shmatov, S; Zhiltsov, V; Ilyin, V; Kodolova, O; Levchuk, L

    2012-01-01

    The Russia and Dubna Member States (RDMS) CMS collaboration was founded in 1994 [1]. RDMS CMS takes an active part in the Compact Muon Solenoid (CMS) Collaboration [2] at the Large Hadron Collider (LHC) [3] at CERN [4]. The RDMS CMS collaboration joins more than twenty institutes from Russia and the Joint Institute for Nuclear Research (JINR) member states. RDMS scientists, engineers and technicians actively participated in the design, construction and commissioning of all CMS sub-detectors in the forward regions. The RDMS CMS physics program has been developed taking into account the essential role of these sub-detectors for the corresponding physics channels. RDMS scientists made a large contribution to the preparation of QCD, Electroweak, Exotics, Heavy Ion and other physics studies at CMS. An overview of RDMS CMS physics tasks and RDMS CMS computing activities is presented in [5-11]. RDMS CMS computing support should satisfy the LHC data processing and analysis requirements at the running phase of the CMS experime...

  19. CMS Distributed Computing Workflow Experience

    CERN Document Server

    Haas, Jeffrey David

    2010-01-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which contain re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simul...

  20. CMS results in the Combined Computing Readiness Challenge CCRC'08

    International Nuclear Information System (INIS)

    Bonacorsi, D.; Bauerdick, L.

    2009-01-01

    During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the computing infrastructure for LHC data taking. Another set of major CMS tests, the Computing, Software and Analysis challenge (CSA'08), as well as CMS cosmic runs, were running at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data-taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model, with very comprehensive tests. During May 2008, CMS moved more than 3.6 Petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centers: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at sufficient rate, regionally as well as inter-regionally, achieving all goals in about 90% of >200 links. Simultaneously, CMS also ran a large Tier-2 analysis exercise, where realistic analysis jobs were submitted to a large set of Tier-2 sites by a large number of people to produce a chaotic workload across the systems, with more than 400 analysis users in May. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks up to 200k jobs/day. The achieved results in CCRC'08 - focussing on the distributed
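
    As a back-of-the-envelope check of the quoted export rates (a simple unit conversion of the figures in the abstract, nothing more):

      # Convert the CCRC'08 rates quoted above into daily and weekly volumes.
      SECONDS_PER_DAY = 86_400

      sustained_tb_per_day = 600e6 * SECONDS_PER_DAY / 1e12   # 600 MB/s daily average
      seven_day_pb = sustained_tb_per_day * 7 / 1000          # volume over the 7-day window
      peak_tb_per_day = 1.7e9 * SECONDS_PER_DAY / 1e12        # if the 1.7 GB/s hourly peak were sustained

      print(f"sustained: {sustained_tb_per_day:.1f} TB/day")  # ~51.8 TB/day
      print(f"over 7 days: {seven_day_pb:.2f} PB")            # ~0.36 PB exported from CERN alone
      print(f"at peak rate: {peak_tb_per_day:.0f} TB/day")    # ~147 TB/day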

  1. CMS Data Transfer operations after the first years of LHC collisions

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The CMS experiment possesses a distributed computing infrastructure and its performance heavily depends on the fast and smooth distribution of data between the different CMS sites. Data must be transferred from the Tier-0 (CERN) to the Tier-1 sites for storage and archiving, and timeliness and good transfer quality are vital to avoid overflowing the CERN storage buffers. At the same time, processed data has to be distributed from Tier-1 sites to all Tier-2 sites for physics analysis, while Monte Carlo simulations are synchronized back to Tier-1 sites for further archival. At the core of the transfer machinery is the PhEDEx (Physics Experiment Data Export) data transfer system. It is very important to ensure reliable operation of the system, and the operational tasks comprise monitoring and debugging all transfer issues. Based on transfer quality information, the Site Readiness tool is used to create plans for resource utilization in the future. We review the operational procedures created to enforce reliable data delivery to CMS distributed sites all ov...

  2. Deployment of the CMS software on the WLCG Grid

    International Nuclear Information System (INIS)

    Behrenhoff, W; Wissing, C; Kim, B; Blyweert, S; D'Hondt, J; Maes, J; Maes, M; Mulders, P Van; Villella, I; Vanelderen, L

    2011-01-01

    The CMS Experiment is taking high energy collision data at CERN. The computing infrastructure used to analyse the data is distributed around the world in a tiered structure. In order to use the 7 Tier-1 sites, the 50 Tier-2 sites and a still growing number of about 30 Tier-3 sites, the CMS software has to be available at those sites. Except for a very few sites, the deployment and the removal of CMS software is managed centrally. Since the deployment team has no local accounts at the remote sites, all installations have to be performed via Grid jobs. Via a VOMS role the job has a high priority in the batch system and gains write privileges to the software area. Due to the lack of interactive access the installation jobs must be very robust against possible failures, in order not to leave a broken software installation. The CMS software is packaged in RPMs that are installed in the software area independently of the host OS. The apt-get tool is used to resolve package dependencies. This paper reports on the recent deployment experience and the achieved performance.
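
    The abstract stresses that installation jobs must never leave a broken software area behind. A common way to obtain that robustness is to install into a staging directory and activate it atomically only after validation; the Python sketch below illustrates that pattern under stated assumptions (the installer callable, the validation check and the release name are placeholders, not the actual CMS deployment tooling, which drives apt/RPM inside Grid jobs).

      import os
      import shutil
      import tempfile
      from pathlib import Path

      def install_release(sw_area: Path, release: str, installer) -> None:
          # Install `release` into a staging dir, validate it, then activate atomically.
          staging = Path(tempfile.mkdtemp(prefix=f".{release}.", dir=sw_area))
          try:
              installer(staging)                               # caller-supplied install step; may raise
              if not (staging / "etc" / "profile.d").is_dir():  # toy validation of the installed tree
                  raise RuntimeError("installation incomplete")
              os.rename(staging, sw_area / release)            # atomic activation on the same filesystem
          except Exception:
              shutil.rmtree(staging, ignore_errors=True)       # never leave a half-installed release behind
              raise

      if __name__ == "__main__":
          def fake_installer(dest: Path) -> None:              # stand-in for the real apt/RPM-based step
              (dest / "etc" / "profile.d").mkdir(parents=True)

          area = Path(tempfile.mkdtemp(prefix="sw_area."))     # throwaway software area for the demo
          install_release(area, "CMSSW_X_Y_Z", fake_installer) # hypothetical release name
          print("installed releases:", [p.name for p in area.iterdir()])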

  3. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, J; Sartirana, A

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on thei...

  4. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, Jose

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on the...

  5. VIPRAM_L1CMS: a 2-Tier 3D Architecture for Pattern Recognition for Track Finding

    Energy Technology Data Exchange (ETDEWEB)

    Hoff, J. R. [Fermilab]; Joshi, S. [Northwestern U.]; Liu [Fermilab]; Olsen, J. [Fermilab]; Shenai, A. [Fermilab]

    2017-06-15

    In HEP tracking trigger applications, flagging an individual detector hit is not important. Rather, the path of a charged particle through many detector layers is what must be found. Moreover, given the increased luminosity projected for future LHC experiments, this type of track finding will be required within the Level 1 Trigger system. This means that future LHC experiments require not just a chip capable of high-speed track finding but also one with a high-speed readout architecture. VIPRAM_L1CMS is a 2-Tier Vertically Integrated chip designed to fulfill these requirements. It is a complete pipelined Pattern Recognition Associative Memory (PRAM) architecture including pattern recognition, result sparsification, and readout for Level 1 trigger applications in CMS, with 15-bit wide detector addresses and eight detector layers included in the track finding. Pattern recognition is based on classic Content Addressable Memories with a Current Race Scheme to reduce timing complexity and a 4-bit Selective Precharge to minimize power consumption. VIPRAM_L1CMS uses a pipelined set of priority-encoded binary readout structures to sparsify and read out active road flags at frequencies of at least 100 MHz. VIPRAM_L1CMS is designed to work directly with the Pulsar2b Architecture.
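
    To make the associative-memory idea concrete, the following Python sketch emulates, in software and in much simplified form, how a bank of eight-layer patterns is matched against the hit addresses of an event and how matched roads are flagged for readout. The bank contents and addresses are invented; the real VIPRAM_L1CMS performs this massively in parallel in CAM hardware, with 15-bit addresses, selective precharge and a priority-encoded readout.

      from typing import Dict, List, Set, Tuple

      N_LAYERS = 8
      Pattern = Tuple[int, ...]          # one coarse detector address per layer

      def match_roads(bank: List[Pattern], hits: Dict[int, Set[int]],
                      required_layers: int = N_LAYERS) -> List[int]:
          # Return the indices of patterns (roads) matched by the event hits.
          # hits[layer] is the set of hit addresses seen on that layer; a road matches
          # when at least `required_layers` of its per-layer addresses were hit.
          matched = []
          for road_id, pattern in enumerate(bank):
              n = sum(1 for layer, addr in enumerate(pattern) if addr in hits.get(layer, set()))
              if n >= required_layers:
                  matched.append(road_id)   # in hardware this sets the road flag for readout
          return matched

      if __name__ == "__main__":
          bank = [(3, 7, 2, 9, 1, 4, 6, 8), (3, 7, 2, 9, 1, 4, 6, 5)]
          event_hits = {0: {3}, 1: {7}, 2: {2}, 3: {9}, 4: {1}, 5: {4}, 6: {6}, 7: {8, 11}}
          print("matched roads:", match_roads(bank, event_hits))   # -> [0]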

  6. German contributions to the CMS computing infrastructure

    International Nuclear Information System (INIS)

    Scheurer, A

    2010-01-01

    The CMS computing model anticipates various hierarchically linked tier centres to counter the challenges provided by the enormous amounts of data which will be collected by the CMS detector at the Large Hadron Collider, LHC, at CERN. During the past years, various computing exercises were performed to test the readiness of the computing infrastructure, the Grid middleware and the experiment's software for the startup of the LHC which took place in September 2008. In Germany, several tier sites are set up to allow for an efficient and reliable way to simulate possible physics processes as well as to reprocess, analyse and interpret the numerous stored collision events of the experiment. It will be shown that the German computing sites played an important role during the experiment's preparation phase and during data-taking of CMS and, therefore, scientific groups in Germany will be ready to compete for discoveries in this new era of particle physics. This presentation focuses on the German Tier-1 centre GridKa, located at Forschungszentrum Karlsruhe, the German CMS Tier-2 federation DESY/RWTH with installations at the University of Aachen and the research centre DESY. In addition, various local computing resources in Aachen, Hamburg and Karlsruhe are briefly introduced as well. It will be shown that an excellent cooperation between the different German institutions and physicists led to well established computing sites which cover all parts of the CMS computing model. Therefore, the following topics are discussed and the achieved goals and the gained knowledge are depicted: data management and distribution among the different tier sites, Grid-based Monte Carlo production at the Tier-2 as well as Grid-based and locally submitted inhomogeneous user analyses at the Tier-3s. Another important task is to ensure a proper and reliable operation 24 hours a day, especially during the time of data-taking. For this purpose, the meta-monitoring tool 'HappyFace', which was

  7. Understanding the T2 traffic in CMS during Run-1

    Science.gov (United States)

    Wildish, T

    2015-12-01

    In the run-up to Run-1 CMS was operating its facilities according to the MONARC model, where data-transfers were strictly hierarchical in nature. Direct transfers between Tier-2 nodes were excluded, being perceived as operationally intensive and risky in an era where the network was expected to be a major source of errors. By the end of Run-1 wide-area networks were more capable and stable than originally anticipated. The original data-placement model was relaxed, and traffic was allowed between Tier-2 nodes. Tier-2 to Tier-2 traffic in 2012 already exceeded the amount of Tier-2 to Tier-1 traffic, so it clearly has the potential to become important in the future. Moreover, while Tier-2 to Tier-1 traffic is mostly upload of Monte Carlo data, the Tier-2 to Tier-2 traffic represents data moved in direct response to requests from the physics analysis community. As such, problems or delays there are more likely to have a direct impact on the user community. Tier-2 to Tier-2 traffic may also traverse parts of the WAN that are at the 'edge' of our network, with limited network capacity or reliability compared to, say, the Tier-0 to Tier-1 traffic which goes over the LHCOPN network. CMS is looking to exploit technologies that allow it to interact with the network fabric so that the network can manage our traffic better for us; we hope to achieve this before the end of Run-2. Tier-2 to Tier-2 traffic would be the most interesting use-case for such traffic management, precisely because it is close to the users' analysis and far from the 'core' network infrastructure. As such, a better understanding of our Tier-2 to Tier-2 traffic is important. Knowing the characteristics of our data-flows can help us place our data more intelligently. Knowing how widely the data moves can help us anticipate the requirements for network capacity, and inform the dynamic data placement algorithms we expect to have in place for Run-2. This paper presents an analysis of the CMS Tier-2 traffic during Run-1.
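
    A study of this kind essentially aggregates transfer volume by source and destination tier. The short Python sketch below shows that aggregation on an invented list of transfer records; the record fields and numbers are assumptions for illustration, not the PhEDEx schema.

      from collections import defaultdict

      # Invented transfer records: (source_node, destination_node, bytes_transferred).
      transfers = [
          ("T2_US_Nebraska", "T2_DE_DESY", 4.0e12),
          ("T2_DE_DESY", "T1_DE_KIT", 2.5e12),
          ("T2_IT_Rome", "T2_FR_GRIF", 1.0e12),
          ("T1_US_FNAL", "T2_US_UCSD", 6.0e12),
      ]

      def tier(node_name: str) -> str:
          # Extract the tier label (T0/T1/T2) from a CMS-style site name.
          return node_name.split("_", 1)[0]

      volume_by_route = defaultdict(float)
      for src, dst, size in transfers:
          volume_by_route[(tier(src), tier(dst))] += size

      for (src_tier, dst_tier), size in sorted(volume_by_route.items()):
          print(f"{src_tier} -> {dst_tier}: {size / 1e12:.1f} TB")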

  8. Improving CMS data transfers among its distributed computing facilities

    International Nuclear Information System (INIS)

    Flix, J; Magini, N; Sartirana, A

    2011-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.

  9. The Legnaro-Padova distributed Tier-2: challenges and results

    Science.gov (United States)

    Badoer, Simone; Biasotto, Massimo; Costa, Fulvia; Crescente, Alberto; Fantinel, Sergio; Ferrari, Roberto; Gulmini, Michele; Maron, Gaetano; Michelotto, Michele; Sgaravatto, Massimo; Toniolo, Nicola

    2014-06-01

    The Legnaro-Padova Tier-2 is a computing facility serving the ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and other virtual organizations of different disciplines, which can opportunistically harness idle resources if available. The unique characteristic of this Tier-2 is its topology: the computational resources are spread over two different sites, about 15 km apart: the INFN Legnaro National Laboratories and the INFN Padova unit, connected through a 10 Gbps network link (soon to be upgraded to 20 Gbps). Nevertheless these resources are seamlessly integrated and are exposed as a single computing facility. Despite this intrinsic complexity, the Legnaro-Padova Tier-2 ranks among the best Grid sites in terms of reliability and availability. The Tier-2 comprises about 190 worker nodes, providing about 26000 HS06 in total. These computing nodes are managed by the LSF local resource management system, and are accessible using a Grid-based interface implemented through multiple CREAM CE front-ends. dCache, xrootd and Lustre are the storage systems in use at the Tier-2: about 1.5 PB of disk space is available to users in total, through multiple access protocols. A 10 Gbps network link, planned to be doubled in the next months, connects the Tier-2 to the WAN. This link is used for the LHC Open Network Environment (LHCONE) and for other general purpose traffic. In this paper we discuss the experiences at the Legnaro-Padova Tier-2: the problems that had to be addressed, the lessons learned, the implementation choices. We also present the tools used for the daily management operations. These include DOCET, a Java-based webtool designed, implemented and maintained at the Legnaro-Padova Tier-2, and deployed also in other sites, such as the LHC Italian T1. DOCET provides a uniform interface to manage all the information about the physical resources of a computing center. It is also used as a documentation repository available to

  10. The Legnaro-Padova distributed Tier-2: challenges and results

    International Nuclear Information System (INIS)

    Badoer, Simone; Biasotto, Massimo; Fantinel, Sergio

    2014-01-01

    The Legnaro-Padova Tier-2 is a computing facility serving the ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and other virtual organizations of different disciplines, which can opportunistically harness idle resources if available. The unique characteristic of this Tier-2 is its topology: the computational resources are spread over two different sites, about 15 km apart: the INFN Legnaro National Laboratories and the INFN Padova unit, connected through a 10 Gbps network link (soon to be upgraded to 20 Gbps). Nevertheless these resources are seamlessly integrated and are exposed as a single computing facility. Despite this intrinsic complexity, the Legnaro-Padova Tier-2 ranks among the best Grid sites in terms of reliability and availability. The Tier-2 comprises about 190 worker nodes, providing about 26000 HS06 in total. These computing nodes are managed by the LSF local resource management system, and are accessible using a Grid-based interface implemented through multiple CREAM CE front-ends. dCache, xrootd and Lustre are the storage systems in use at the Tier-2: about 1.5 PB of disk space is available to users in total, through multiple access protocols. A 10 Gbps network link, planned to be doubled in the next months, connects the Tier-2 to the WAN. This link is used for the LHC Open Network Environment (LHCONE) and for other general purpose traffic. In this paper we discuss the experiences at the Legnaro-Padova Tier-2: the problems that had to be addressed, the lessons learned, the implementation choices. We also present the tools used for the daily management operations. These include DOCET, a Java-based webtool designed, implemented and maintained at the Legnaro-Padova Tier-2, and deployed also in other sites, such as the LHC Italian T1. DOCET provides a uniform interface to manage all the information about the physical resources of a computing center. It is also used as a documentation repository available to

  11. Storageless and caching Tier-2 models in the UK context

    Science.gov (United States)

    Cadellin Skipsey, Samuel; Dewhurst, Alastair; Crooks, David; MacMahon, Ewan; Roy, Gareth; Smith, Oliver; Mohammed, Kashif; Brew, Chris; Britton, David

    2017-10-01

    Operational and other pressures have led to WLCG experiments moving increasingly to a stratified model for Tier-2 resources, where “fat” Tier-2s (“T2Ds”) and “thin” Tier-2s (“T2Cs”) provide different levels of service. In the UK, this distinction is also encouraged by the terms of the current GridPP5 funding model. In anticipation of this, testing has been performed on the implications, and potential implementation, of such a distinction in our resources. In particular, this presentation presents the results of testing of storage T2Cs, where the “thin” nature is expressed by the site having either no local data storage, or only a thin caching layer; data is streamed or copied from a “nearby” T2D when needed by jobs. In OSG, this model has been adopted successfully for CMS AAA sites; but the network topology and capacity in the USA is significantly different to that in the UK (and much of Europe). We present the results of several operational tests: the in-production University College London (UCL) site, which runs ATLAS workloads using storage at the Queen Mary University of London (QMUL) site; the Oxford site, which has had scaling tests performed against T2Ds in various locations in the UK (to test network effects); and the Durham site, which has been testing the specific ATLAS caching solution of “Rucio Cache” integration with ARC’s caching layer.

  12. Storage element performance optimization for CMS analysis jobs

    International Nuclear Information System (INIS)

    Behrmann, G; Dahlblom, J; Guldmyr, J; Happonen, K; Lindén, T

    2012-01-01

    Tier-2 computing sites in the Worldwide Large Hadron Collider Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data that needs to be processed from the Large Hadron Collider (LHC) experiments requires good and efficient use of the available resources. Having a good CPU efficiency for the end users' analysis jobs requires that the performance of the storage system is able to scale with I/O requests from hundreds or even thousands of simultaneous jobs. In this presentation we report on the work on improving the SE performance at the Helsinki Institute of Physics (HIP) Tier-2 used for the Compact Muon Solenoid (CMS) experiment at the LHC. Statistics from CMS grid jobs are collected and stored in the CMS Dashboard for further analysis, which allows for easy performance monitoring by the sites and by the CMS collaboration. As part of the monitoring framework CMS uses the JobRobot, which sends 100 analysis jobs to each site every four hours. CMS also uses the HammerCloud tool for site monitoring and stress testing, and it has replaced the JobRobot. The performance of the analysis workflow submitted with JobRobot or HammerCloud can be used to track the performance due to site configuration changes, since the analysis workflow is kept the same for all sites and for months in time. The CPU efficiency of the JobRobot jobs at HIP was increased by approximately 50% to more than 90%, by tuning the SE and by improvements in the CMSSW and dCache software. The performance of the CMS analysis jobs improved significantly too. Similar work has been done at other CMS Tier sites, since on average the CPU efficiency for CMSSW jobs has increased during 2011. Better monitoring of the SE allows faster detection of problems, so that the performance level can be kept high. The next storage upgrade at HIP consists of SAS disk enclosures which can be stress tested on demand with HammerCloud workflows, to make sure that the I
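
    The figure of merit discussed above is the CPU efficiency of a job, i.e. CPU time divided by wall-clock time. A minimal Python sketch of the per-job and site-wide aggregation, using made-up job records rather than real Dashboard data:

      # Toy job records: (cpu_time_seconds, wall_time_seconds), as one might pull from the CMS Dashboard.
      jobs = [(3500, 3700), (1200, 2400), (7100, 7300), (900, 3600)]

      def cpu_efficiency(cpu_s: float, wall_s: float) -> float:
          # CPU efficiency of a single job: CPU time over wall-clock time.
          return cpu_s / wall_s if wall_s > 0 else 0.0

      per_job = [cpu_efficiency(c, w) for c, w in jobs]
      overall = sum(c for c, _ in jobs) / sum(w for _, w in jobs)   # time-weighted site average

      print("per-job efficiencies:", [f"{e:.0%}" for e in per_job])
      print(f"overall CPU efficiency: {overall:.0%}")               # low values hint at I/O-bound jobs / SE tuning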

  13. Performance studies and improvements of CMS distributed data transfers

    International Nuclear Information System (INIS)

    Bonacorsi, D; Flix, J; Kaselis, R; Magini, N; Letts, J; Sartirana, A

    2012-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered distributed infrastructures. For data distribution, the CMS experiment relies on the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centers and used by all the computing sites in CMS, subject to established CMS and site setup policies, including all the virtual organizations making use of the Grid resources at the site, and are properly dimensioned to satisfy all the requirements for them. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer routes, and of the sharing and interference with other VOs using the same FTS transfer managers. This contribution deals with a complete revision of all FTS servers used by CMS, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels, as well as performance studies for all kinds of transfer routes, including overhead measurements introduced by SRM servers and storage systems, FTS server misconfigurations and identification of congested channels, historical transfer throughputs per stream, file-latency studies,… This information is retrieved directly from the FTS servers through the FTS Monitor webpages and conveniently archived for further analysis. The project provides an interface for all these values, to ease the analysis of the data.
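
    To give an idea of the kind of per-route analysis described, the sketch below computes per-link throughput and worst-case file latency from invented transfer records and flags slow links; the record format and the 10 MB/s threshold are assumptions, not the FTS Monitor schema or a CMS policy.

      from collections import defaultdict

      # Invented transfer records: (link, file_size_bytes, transfer_seconds);
      # the real data would come from the FTS Monitor / PhEDEx archives.
      records = [
          ("T1_ES_PIC->T2_ES_CIEMAT", 2.0e9, 80),
          ("T1_ES_PIC->T2_ES_CIEMAT", 2.0e9, 95),
          ("T1_ES_PIC->T2_ES_IFCA", 2.0e9, 300),
          ("T1_ES_PIC->T2_ES_IFCA", 2.0e9, 420),
      ]

      per_link = defaultdict(list)
      for link, size, seconds in records:
          per_link[link].append((size, seconds))

      for link, xfers in sorted(per_link.items()):
          throughput_mb_s = sum(s for s, _ in xfers) / sum(t for _, t in xfers) / 1e6
          worst_latency = max(t for _, t in xfers)        # slowest file seen on this link
          flag = "  <-- candidate congested channel" if throughput_mb_s < 10 else ""
          print(f"{link}: {throughput_mb_s:.1f} MB/s, worst file latency {worst_latency}s{flag}")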

  14. User and group storage management at the CMS CERN T2 centre

    Science.gov (United States)

    Cerminara, G.; Franzoni, G.; Pfeiffer, A.

    2015-12-01

    A wide range of detector commissioning, calibration and data analysis tasks is carried out by CMS using dedicated storage resources available at the CMS CERN Tier-2 centre. Relying on the functionalities of the EOS disk-only storage technology, the optimal exploitation of the CMS user/group resources has required the introduction of policies for data access management, data protection, cleanup campaigns based on access patterns, and long term tape archival. The resource management has been organised around the definition of working groups and the delegation of the composition of each group to an identified responsible person. In this paper we illustrate the user/group storage management, and the development and operational experience at the CMS CERN Tier-2 centre in the 2012-2015 period.
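
    Cleanup campaigns based on access patterns, as mentioned above, typically reduce to selecting files that have been idle for a long time within groups that are over quota. The Python sketch below illustrates such a selection on invented metadata; the paths, quotas and thresholds are hypothetical, and this is neither the actual CERN T2 policy nor the EOS API.

      from datetime import datetime, timedelta

      # Invented file metadata: (path, owning_group, size_bytes, last_access).
      now = datetime(2015, 6, 1)
      files = [
          ("/store/group/higgs/ntuple_a.root", "higgs", 5e10, now - timedelta(days=400)),
          ("/store/group/higgs/ntuple_b.root", "higgs", 2e10, now - timedelta(days=10)),
          ("/store/group/susy/skim.root", "susy", 8e10, now - timedelta(days=200)),
      ]
      group_quota = {"higgs": 6e10, "susy": 1e11}          # hypothetical per-group quotas

      def cleanup_candidates(files, quotas, max_idle_days=180):
          # Files idle longer than `max_idle_days`, considered only for groups above quota.
          used = {}
          for _, group, size, _ in files:
              used[group] = used.get(group, 0.0) + size
          cutoff = timedelta(days=max_idle_days)
          return [path for path, group, _, last in files
                  if used[group] > quotas[group] and (now - last) > cutoff]

      print(cleanup_candidates(files, group_quota))        # -> ['/store/group/higgs/ntuple_a.root']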

  15. User and group storage management at the CMS CERN T2 centre

    CERN Document Server

    Cerminara, G; Pfeiffer, A

    2015-01-01

    A wide range of detector commissioning, calibration and data analysis tasks is carried out by CMS using dedicated storage resources available at the CMS CERN Tier-2 centre. Relying on the functionalities of the EOS disk-only storage technology, the optimal exploitation of the CMS user/group resources has required the introduction of policies for data access management, data protection, cleanup campaigns based on access patterns, and long term tape archival. The resource management has been organised around the definition of working groups and the delegation of the composition of each group to an identified responsible person. In this paper we illustrate the user/group storage management, and the development and operational experience at the CMS CERN Tier-2 centre in the 2012-2015 period.

  16. CMS readiness for multi-core workload scheduling

    Science.gov (United States)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.

    2017-10-01

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
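
    The core of the scheduling problem described above is fitting a mix of single-core and multi-core payloads into the cores of a partitionable pilot. The following Python sketch shows a simple first-fit allocation over such pilots; it is a toy model of the idea, not the HTCondor/GlideinWMS implementation.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Pilot:
          # A partitionable pilot: a block of cores that can be sliced into job slots.
          total_cores: int
          assigned: List[int] = field(default_factory=list)   # core counts of the jobs it runs

          @property
          def free_cores(self) -> int:
              return self.total_cores - sum(self.assigned)

      def schedule(job_core_requests: List[int], pilots: List[Pilot]) -> List[int]:
          # First-fit: place each job (by requested cores) on the first pilot with room.
          unmatched = []
          for cores in job_core_requests:
              for pilot in pilots:
                  if pilot.free_cores >= cores:
                      pilot.assigned.append(cores)   # dynamically partition the pilot
                      break
              else:
                  unmatched.append(cores)            # no pilot can host this job right now
          return unmatched

      if __name__ == "__main__":
          pilots = [Pilot(8), Pilot(8)]
          leftover = schedule([8, 1, 1, 4, 8], pilots)
          print([p.assigned for p in pilots], "unmatched:", leftover)
          # -> [[8], [1, 1, 4]] unmatched: [8]   (single- and multi-core jobs share pilots)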

  17. CMS Readiness for Multi-Core Workload Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Calero Yzquierdo, A. [Madrid, CIEMAT]; Balcas, J. [Caltech]; Hernandez, J. [Madrid, CIEMAT]; Aftab Khan, F. [NCP, Islamabad]; Letts, J. [UC, San Diego]; Mason, D. [Fermilab]; Verguilov, V. [CLMI, Sofia]

    2017-11-22

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  18. The CMS Integration Grid Testbed

    CERN Document Server

    Graham, G E; Aziz, Shafqat; Bauerdick, L.A.T.; Ernst, Michael; Kaiser, Joseph; Ratnikova, Natalia; Wenzel, Hans; Wu, Yu-jun; Aslakson, Erik; Bunn, Julian; Iqbal, Saima; Legrand, Iosif; Newman, Harvey; Singh, Suresh; Steenberg, Conrad; Branson, James; Fisk, Ian; Letts, James; Arbree, Adam; Avery, Paul; Bourilkov, Dimitri; Cavanaugh, Richard; Rodriguez, Jorge Luis; Kategari, Suchindra; Couvares, Peter; DeSmet, Alan; Livny, Miron; Roy, Alain; Tannenbaum, Todd

    2003-01-01

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent based Mona Lisa. Domain specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two month span in Fall of 2002, over 1 million official CMS GEANT based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. ...

  19. Understanding the T2 traffic in CMS during Run-1

    CERN Document Server

    Wildish, T

    2015-01-01

    In the run-up to Run-1 CMS was operating its facilities according to the MONARC model, where data-transfers were strictly hierarchical in nature. Direct transfers between Tier-2 nodes were excluded, being perceived as operationally intensive and risky in an era where the network was expected to be a major source of errors. By the end of Run-1 wide-area networks were more capable and stable than originally anticipated. The original data-placement model was relaxed, and traffic was allowed between Tier-2 nodes. Tier-2 to Tier-2 traffic in 2012 already exceeded the amount of Tier-2 to Tier-1 traffic, so it clearly has the potential to become important in the future. Moreover, while Tier-2 to Tier-1 traffic is mostly upload of Monte Carlo data, the Tier-2 to Tier-2 traffic represents data moved in direct response to requests from the physics analysis community. As such, problems or delays there are more likely to have a direct impact on the user community. Tier-2 to Tier-2 traffic may also traverse parts of the WAN ...

  20. Opportunistic resource usage in CMS

    International Nuclear Information System (INIS)

    Kreuzer, Peter; Hufnagel, Dirk; Dykstra, D; Gutsche, O; Tadel, M; Sfiligoi, I; Letts, J; Wuerthwein, F; McCrea, A; Bockelman, B; Fajardo, E; Linares, L; Wagner, R; Konstantinov, P; Blumenfeld, B; Bradley, D

    2014-01-01

    CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in WLCG. These sites pledge resources to CMS and are preparing them especially for CMS to run the experiment's applications. But there are more resources available opportunistically both on the GRID and in local university and research clusters which can be used for CMS applications. We will present CMS' strategy to use opportunistic resources and prepare them dynamically to run CMS applications. CMS is able to run its applications on resources that can be reached through the GRID, through EC2 compliant cloud interfaces. Even resources that can be used through ssh login nodes can be harnessed. All of these usage modes are integrated transparently into the GlideIn WMS submission infrastructure, which is the basis of CMS' opportunistic resource usage strategy. Technologies like Parrot to mount the software distribution via CVMFS and xrootd for access to data and simulation samples via the WAN are used and will be described. We will summarize the experience with opportunistic resource usage and give an outlook for the restart of LHC data taking in 2015.

  1. Opportunistic usage of the CMS online cluster using a cloud overlay

    CERN Document Server

    Chaze, Olivier; Andronidis, Anastasios; Behrens, Ulf; Branson, James; Brummer, Philipp; Contescu, Alexandru-Cristian; Cittolin, Sergio; Craigs, Benjamin; Darlea, Georgiana-Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, M; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan Richard; Gigi, Dominique; Glege, Frank; Gomez-Ceballos, Guillelmo; Hegeman, Jeroen; Holzner, Andre Georg; Jimenez-Estupiñán, Raul; Masetti, Lorenzo; Meijers, Frans; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph; Pieri, Marco; Racz, Attila; Sakulin, Hannes; Schwick, Christoph; Reis, Thomas; Simelevicius, Dainius; Zejdl, Petr

    2016-01-01

    After two years of maintenance and upgrade, the Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, has started its second three year run. Around 1500 computers make up the CMS (Compact Muon Solenoid) Online cluster. This cluster is used for Data Acquisition of the CMS experiment at CERN, selecting and sending to storage around 20 TBytes of data per day that are then analysed by the Worldwide LHC Computing Grid (WLCG) infrastructure that links hundreds of data centres worldwide. 3000 CMS physicists can access and process data, and are always seeking more computing power and data. The backbone of the CMS Online cluster is composed of 16000 cores which provide as much computing power as all CMS WLCG Tier1 sites (352K HEP-SPEC-06 score in the CMS cluster versus 300K across CMS Tier1 sites). The computing power available in the CMS cluster can significantly speed up the processing of data, so an effort has been made to allocate the resources of the CMS Online cluster to t...

  2. Grid Interoperation with ARC Middleware for the CMS Experiment

    CERN Document Server

    Edelmann, Erik; Frey, Jaime; Gronager, Michael; Happonen, Kalle; Johansson, Daniel; Kleist, Josva; Klem, Jukka; Koivumaki, Jesper; Linden, Tomas; Pirinen, Antti; Qing, Di

    2010-01-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developi...

  3. The CMS integration grid testbed

    Energy Technology Data Exchange (ETDEWEB)

    Graham, Gregory E.

    2004-08-26

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent based Mona Lisa. Domain specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two month span in Fall of 2002, over 1 million official CMS GEANT based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids.

  4. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on the achievement of the maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community

  5. Grid Interoperation with ARC middleware for the CMS experiment

    International Nuclear Information System (INIS)

    Edelmann, Erik; Groenager, Michael; Johansson, Daniel; Kleist, Josva; Field, Laurence; Qing, Di; Frey, Jaime; Happonen, Kalle; Klem, Jukka; Koivumaeki, Jesper; Linden, Tomas; Pirinen, Antti

    2010-01-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developing specific ARC plugins in CMS software.

  6. Grid Interoperation with ARC middleware for the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Edelmann, Erik; Groenager, Michael; Johansson, Daniel; Kleist, Josva [Nordic DataGrid Facility, Kastruplundgade 22, 1., DK-2770 Kastrup (Denmark); Field, Laurence; Qing, Di [CERN, CH-1211 Geneve 23 (Switzerland); Frey, Jaime [University of Wisconsin-Madison, 1210 W. Dayton St., Madison, WI (United States); Happonen, Kalle; Klem, Jukka; Koivumaeki, Jesper; Linden, Tomas; Pirinen, Antti, E-mail: Jukka.Klem@cern.c [Helsinki Institute of Physics, PO Box 64, FIN-00014 University of Helsinki (Finland)

    2010-04-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developing specific ARC plugins in CMS software.

  7. The commissioning of CMS sites: Improving the site reliability

    International Nuclear Information System (INIS)

    Belforte, S; Fisk, I; Flix, J; Hernandez, J M; Klem, J; Letts, J; Magini, N; Saiz, P; Sciaba, A

    2010-01-01

    The computing system of the CMS experiment works using distributed resources from more than 60 computing centres worldwide. These centres, located in Europe, America and Asia are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established a procedure to extensively test all relevant aspects of a Grid site, such as the ability to efficiently use their network to transfer data, the functionality of all the site services relevant for CMS and the capability to sustain the various CMS computing workflows at the required scale. This contribution describes in detail the procedure to rate CMS sites depending on their performance, including the complete automation of the program, the description of monitoring tools, and its impact in improving the overall reliability of the Grid from the point of view of the CMS computing system.

  8. Distributed Grid Experiences in CMS DC04

    CERN Document Server

    Fanfani, A; Grandi, C; Legrand, I; Suresh, S; Campana, S; Donno, F; Jank, W; Sinanis, N; Sciabà, A; García-Abia, P; Hernández, J; Ernst, M; Anzar, A; Fisk, I; Giacchetti, L; Graham, G; Heavey, A; Kaiser, J; Kuropatine, N; Perelmutov, T; Pordes, R; Ratnikova, N; Weigand, J; Wu, Y; Colling, D J; MacEvoy, B; Tallini, H; Wakefield, L; De Filippis, N; Donvito, G; Maggi, G; Bonacorsi, D; Dell'Agnello, L; Martelli, B; Biasotto, M; Fantinel, S; Corvo, M; Fanzago, F; Mazzucato, M; Tuura, L; Martin, T; Letts, J; Bockjoo, K; Prescott, C; Rodríguez, J; Zahn, A; Bradley, D

    2005-01-01

    In March-April 2004 the CMS experiment undertook a Data Challenge (DC04). During the previous 8 months CMS undertook a large simulated event production. The goal of the challenge was to run CMS reconstruction for a sustained period at 25 Hz input rate, distribute the data to the CMS Tier-1 centers and analyze them at remote sites. Grid environments developed in Europe by the LHC Computing Grid (LCG) and in the US with Grid2003 were utilized to complete the aspects of the challenge. A description of the experiences, successes and lessons learned from both grid environments is presented.

  9. CMS Connect

    Science.gov (United States)

    Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled in batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores and another 50K to 100K CPU cores from opportunistic resources for these kinds of tasks and even though production and event processing analysis workflows are already managed by existing tools, there is still a lack of support to submit final stage condor-like analysis jobs familiar to Tier-3 or local Computing Facilities users into these distributed resources in an integrated (with other CMS services) and friendly way. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS Physics community focusing on these kinds of condor analysis jobs. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideInWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the specific developments and deployment of CMS Connect beyond the CI-Connect platform in order to integrate the service with CMS specific needs, including specific Site submission, accounting of jobs and automated reporting to standard CMS monitoring resources in an effortless way for their users.

  10. Tier 2 guidelines and remediation of Tebuthiuron on a native prairie site

    Energy Technology Data Exchange (ETDEWEB)

    Bessie, K.; Harckham, N.; Dance, T. [EBA Engineering Consultants Ltd., Calgary, AB (Canada); Burk, A. [EnCana Corp., Calgary, AB (Canada); Stephenson, G. [Stantec Consulting, Guelph, ON (Canada); Corbet, B. [Access Analytical Laboratories Inc., Calgary, AB (Canada)

    2009-10-01

    Tebuthiuron is a sterilant used to control vegetation at upstream and midstream petroleum sites. This article discussed the remediation processes used to reclaim a native prairie site contaminated with tebuthiuron. The site was located within a dry mixed grass natural area. A literature review was conducted to establish soil eco-contact guidelines specific to tebuthiuron. A site-specific ecotoxicity assessment was then conducted using a liquid chromatograph to detect tebuthiuron limits in the contaminated soils. A soil sampling technique was used to delineate the affected areas at the site. Site soils were spiked with various concentrations of tebuthiuron ranging from 0.00003 mg/kg to 3000 mg/kg. Test species included Folsomia candida, an earthworm, and 4 plant species. The study showed that the invertebrate species were less sensitive to tebuthiuron than the plant species. A groundwater assessment showed that tebuthiuron levels exceeded Tier 1 groundwater remediation guidelines. A multilayer hydro-geological model showed that remediation guidelines were orders of magnitude greater than the Tier 1 groundwater remediation guidelines. A thermal desorption technique was used to remediate the site. 7 refs., 8 figs.

  11. Proposed Tier 2 Screening Criteria and Tier 3 Field Procedures for Evaluation of Vapor Intrusion (ESTCP Cost and Performance Report)

    Science.gov (United States)

    2012-08-01

    [No abstract is available in this record extract; the surviving text is report front matter only: an acronym list and boilerplate, plus a fragment of the demonstration-site table (NIKE Battery Site PR-58, N. Kingstown, RI, Tier 2; an industrial site in southeast TX, Tier 2; one Tier 2 demonstration not completed).]

  12. SiteDB: Marshalling people and resources available to CMS

    Energy Technology Data Exchange (ETDEWEB)

    Metson, S [H.H. Wills Physics Laboratory, Bristol (United Kingdom); Bonacorsi, D [University of Bologna and INFN Bologna (Italy); Ferreira, M Dias [SPRACE (Brazil); Egeland, R [University of Minnesota, Twin Cities (United States)

    2010-04-01

    In a collaboration the size of CMS (approx. 3000 users, and almost 100 computing centres of varying size) communication and accurate information about the sites it has access to is vital in co-ordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track sites available to the collaboration, the allocation to CMS of resources available at those sites and the associations between CMS members and the sites (as either a manager/operator of the site or a member of a group associated to the site). It is used to track the roles a person has for an associated site or group. SiteDB eases the coordination load for the operations teams by providing a consistent interface to manage communication with the people working at a site, by identifying who is responsible for a given task or service at a site and by offering a uniform interface to information on CMS contacts and sites. SiteDB provides APIs and reports for other CMS tools to use to access the information it contains, for instance enabling CRAB to use 'user friendly' names when black/white listing CEs, providing role based authentication and authorisation for other web based services and populating various troubleshooting squads in external ticketing systems in use daily by CMS Computing operations.

  13. SiteDB: Marshalling people and resources available to CMS

    International Nuclear Information System (INIS)

    Metson, S; Bonacorsi, D; Ferreira, M Dias; Egeland, R

    2010-01-01

    In a collaboration the size of CMS (approx. 3000 users, and almost 100 computing centres of varying size) communication and accurate information about the sites it has access to is vital in co-ordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track sites available to the collaboration, the allocation to CMS of resources available at those sites and the associations between CMS members and the sites (as either a manager/operator of the site or a member of a group associated to the site). It is used to track the roles a person has for an associated site or group. SiteDB eases the coordination load for the operations teams by providing a consistent interface to manage communication with the people working at a site, by identifying who is responsible for a given task or service at a site and by offering a uniform interface to information on CMS contacts and sites. SiteDB provides APIs and reports for other CMS tools to use to access the information it contains, for instance enabling CRAB to use 'user friendly' names when black/white listing CEs, providing role based authentication and authorisation for other web based services and populating various troubleshooting squads in external ticketing systems in use daily by CMS Computing operations.
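
    The record above mentions that SiteDB exposes APIs used by other CMS tools. As a purely hypothetical sketch (the endpoint path, query parameter and JSON layout below are invented for illustration and are not the documented SiteDB interface), a client query for the people associated with a site could look like this in Python:

      # Hypothetical SiteDB-style REST query; URL and response shape are assumptions.
      import json
      import urllib.parse
      import urllib.request

      def site_contacts(base_url, site_name):
          url = f"{base_url}/site-associations?site={urllib.parse.quote(site_name)}"
          with urllib.request.urlopen(url) as resp:
              data = json.load(resp)
          # Assumed response: a list of {"username": ..., "role": ...} records.
          return [(rec["username"], rec["role"]) for rec in data]

      # Example call (requires a real service behind the placeholder URL):
      # for user, role in site_contacts("https://cmsweb.example/sitedb", "T2_XX_Example"):
      #     print(user, role)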

  14. Evolution of CMS Workload Management Towards Multicore Job Support

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Calero Yzquierdo, A. [Madrid, CIEMAT]; Hernández, J. M. [Madrid, CIEMAT]; Khan, F. A. [Quaid-i-Azam U.]; Letts, J. [UC, San Diego]; Majewski, K. [Fermilab]; Rodrigues, A. M. [Fermilab]; McCrea, A. [UC, San Diego]; Vaandering, E. [Fermilab]

    2015-12-23

    The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of the LHC Run 2. High-pileup complex-collision events represent a challenge for the traditional sequential programming in terms of memory and processing time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible during 2015 for the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.

  15. dCache data storage system implementations at a Tier-2 centre

    Energy Technology Data Exchange (ETDEWEB)

    Tsigenov, Oleg; Nowack, Andreas; Kress, Thomas [III. Physikalisches Institut B, RWTH Aachen (Germany)

    2009-07-01

    The experimental high energy physics groups of the RWTH Aachen University operate one of the largest Grid Tier-2 sites in the world and offer more than 2000 modern CPU cores and about 550 TB of disk space mainly to the CMS experiment and to a lesser extent to the Auger and Icecube collaborations. Running such a large data cluster requires a flexible storage system with high performance. We use dCache for this purpose and are integrated into the dCache support team to the benefit of the German Grid sites. Recently, a storage pre-production cluster has been built to study the setup and the behavior of novel dCache features within Chimera without interfering with the production system. This talk gives an overview of the practical experience gained with dCache on both the production and the testbed cluster and discusses future plans.

  16. Stability and Scalability of the CMS Global Pool: Pushing HTCondor and GlideinWMS to New Limits

    Energy Technology Data Exchange (ETDEWEB)

    Balcas, J. [Caltech]; Bockelman, B. [Nebraska U.]; Hufnagel, D. [Fermilab]; Hurtado Anampa, K. [Notre Dame U.]; Aftab Khan, F. [NCP, Islamabad]; Larson, K. [Fermilab]; Letts, J. [UC, San Diego]; Marra da Silva, J. [Sao Paulo, IFT]; Mascheroni, M. [Fermilab]; Mason, D. [Fermilab]; Perez-Calero Yzquierdo, A. [Madrid, CIEMAT]; Tiradani, A. [Fermilab]

    2017-11-22

    The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of the LHC Run II and how they were overcome.

  17. Implementing data placement strategies for the CMS experiment based on a popularity model

    CERN Multimedia

    CERN. Geneva; Barreiro Megino, Fernando Harald

    2012-01-01

    During the first two years of data taking, the CMS experiment has collected over 20 PetaBytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure on the WorldWide LHC Computing Grid. Given the increasing data volume that has to be stored and efficiently analyzed, it is a challenge for several LHC experiments to optimize and automate the data placement strategies in order to fully profit from the available network and storage resources and to facilitate daily computing operations. Building on previous experience acquired by ATLAS, we have developed the CMS Popularity Service that tracks file accesses and user activity on the grid and will serve as the foundation for the evolution of their data placement. A fully automated, popularity-based site-cleaning agent has been deployed in order to scan Tier2 sites that are reaching their space quota and suggest obsolete, unused data that can be safely deleted without disrupting analysis activity. Future work will be to demons...

  18. Implementing data placement strategies for the CMS experiment based on a popularity model

    CERN Document Server

    Giordano, Domenico

    2012-01-01

    During the first two years of data taking, the CMS experiment has collected over 20 PetaBytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure on the WorldWide LHC Computing Grid. Given the increasing data volume that has to be stored and efficiently analyzed, it is a challenge for several LHC experiments to optimize and automate the data placement strategies in order to fully profit from the available network and storage resources and to facilitate daily computing operations. Building on previous experience acquired by ATLAS, we have developed the CMS Popularity Service that tracks file accesses and user activity on the grid and will serve as the foundation for the evolution of their data placement. A fully automated, popularity-based site-cleaning agent has been deployed in order to scan Tier-2 sites that are reaching their space quota and suggest obsolete, unused data that can be safely deleted without disrupting analysis activity. Future work will be to demonst...

  19. Spanish ATLAS Tier-1 &Tier-2 perspective on computing over the next years

    CERN Document Server

    Gonzalez de la Hoz, Santiago; The ATLAS collaboration

    2018-01-01

    Since the beginning of the WLCG Project the Spanish ATLAS computer centres have contributed reliable and stable resources as well as personnel for the ATLAS Collaboration. Our contribution to the ATLAS Tier2s and Tier1s computing resources (disk and CPUs) in the last 10 years has been around 5%, even though the Spanish contribution to the ATLAS detector construction as well as the number of authors are both close to 3%. In 2015 an international advisory committee recommended to revise our contribution according to the participation in the ATLAS experiment. With this scenario, we are optimising the federation of three sites located in Barcelona, Madrid and Valencia, taking into account that the ATLAS collaboration has developed workflows and tools to flexibly use all the resources available to the collaboration, where the Tiered structure is somehow vanishing. In this contribution, we would like to show the evolution and technical updates in the ATLAS Spanish Federated Tier2 and Tier1. Some developments w...

  20. Unified storage systems for distributed Tier-2 centres

    International Nuclear Information System (INIS)

    Cowan, G A; Stewart, G A; Elwell, A

    2008-01-01

    The start of data taking at the Large Hadron Collider will herald a new era in data volumes and distributed processing in particle physics. Data volumes of hundreds of Terabytes will be shipped to Tier-2 centres for analysis by the LHC experiments using the Worldwide LHC Computing Grid (WLCG). In many countries Tier-2 centres are distributed between a number of institutes, e.g., the geographically spread Tier-2s of GridPP in the UK. This presents a number of challenges for experiments to utilise these centres efficaciously, as CPU and storage resources may be subdivided and exposed in smaller units than the experiment would ideally want to work with. In addition, unhelpful mismatches between storage and CPU at the individual centres may be seen, which make efficient exploitation of a Tier-2's resources difficult. One method of addressing this is to unify the storage across a distributed Tier-2, presenting the centres' aggregated storage as a single system. This greatly simplifies data management for the VO, which then can access a greater amount of data across the Tier-2. However, such an approach will lead to scenarios where analysis jobs on one site's batch system must access data hosted on another site. We investigate this situation using the Glasgow and Edinburgh clusters, which are part of the ScotGrid distributed Tier-2. In particular we look at how to mitigate the problems associated with 'distant' data access and discuss the security implications of having LAN access protocols traverse the WAN between centres

  1. CMS analysis operations

    International Nuclear Information System (INIS)

    Andreeva, J; Maier, G; Spiga, D; Calloni, M; Colling, D; Fanzago, F; D'Hondt, J; Maes, J; Van Mulders, P; Villella, I; Klem, J; Letts, J; Padhi, S; Sarkar, S

    2010-01-01

    During normal data taking CMS expects to support potentially as many as 2000 analysis users. Since the beginning of 2008 there have been more than 800 individuals who submitted a remote analysis job to the CMS computing infrastructure. The bulk of these users will be supported at the over 40 CMS Tier-2 centres. Supporting a globally distributed community of users on a globally distributed set of computing clusters is a task that requires reconsidering the normal methods of user support for Analysis Operations. In 2008 CMS formed an Analysis Support Task Force in preparation for large-scale physics analysis activities. The charge of the task force was to evaluate the available support tools, the user support techniques, and the direct feedback of users with the goal of improving the success rate and user experience when utilizing the distributed computing environment. The task force determined the tools needed to assess and reduce the number of non-zero exit code applications submitted through the grid interfaces and worked with the CMS experiment dashboard developers to obtain the necessary information to quickly and proactively identify issues with user jobs and data sets hosted at various sites. Results of the analysis group surveys were compiled. Reference platforms for testing and debugging problems were established in various geographic regions. The task force also assessed the resources needed to make the transition to a permanent Analysis Operations task. In this presentation the results of the task force will be discussed as well as the CMS Analysis Operations plans for the start of data taking.

  2. Optimization of Italian CMS computing centers via MIUR funded research projects

    International Nuclear Information System (INIS)

    Boccali, T; Mazzoni, E; Donvito, G; Pompili, A; Ricca, G Della; Talamo, I; Argiro, S; Grandi, C; Bonacorsi, D; Lista, L; Fabozzi, F; Barone, L M; Santocchia, A; Riahi, H; Tricomi, A; Sgaravatto, M; Maron, G

    2014-01-01

    In 2012, 14 Italian Institutions participating in LHC Experiments (10 in CMS) won a grant from the Italian Ministry of Research (MIUR) to optimize Analysis activities and in general the Tier2/Tier3 infrastructure. A large range of activities is actively carried out: they cover data distribution over WAN, dynamic provisioning for both scheduled and interactive processing, design and development of tools for distributed data analysis, and tests on the porting of the CMS software stack to new highly performing / low power architectures.

  3. ATLAS off-Grid sites (Tier 3) monitoring. From local fabric monitoring to global overview of the VO computing activities

    CERN Document Server

    PETROSYAN, A; The ATLAS collaboration; BELOV, S; ANDREEVA, J; KADOCHNIKOV, I

    2012-01-01

    The ATLAS Distributed Computing activities have so far concentrated in the "central" part of the experiment computing system, namely the first 3 tiers (the CERN Tier0, 10 Tier1 centers and over 60 Tier2 sites). Many ATLAS Institutes and National Communities have deployed (or intend to deploy) Tier-3 facilities. Tier-3 centers consist of non-pledged resources, which are usually dedicated to data analysis tasks by the geographically close or local scientific groups, and which usually comprise a range of architectures without Grid middleware. Therefore a substantial part of the ATLAS monitoring tools, which make use of Grid middleware, cannot be used for a large fraction of Tier3 sites. The presentation will describe the T3mon project, which aims to develop a software suite for monitoring the Tier3 sites, both from the perspective of the local site administrator and that of the ATLAS VO, thereby enabling the global view of the contribution from Tier3 sites to the ATLAS computing activities. Special attention in p...

  4. Debugging data transfers in CMS

    International Nuclear Information System (INIS)

    Bagliesi, G; Belforte, S; Bloom, K; Bockelman, B; Bonacorsi, D; Fisk, I; Flix, J; Hernandez, J; D'Hondt, J; Maes, J; Kadastik, M; Klem, J; Kodolova, O; Kuo, C-M; Letts, J; Magini, N; Metson, S; Piedra, J; Pukhaeva, N; Tuura, L

    2010-01-01

    The CMS experiment at CERN is preparing for LHC data taking in several computing preparation activities. In early 2007 a traffic load generator infrastructure for distributed data transfer tests was designed and deployed to equip the WLCG tiers which support the CMS virtual organization with a means for debugging, load-testing and commissioning data transfer routes among CMS computing centres. The LoadTest is based upon PhEDEx as a reliable, scalable data set replication system. The Debugging Data Transfers (DDT) task force was created to coordinate the debugging of the data transfer links. The task force aimed to commission the most crucial transfer routes among CMS tiers by designing and enforcing a clear procedure to debug problematic links. This procedure aimed to move a link from a debugging phase in a separate and independent environment to a production environment when a set of agreed conditions is achieved for that link. The goal was to deliver one by one working transfer routes to the CMS data operations team. The preparation, activities and experience of the DDT task force within the CMS experiment are discussed. Common technical problems and challenges encountered during the lifetime of the taskforce in debugging data transfer links in CMS are explained and summarized.

  5. Distributing CMS Data between the Florida T2 and T3 Centers using Lustre and Xrootd-fs

    International Nuclear Information System (INIS)

    Kaganas, G; Rodriguez, J L; Cheng, M; Avery, P; Bourilkov, D; Fu, Y; Palencia, J

    2014-01-01

    We have developed remote data access for large volumes of data over the Wide Area Network based on the Lustre filesystem and Kerberos authentication for security. In this paper we explore a prototype for two-step data access from worker nodes at Florida Tier3 centers, located behind a firewall and using a private network, to data hosted on the Lustre filesystem at the University of Florida CMS Tier2 center. At the Tier3 center we use a client which mounts securely the Lustre filesystem and hosts an XrootD server. The worker nodes access the data from the Tier3 client using POSIX compliant tools via the XrootD-fs filesystem. We perform scalability tests with up to 200 jobs running in parallel on the Tier3 worker nodes.
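
    The scalability test described above, with up to 200 parallel jobs reading through a POSIX mount, can be mimicked with a generic harness of the following kind (an illustrative sketch only, not the Florida Tier-2/Tier-3 setup; the mount point is a placeholder):

      # Read every file under a POSIX mount point (e.g. a Lustre or xrootd-fs
      # mount) with many parallel workers and report the aggregate throughput.
      import os
      import time
      from concurrent.futures import ProcessPoolExecutor

      MOUNT = "/mnt/posix-data"  # placeholder for the mounted filesystem

      def read_file(path, block=4 * 1024 * 1024):
          nbytes = 0
          with open(path, "rb") as f:
              while chunk := f.read(block):
                  nbytes += len(chunk)
          return nbytes

      def run_test(paths, workers=50):
          start = time.time()
          with ProcessPoolExecutor(max_workers=workers) as pool:
              total = sum(pool.map(read_file, paths))
          return total / 1e6 / (time.time() - start)  # aggregate MB/s

      if __name__ == "__main__":
          candidates = [os.path.join(MOUNT, f) for f in os.listdir(MOUNT)] if os.path.isdir(MOUNT) else []
          files = [p for p in candidates if os.path.isfile(p)]
          if files:
              print(f"aggregate read rate: {run_test(files):.1f} MB/s")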

  6. Simulation of the job processing performance at an ALICE Tier-2 site with MONARC

    International Nuclear Information System (INIS)

    Zach, C; Adamová, D; Betev, L

    2011-01-01

    The MONARC (MOdels of Networked Analysis at Regional Centers) framework has been developed and designed with the aim to provide a tool for realistic simulations of large scale distributed computing systems, with a special focus on the Grid systems of the experiments at the CERN LHC. In this paper, we describe a usage of the MONARC framework and tools for a simulation of the job processing performance at an ALICE Tier-2 site.
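
    The kind of number such a simulation produces can be illustrated with a much simpler toy model (pure Python, not MONARC; the core count and mean job length are arbitrary): jobs drawn from an exponential runtime distribution are packed onto a fixed set of Tier-2 cores and the resulting throughput and queueing delay are reported.

      # Toy job-processing model for a Tier-2: N cores, jobs with random runtimes.
      import heapq
      import random

      def simulate(cores=400, njobs=5000, mean_runtime_h=4.0, seed=1):
          random.seed(seed)
          core_free_at = [0.0] * cores        # time (hours) each core becomes free
          heapq.heapify(core_free_at)
          waits = []
          for _ in range(njobs):
              runtime = random.expovariate(1.0 / mean_runtime_h)
              start = heapq.heappop(core_free_at)   # earliest available core
              waits.append(start)                   # all jobs submitted at t=0
              heapq.heappush(core_free_at, start + runtime)
          makespan = max(core_free_at)
          return njobs / makespan, sum(waits) / len(waits)

      if __name__ == "__main__":
          throughput, avg_wait = simulate()
          print(f"~{throughput:.0f} jobs/hour, mean start delay ~{avg_wait:.1f} h")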

  7. Optimization of HEP Analysis Activities Using a Tier2 Infrastructure

    International Nuclear Information System (INIS)

    Arezzini, S; Bagliesi, G; Boccali, T; Ciampa, A; Mazzoni, E; Coscetti, S; Sarkar, S; Taneja, S

    2012-01-01

    While the model for a Tier2 is well understood and implemented within the HEP Community, a refined design for Analysis specific sites has not been agreed upon as clearly. We aim to describe the solutions adopted at the INFN Pisa, the biggest Tier2 in the Italian HEP Community. A Standard Tier2 infrastructure is optimized for Grid CPU and Storage access, while a more interactive oriented use of the resources is beneficial to the final data analysis step. In this step, POSIX file storage access is easier for the average physicist, and has to be provided in a real or emulated way. Modern analysis techniques use advanced statistical tools (like RooFit and RooStat), which can make use of multi core systems. The infrastructure has to provide or create on demand computing nodes with many cores available, above the existing and less elastic Tier2 flat CPU infrastructure. Finally, the users do not want to have to deal with data placement policies at the various sites, and hence a transparent WAN file access, again with a POSIX layer, must be provided, making use of the soon-to-be-installed 10 Gbit/s regional lines. Even if standalone systems with such features are possible and exist, the implementation of an Analysis site as a virtual layer over an existing Tier2 requires novel solutions; the ones used in Pisa are described here.

  8. Prototype for a generic thin-client remote analysis environment for CMS

    International Nuclear Information System (INIS)

    Steenberg, C.D.; Bunn, J.J.; Hickey, T.M.; Holtman, K.; Legrand, I.; Litvin, V.; Newman, H.B.; Samar, A.; Singh, S.; Wilkinson, R.

    2001-01-01

    The multi-tiered architecture of the highly-distributed CMS computing systems necessitates a flexible data distribution and analysis environment. The authors describe a prototype analysis environment which functions efficiently over wide area networks using a server installed at the Caltech/UCSD Tier 2 prototype to analyze CMS data stored at various locations using a thin client. The analysis environment is based on existing HEP (Anaphe) and CMS (CARF, ORCA, IGUANA) software technology on the server accessed from a variety of clients. A Java Analysis Studio (JAS, from SLAC) plug-in is being developed as a reference client. The server is operated as a 'black box' on the proto-Tier2 system. ORCA Objectivity databases (e.g. an existing large CMS Muon sample) are hosted on the master and slave nodes, and remote clients can request processing of queries across the server nodes, and get the histogram results returned and rendered in the client. The server is implemented using pure C++, and uses XML-RPC as a language-neutral transport. This has several benefits, including much better scalability, better integration with CARF-ORCA, and importantly, makes the work directly useful to other non-Java general-purpose analysis and presentation tools such as Hippodraw, Lizard, or ROOT

  9. Spanish ATLAS Tier-2: facing up to LHC Run 2

    CERN Document Server

    Gonzalez de la Hoz, Santiago; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Sánchez, Javier; Sanchez Martinez, Victoria; Salt, José; Villaplana Perez, Miguel

    2015-01-01

    The goal of this work is to describe the way of addressing the main challenges of Run-2 by the Spanish ATLAS Tier-2. The considerable increase of energy and luminosity for the upcoming Run-2 with respect to Run-1 has led to a revision of the ATLAS computing model as well as some of the main ATLAS computing tools. The adaptation on these changes will be shown, with the peculiarities that it is a distributed Tier-2 composed of three sites and its members are involved on ATLAS computing tasks with a hub of research, innovation and education.

  10. A Two-Tiered Model for Analyzing Library Web Site Usage Statistics, Part 1: Web Server Logs.

    Science.gov (United States)

    Cohen, Laura B.

    2003-01-01

    Proposes a two-tiered model for analyzing web site usage statistics for academic libraries: one tier for library administrators that analyzes measures indicating library use, and a second tier for web site managers that analyzes measures aiding in server maintenance and site design. Discusses the technology of web site usage statistics, and…

  11. Implementing data placement strategies for the CMS experiment based on a popularity model

    International Nuclear Information System (INIS)

    Barreiro Megino, F H; Cinquilli, M; Giordano, D; Karavakis, E; Girone, M; Magini, N; Mancinelli, V; Spiga, D

    2012-01-01

    During the first two years of data taking, the CMS experiment has collected over 20 PetaBytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure on the WorldWide LHC Computing Grid. Given the increasing data volume that has to be stored and efficiently analyzed, it is a challenge for several LHC experiments to optimize and automate the data placement strategies in order to fully profit from the available network and storage resources and to facilitate daily computing operations. Building on previous experience acquired by ATLAS, we have developed the CMS Popularity Service that tracks file accesses and user activity on the grid and will serve as the foundation for the evolution of their data placement. A fully automated, popularity-based site-cleaning agent has been deployed in order to scan Tier-2 sites that are reaching their space quota and suggest obsolete, unused data that can be safely deleted without disrupting analysis activity. Future work will be to demonstrate dynamic data placement functionality based on this popularity service and integrate it in the data and workload management systems: as a consequence the pre-placement of data will be minimized and additional replication of hot datasets will be requested automatically. This paper will give an insight into the development, validation and production process and will analyze how the framework has influenced resource optimization and daily operations in CMS.
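
    A minimal sketch of the site-cleaning logic described above (illustrative assumptions only, not the CMS Popularity Service code: the replica metadata fields, the 90-day idle threshold and the quota handling are invented) could look like this:

      # Suggest replicas to delete at a site over quota, least-popular first.
      from datetime import datetime, timedelta

      def cleaning_candidates(replicas, used_tb, quota_tb, idle_days=90):
          """replicas: list of dicts with name, size_tb, last_access, naccesses."""
          to_free = used_tb - quota_tb
          if to_free <= 0:
              return []
          cutoff = datetime.now() - timedelta(days=idle_days)
          idle = [r for r in replicas if r["last_access"] < cutoff]
          idle.sort(key=lambda r: (r["last_access"], r["naccesses"]))  # coldest first
          suggested, freed = [], 0.0
          for r in idle:
              if freed >= to_free:
                  break
              suggested.append(r["name"])
              freed += r["size_tb"]
          return suggested

    In production such a list would only be a suggestion to the data managers, mirroring the abstract's point that obsolete, unused data should be removable without disrupting analysis activity.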

  12. ATLAS Tier-2 monitoring system for the German cloud

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Joerg; Quadt, Arnulf; Weber, Pavel [II. Physikalisches Institut, Georg-August-Universitaet, Goettingen (Germany)

    2011-07-01

    The ATLAS tier centers in Germany provide their computing resources for the ATLAS experiment. The stable and sustainable operation of this so-called DE-cloud heavily relies on effective monitoring of the Tier-1 center GridKa and its associated Tier-2 centers. Central and local grid information services constantly collect and publish the status information from many computing resources and sites. The cloud monitoring system discussed in this presentation evaluates the information related to different cloud resources and provides a coherent and comprehensive view of the cloud. The main monitoring areas covered by the tool are data transfers, cloud software installation, site batch systems, Service Availability Monitoring (SAM). The cloud monitoring system consists of an Apache-based Python application, which retrieves the information and publishes it on the generated HTML web page. This results in an easy-to-use web interface for the limited number of sites in the cloud with fast and efficient access to the required information starting from a high level summary for the whole cloud to detailed diagnostics for the single site services. This approach provides the efficient identification of correlated site problems and simplifies the administration on both cloud and site level.
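
    In the spirit of the Apache-hosted Python application described above, a minimal aggregator that collapses per-site test results into a single HTML summary could be sketched as follows (the site names and test outcomes below are made up):

      # Render a one-page HTML overview from per-site monitoring results.
      from html import escape

      def render_summary(results):
          """results: {site: {test_name: 'OK'|'WARN'|'FAIL'}} -> HTML string."""
          rows = []
          for site, tests in sorted(results.items()):
              worst = ("FAIL" if "FAIL" in tests.values()
                       else "WARN" if "WARN" in tests.values() else "OK")
              detail = ", ".join(f"{escape(t)}={escape(s)}" for t, s in sorted(tests.items()))
              rows.append(f"<tr><td>{escape(site)}</td><td>{worst}</td><td>{detail}</td></tr>")
          return ("<table><tr><th>site</th><th>status</th><th>tests</th></tr>"
                  + "".join(rows) + "</table>")

      if __name__ == "__main__":
          demo = {"T2_DE_Example": {"transfers": "OK", "sam": "WARN", "software": "OK"}}
          print(render_summary(demo))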

  13. Experience running a distributed Tier-2 in Spain for the ATLAS experiment

    International Nuclear Information System (INIS)

    March, L; Hoz, S Gonzales de la; Kaci, M; Fassi, F; Fernandez, A; Lamas, A; Salt, J; Sanchez, J; Peso, J del; Fernandez, P; Munoz, L; Pardo, J; Espinal, X; Garitaonandia, H; Mir, M L; Nadal, J; Pacheco, A; Shuskov, S

    2008-01-01

    The main role of the Tier-2s is to provide computing resources for production of physics simulated events and distributed data analysis. The Spanish ATLAS Tier-2 is geographically distributed among three HEP institutes: IFAE (Barcelona), IFIC (Valencia) and UAM (Madrid). Currently it has a computing power of 430 kSI2K CPU, a disk storage capacity of 87 TB and a network bandwidth, connecting the three sites and the nearest Tier-1 (PIC), of 1 Gb/s. These resources will be increased according to the ATLAS Computing Model with time in parallel to those of all ATLAS Tier-2s. Since 2002, it has been participating in the different Data Challenge exercises. Currently, it is achieving around 1.5% of the whole ATLAS collaboration production in the framework of the Computing System Commissioning exercise. Distributed data management is also arising as an important issue in the daily activities of the Tier-2. The distribution over three sites has proven to be useful due to an increasing service redundancy, a faster solution of problems, the share of computing expertise and know-how. Experience gained running the distributed Tier-2 in order to be ready at the LHC start-up will be presented

  14. Exploiting Analytics Techniques in CMS Computing Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Bonacorsi, D. [Bologna U.]; Kuznetsov, V. [Cornell U.]; Magini, N. [Fermilab]; Repečka, A. [Vilnius U.]; Vaandering, E. [Fermilab]

    2017-11-22

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts into all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS achieved successful operations, and to reach an adequate and adaptive modelling of the CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.
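
    The MapReduce processing mentioned above can be illustrated with a plain-Python stand-in (not Hadoop, and the record fields are invented): a map step emits one key per monitored access and a reduce step sums the counts per (dataset, tier) pair.

      # MapReduce-style aggregation of dataset access counts from monitoring records.
      from collections import defaultdict

      def map_phase(records):
          for rec in records:                       # one record per monitored access
              yield (rec["dataset"], rec["tier"]), 1

      def reduce_phase(pairs):
          counts = defaultdict(int)
          for key, value in pairs:
              counts[key] += value
          return dict(counts)

      if __name__ == "__main__":
          records = [{"dataset": "/A/B/AOD", "tier": "T2_IT_X"},
                     {"dataset": "/A/B/AOD", "tier": "T2_IT_X"},
                     {"dataset": "/C/D/MINIAOD", "tier": "T1_US_Y"}]
          print(reduce_phase(map_phase(records)))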

  15. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    Science.gov (United States)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
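
    The dynamic allocation of virtual machine instances described above ultimately reduces to a scaling decision. A toy version of that logic (not the production Torque/OpenStack scripts; the thresholds and slot counts are invented) is sketched below:

      # Decide how many worker VMs to start or stop from queue depth and idle VMs.
      def scaling_decision(queued_jobs, idle_vms, running_vms, max_vms,
                           jobs_per_vm=8, min_idle=2):
          if queued_jobs > 0:
              wanted = -(-queued_jobs // jobs_per_vm)      # ceiling division
              return ("start", max(min(wanted, max_vms - running_vms), 0))
          if idle_vms > min_idle:
              return ("stop", idle_vms - min_idle)         # keep a small warm pool
          return ("hold", 0)

      # Example: 40 queued jobs, 1 idle VM, 10 running, cap of 20 -> ("start", 5).
      print(scaling_decision(40, 1, 10, 20))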

  16. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    International Nuclear Information System (INIS)

    Limosani, Antonio; Boland, Lucien; Crosby, Sean; Huang, Joanna; Sevior, Martin; Coddington, Paul; Zhang, Shunde; Wilson, Ross

    2014-01-01

    The Australian Government is making a $AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  17. Scaling CMS data transfer system for LHC start-up

    International Nuclear Information System (INIS)

    Tuura, L; Bockelman, B; Bonacorsi, D; Egeland, R; Feichtinger, D; Metson, S; Rehn, J

    2008-01-01

    The CMS experiment will need to sustain uninterrupted high reliability, high throughput and very diverse data transfer activities as the LHC operations start. PhEDEx, the CMS data transfer system, will be responsible for the full range of the transfer needs of the experiment. Covering the entire spectrum is a demanding task: from the critical high-throughput transfers between CERN and the Tier-1 centres, to high-scale production transfers among the Tier-1 and Tier-2 centres, to managing the 24/7 transfers among all the 170 institutions in CMS and to providing straightforward access to a handful of files to individual physicists. In order to produce the system with confirmed capability to meet the objectives, the PhEDEx data transfer system has undergone rigorous development and numerous demanding scale tests. We have sustained production transfers exceeding 1 PB/month for several months and have demonstrated core system capacity several orders of magnitude above expected LHC levels. We describe the level of scalability reached, and how we got there, with focus on the main insights into developing a robust, lock-free and scalable distributed database application, the validation stress test methods we have used, and the development and testing tools we found practically useful

  18. The JINR Tier1 Site Simulation for Research and Development Purposes

    Directory of Open Access Journals (Sweden)

    Korenkov V.

    2016-01-01

    A system for grid and cloud services simulation is developed at LIT (JINR, Dubna). This simulation system is focused on improving the efficiency of grid/cloud structure development by using the work quality indicators of some real system. The development of this kind of software is very important for building new grid/cloud infrastructures for big scientific projects such as the JINR Tier1 site for WLCG. The simulation of some processes of the Tier1 site is considered as an example of our application approach.

  19. Spanish ATLAS Tier-2 facing up to Run-2 period of LHC

    CERN Document Server

    Gonzalez de la Hoz, Santiago; The ATLAS collaboration; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Salt, José; Villaplana Perez, Miguel; Sanchez Martinez, Victoria; Sánchez, Javier

    2015-01-01

    The goal of this work is to describe the way of addressing the main challenges of Run-2 by the Spanish ATLAS Tier-2. The considerable increase of energy and luminosity for the upcoming Run-2 w.r.t. Run-1 has led to a revision of the ATLAS computing model as well as some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the peculiarities that it is a distributed Tier-2 composed of three sites and its members are involved on ATLAS computing tasks with a hub of research, innovation and education.

  20. submitter Studies of CMS data access patterns with machine learning techniques

    CERN Document Server

    De Luca, Silvia

    This thesis presents a study of the Grid data access patterns in distributed analysis in the CMS experiment at the LHC accelerator. This study ranges from the deep analysis of the historical patterns of access to the most relevant data types in CMS, to the exploitation of a supervised Machine Learning classification system to set up machinery able to eventually predict future data access patterns - i.e. the so-called dataset “popularity” of the CMS datasets on the Grid - with focus on specific data types. All the CMS workflows run on the Worldwide LHC Computing Grid (WLCG) computing centers (Tiers), and in particular the distributed analysis system sustains hundreds of users and applications submitted every day. These applications (or “jobs”) access different data types hosted on disk storage systems at a large set of WLCG Tiers. The detailed study of how this data is accessed, in terms of data types, hosting Tiers, and different time periods, allows one to gain precious insight on storage occupancy ove...
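
    The following is a minimal sketch of the supervised classification approach described in this thesis record, using scikit-learn on synthetic features (numpy and scikit-learn are assumed to be installed; the features, labels and their relation are invented and do not reproduce the study's data):

      # Classify datasets as "popular" or not from simple synthetic usage features.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 2000
      X = np.column_stack([
          rng.poisson(20, n),        # past accesses
          rng.poisson(5, n),         # distinct users
          rng.integers(1, 104, n),   # dataset age in weeks
      ])
      y = (X[:, 0] + 3 * X[:, 1] > 40).astype(int)   # synthetic popularity label

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
      print("held-out accuracy:", clf.score(X_test, y_test))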

  1. Visits to Tier-1 Computing Centres

    CERN Multimedia

    Dario Barberis

    At the beginning of 2007 it became clear that an enhanced level of communication is needed between the ATLAS computing organisation and the Tier-1 centres. Most usual meetings are ATLAS-centric and cannot address the issues of each Tier-1; therefore we decided to organise a series of visits to the Tier-1 centres and focus on site issues. For us, ATLAS computing management, it is most useful to realize how each Tier-1 centre is organised, and its relation to the associated Tier-2s; indeed their presence at these visits is also very useful. We hope it is also useful for sites... at least, we are told so! The usual participation includes, from the ATLAS side: computing management, operations, data placement, resources, accounting and database deployment coordinators; and from the Tier-1 side: computer centre management, system managers, Grid infrastructure people, network, storage and database experts, local ATLAS liaison people and representatives of the associated Tier-2s. Visiting Tier-1 centres (1-4). ...

  2. Developing the Capacity to Implement Tier 2 and Tier 3 Supports: How Do We Support Our Faculty and Staff in Preparing for Sustainability?

    Science.gov (United States)

    Oakes, Wendy Peia; Lane, Kathleen Lynne; Germer, Kathryn A.

    2014-01-01

    School-site and district-level leadership teams rely on the existing knowledge base to select, implement, and evaluate evidence-based practices meeting students' multiple needs within the context of multitiered systems of support. The authors focus on the stages of implementation science as applied to Tier 2 and Tier 3 supports; the…

  3. CMS data quality monitoring web service

    Energy Technology Data Exchange (ETDEWEB)

    Tuura, L; Eulisse, G [Northeastern University, Boston, MA (United States); Meyer, A, E-mail: lat@cern.ch, E-mail: giulio.eulisse@cern.ch, E-mail: andreas.meyer@cern.ch [DESY, Hamburg (Germany)

    2010-04-01

    A central component of the data quality monitoring system of the CMS experiment at the Large Hadron Collider is a web site for browsing data quality histograms. The production servers in data taking provide access to several hundred thousand histograms per run, both live in online as well as for up to several terabytes of archived histograms for the online data taking, Tier-0 prompt reconstruction, prompt calibration and analysis activities, for re-reconstruction at Tier-1s and for release validation. At the present usage level the servers currently handle in total around a million authenticated HTTP requests per day. We describe the main features and components of the system, our implementation for web-based interactive rendering, and the server design. We give an overview of the deployment and maintenance procedures. We discuss the main technical challenges and our solutions to them, with emphasis on functionality, long-term robustness and performance.

  4. CMS data quality monitoring web service

    International Nuclear Information System (INIS)

    Tuura, L; Eulisse, G; Meyer, A

    2010-01-01

    A central component of the data quality monitoring system of the CMS experiment at the Large Hadron Collider is a web site for browsing data quality histograms. The production servers in data taking provide access to several hundred thousand histograms per run, both live in online as well as for up to several terabytes of archived histograms for the online data taking, Tier-0 prompt reconstruction, prompt calibration and analysis activities, for re-reconstruction at Tier-1s and for release validation. At the present usage level the servers currently handle in total around a million authenticated HTTP requests per day. We describe the main features and components of the system, our implementation for web-based interactive rendering, and the server design. We give an overview of the deployment and maintenance procedures. We discuss the main technical challenges and our solutions to them, with emphasis on functionality, long-term robustness and performance.

  5. Site in a box: Improving the Tier 3 experience

    Science.gov (United States)

    Dost, J. M.; Fajardo, E. M.; Jones, T. R.; Martin, T.; Tadel, A.; Tadel, M.; Würthwein, F.

    2017-10-01

    The Pacific Research Platform is an initiative to interconnect Science DMZs between campuses across the West Coast of the United States over a 100 gbps network. The LHC @ UC is a proof of concept pilot project that focuses on interconnecting 6 University of California campuses. It is spearheaded by computing specialists from the UCSD Tier 2 Center in collaboration with the San Diego Supercomputer Center. A machine has been shipped to each campus extending the concept of the Data Transfer Node to a cluster in a box that is fully integrated into the local compute, storage, and networking infrastructure. The node contains a full HTCondor batch system, and also an XRootD proxy cache. User jobs routed to the DTN can run on 40 additional slots provided by the machine, and can also flock to a common GlideinWMS pilot pool, which sends jobs out to any of the participating UCs, as well as to Comet, the new supercomputer at SDSC. In addition, a common XRootD federation has been created to interconnect the UCs and give the ability to arbitrarily export data from the home university, to make it available wherever the jobs run. The UC level federation also statically redirects to either the ATLAS FAX or CMS AAA federation respectively to make globally published datasets available, depending on end user VO membership credentials. XRootD read operations from the federation transfer through the nearest DTN proxy cache located at the site where the jobs run. This reduces wide area network overhead for subsequent accesses, and improves overall read performance. Details on the technical implementation, challenges faced and overcome in setting up the infrastructure, and an analysis of usage patterns and system scalability will be presented.
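
    The read path described above (prefer the nearby DTN proxy cache, fall back to the global federation redirector) can be sketched generically as follows; the hostnames are placeholders and the probe via xrdfs is only one possible way to test an endpoint, not the actual LHC @ UC configuration:

        # Generic sketch of the read-path logic: prefer the nearby DTN proxy cache and
        # fall back to the global federation redirector. Hostnames are placeholders.
        import subprocess

        LOCAL_CACHE = "root://cache.example-campus.edu:1094"      # hypothetical DTN cache
        FEDERATION  = "root://federation-redirector.example.org"  # hypothetical redirector

        def resolve(lfn, timeout=10):
            """Return an xrootd URL for a logical file name, cache first."""
            path = "/" + lfn.lstrip("/")
            for prefix in (LOCAL_CACHE, FEDERATION):
                # 'xrdfs <endpoint> stat <path>' probes whether the endpoint serves the file
                probe = subprocess.run(["xrdfs", prefix, "stat", path],
                                       capture_output=True, timeout=timeout)
                if probe.returncode == 0:
                    return f"{prefix}/{path}"
            raise FileNotFoundError(lfn)

        if __name__ == "__main__":
            print(resolve("/store/user/example/dataset/file.root"))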

  6. Tier-3 Monitoring Software Suite (T3MON) proposal

    CERN Document Server

    Andreeva, J; The ATLAS collaboration; Klimentov, A; Korenkov, V; Oleynik, D; Panitkin, S; Petrosyan, A

    2011-01-01

    The ATLAS Distributed Computing activities concentrated so far in the “central” part of the computing system of the experiment, namely the first 3 tiers (CERN Tier0, the 10 Tier1s centres and the 60+ Tier2s). This is a coherent system to perform data processing and management on a global scale and host (re)processing, simulation activities down to group and user analysis. Many ATLAS Institutes and National Communities built (or have plans to build) Tier-3 facilities. The definition of Tier-3 concept has been outlined (REFERENCE). Tier-3 centres consist of non-pledged resources mostly dedicated for the data analysis by the geographically close or local scientific groups. Tier-3 sites comprise a range of architectures and many do not possess Grid middleware, which would render application of Tier-2 monitoring systems useless. This document describes a strategy to develop a software suite for monitoring of the Tier3 sites. This software suite will enable local monitoring of the Tier3 sites and the global vie...

  7. Vaccine-Elicited Tier 2 HIV-1 Neutralizing Antibodies Bind to Quaternary Epitopes Involving Glycan-Deficient Patches Proximal to the CD4 Binding Site.

    Directory of Open Access Journals (Sweden)

    Ema T Crooks

    2015-05-01

    Eliciting broad tier 2 neutralizing antibodies (nAbs) is a major goal of HIV-1 vaccine research. Here we investigated the ability of native, membrane-expressed JR-FL Env trimers to elicit nAbs. Unusually potent nAb titers developed in 2 of 8 rabbits immunized with virus-like particles (VLPs) expressing trimers (trimer VLP sera) and in 1 of 20 rabbits immunized with DNA expressing native Env trimer, followed by a protein boost (DNA trimer sera). All 3 sera neutralized via quaternary epitopes and exploited natural gaps in the glycan defenses of the second conserved region of JR-FL gp120. Specifically, trimer VLP sera took advantage of the unusual absence of a glycan at residue 197 (present in 98.7% of Envs). Intriguingly, removing the N197 glycan (with no loss of tier 2 phenotype) rendered 50% or 16.7% (n = 18) of clade B tier 2 isolates sensitive to the two trimer VLP sera, showing broad neutralization via the surface masked by the N197 glycan. Neutralizing sera targeted epitopes that overlap with the CD4 binding site, consistent with the role of the N197 glycan in a putative "glycan fence" that limits access to this region. A bioinformatics analysis suggested shared features of one of the trimer VLP sera and monoclonal antibody PG9, consistent with its trimer-dependency. The neutralizing DNA trimer serum took advantage of the absence of a glycan at residue 230, also proximal to the CD4 binding site and suggesting an epitope similar to that of monoclonal antibody 8ANC195, albeit lacking tier 2 breadth. Taken together, our data show for the first time that strain-specific holes in the glycan fence can allow the development of tier 2 neutralizing antibodies to native spikes. Moreover, cross-neutralization can occur in the absence of protecting glycan. Overall, our observations provide new insights that may inform the future development of a neutralizing antibody vaccine.

  8. The CMS dataset bookkeeping service

    Science.gov (United States)

    Afaq, A.; Dolgert, A.; Guo, Y.; Jones, C.; Kosyakov, S.; Kuznetsov, V.; Lueking, L.; Riley, D.; Sekhri, V.

    2008-07-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line interface, and a Discovery web page interface. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.
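
    The access pattern described above (HTTPS plus grid-certificate authentication in front of a multi-tier service) can be sketched with a small client; the endpoint URL, query parameters and response layout below are hypothetical, not the real DBS API:

        # Sketch of a client talking to a DBS-like HTTPS service with grid-certificate
        # authentication. The endpoint URL, parameters and JSON layout are hypothetical;
        # only the authentication pattern is the point.
        import requests

        DBS_URL   = "https://cmsweb.example.org/dbs/datasets"  # hypothetical endpoint
        USER_CERT = "/tmp/x509up_u1000"   # grid proxy acts as both certificate and key
        CA_PATH   = "/etc/grid-security/certificates"

        def list_datasets(pattern):
            resp = requests.get(
                DBS_URL,
                params={"dataset": pattern},
                cert=(USER_CERT, USER_CERT),   # client-certificate authentication
                verify=CA_PATH,                # trust the grid CA bundle
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()

        if __name__ == "__main__":
            for entry in list_datasets("/SingleMuon/*/AOD"):
                print(entry)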

  9. The CMS dataset bookkeeping service

    Energy Technology Data Exchange (ETDEWEB)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V [Fermilab, Batavia, Illinois 60510 (United States); Dolgert, A; Jones, C; Kuznetsov, V; Riley, D [Cornell University, Ithaca, New York 14850 (United States)

    2008-07-15

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line interface, and a Discovery web page interface. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  10. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V; Dolgert, A; Jones, C; Kuznetsov, V; Riley, D

    2008-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line interface, and a Discovery web page interface. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  11. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, Anzar; Dolgert, Andrew; Guo, Yuyi; Jones, Chris; Kosyakov, Sergey; Kuznetsov, Valentin; Lueking, Lee; Riley, Dan; Sekhri, Vijay

    2007-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line interface, and a Discovery web page interface. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  12. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    Science.gov (United States)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may, however, generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the “Cloud Bursting” of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time they serve as an extension of the farm for the local usage. The amount of resources allocated can thus be elastically adjusted to cope with the needs of the CMS experiment and local users. Moreover, a direct access/integration of OpenStack resources to the CMS workload management system is explored. In this paper we present this approach, we report on the performance of the on-demand allocated resources, and we discuss the lessons learned and the next steps.
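
    The elastic sizing decision behind such a cloud-bursting setup can be sketched as follows; the provisioning step is stubbed out, and in the real system it would instantiate OpenStack virtual machines that register themselves dynamically with LSF. Thresholds and slot counts are arbitrary assumptions:

        # Sketch of an elastic-sizing decision: compare the batch backlog with the
        # static farm capacity and request extra cloud workers for the difference.
        # The provisioning step is a stub; all numbers are arbitrary assumptions.
        def workers_to_request(pending_jobs, running_jobs, static_slots,
                               slots_per_vm=8, max_cloud_vms=50):
            demand = pending_jobs + running_jobs
            shortfall = max(0, demand - static_slots)
            return min(max_cloud_vms, -(-shortfall // slots_per_vm))  # ceil division

        def provision(n_vms):
            # placeholder for the cloud API / contextualisation call
            print(f"would boot {n_vms} worker VMs on the external cloud")

        if __name__ == "__main__":
            provision(workers_to_request(pending_jobs=900, running_jobs=400, static_slots=1000))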

  13. A New Information Architecture, Web Site and Services for the CMS Experiment

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services and more than 100,000 documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and php/perl). The new navigation, content and search services are coherently integrated with numerous existing CERN services (CDS, EDMS, Indico, phonebook, Twiki) as well as many CMS internal Web services. We describe the information architecture; the system design, implementation and monitoring; the document and content database; security aspects; and our deployment strategy which ensured continual smooth operation of all systems at all times.

  14. Proceedings of the second workshop of LHC Computing Grid, LCG-France; ACTES, 2e colloque LCG-France

    Energy Technology Data Exchange (ETDEWEB)

    Chollet, Frederique; Hernandez, Fabio; Malek, Fairouz; Shifrin, Gaelle (eds.) [Laboratoire de Physique Corpusculaire Clermont-Ferrand, Campus des Cezeaux, 24, avenue des Landais, Clermont-Ferrand (France)

    2007-03-15

    The second LCG-France Workshop was held in Clermont-Ferrand on 14-15 March 2007. These sessions, organized by IN2P3 and DAPNIA, were attended by around 70 participants working with the LHC Computing Grid in France. The workshop was an opportunity to exchange information between the French and foreign site representatives on one side and delegates of the experiments on the other. The event highlighted the place of the LHC computing task within the frame of the W-LCG world project, the ongoing actions and the prospects for 2007 and beyond. The following communications were presented: 1. The current status of LHC computing in France; 2. The LHC Grid infrastructure in France and associated resources; 3. Commissioning of Tier-1; 4. The sites of Tier-2s and Tier-3s; 5. Computing in the ALICE experiment; 6. Computing in the ATLAS experiment; 7. Computing in the CMS experiment; 8. Computing in the LHCb experiment; 9. Management and operation of computing grids; 10. 'The VOs talk to sites'; 11. Peculiarities of ATLAS; 12. Peculiarities of CMS and ALICE; 13. Peculiarities of LHCb; 14. 'The sites talk to VOs'; 15. Worldwide operation of the Grid; 16. Following up the Grid jobs; 17. Surveillance and managing the failures; 18. Job scheduling and tuning; 19. Managing the site infrastructure; 20. LCG-France communications; 21. Managing the Grid data; 22. Pointing the net infrastructure and site storage; 23. ALICE bulk transfers; 24. ATLAS bulk transfers; 25. CMS bulk transfers; 26. LHCb bulk transfers; 27. Access to LHCb data; 28. Access to CMS data; 29. Access to ATLAS data; 30. Access to ALICE data; 31. Data analysis centers; 32. D0 Analysis Farm; 33. Some CMS grid analyses; 34. PROOF; 35. Distributed analysis using GANGA; 36. T2 set-up for end-users. In their concluding remarks, Fairouz Malek and Dominique Pallin stressed that the current workshop was closer to users and that the tasks of tightening the links between the sites and the experiments were definitely achieved. The IN2P3

  15. Network monitoring in the Tier2 site in Prague

    International Nuclear Information System (INIS)

    Eliáš, Marek; Fiala, Lukáš; Horký, Jirí; Chudoba, Jirí; Kouba, Tomáš; Kundrát, Jan; Švec, Jan

    2011-01-01

    Network monitoring provides different views of the network traffic. Its output enables computing centre staff to make qualified decisions about changes in the organization of the computing centre network and to spot possible problems. In this paper we present the network monitoring framework used at the Tier-2 in Prague at the Institute of Physics (FZU). The framework consists of standard software and custom tools. We discuss our system for hardware failure detection using syslog logging and Nagios active checks, bandwidth monitoring of physical links, and analysis of NetFlow exports from Cisco routers. We present a tool for automatic detection of the network layout based on SNMP. This tool also records topology changes into an SVN repository. An adapted weathermap4rrd is used to visualize the recorded data and to provide a fast overview of the current bandwidth usage of the links in the network.
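
    The syslog-based hardware failure detection mentioned above can be illustrated with a Nagios-style active check that follows the standard plugin exit-code convention (0/1/2 for OK/WARNING/CRITICAL); the log path and error patterns are assumptions, not the FZU configuration:

        # Nagios-style check in the spirit of the syslog-based failure detection
        # described above: exit 0/1/2 for OK/WARNING/CRITICAL. The log path and the
        # error patterns are illustrative assumptions.
        import re
        import sys

        LOG_FILE = "/var/log/syslog"
        PATTERNS = [re.compile(p, re.I) for p in
                    (r"I/O error", r"ECC error", r"smartd.*FAILED", r"link is down")]

        def check(path=LOG_FILE, warn=1, crit=5):
            hits = 0
            with open(path, errors="replace") as log:
                for line in log:
                    if any(p.search(line) for p in PATTERNS):
                        hits += 1
            if hits >= crit:
                print(f"CRITICAL - {hits} hardware-related log entries"); return 2
            if hits >= warn:
                print(f"WARNING - {hits} hardware-related log entries"); return 1
            print("OK - no hardware-related log entries"); return 0

        if __name__ == "__main__":
            sys.exit(check())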

  16. A Tiered Approach to Evaluating Salinity Sources in Water at Oil and Gas Production Sites.

    Science.gov (United States)

    Paquette, Shawn M; Molofsky, Lisa J; Connor, John A; Walker, Kenneth L; Hopkins, Harley; Chakraborty, Ayan

    2017-09-01

    A suspected increase in the salinity of fresh water resources can trigger a site investigation to identify the source(s) of salinity and the extent of any impacts. These investigations can be complicated by the presence of naturally elevated total dissolved solids or chlorides concentrations, multiple potential sources of salinity, and incomplete data and information on both naturally occurring conditions and the characteristics of potential sources. As a result, data evaluation techniques that are effective at one site may not be effective at another. In order to match the complexity of the evaluation effort to the complexity of the specific site, this paper presents a strategic tiered approach that utilizes established techniques for evaluating and identifying the source(s) of salinity in an efficient step-by-step manner. The tiered approach includes: (1) a simple screening process to evaluate whether an impact has occurred and if the source is readily apparent; (2) basic geochemical characterization of the impacted water resource(s) and potential salinity sources coupled with simple visual and statistical data evaluation methods to determine the source(s); and (3) advanced laboratory analyses (e.g., isotopes) and data evaluation methods to identify the source(s) and the extent of salinity impacts where it was not otherwise conclusive. A case study from the U.S. Gulf Coast is presented to illustrate the application of this tiered approach. © 2017, National Ground Water Association.

  17. VM-based infrastructure for simulating different cluster and storage solutions used on ATLAS Tier-3 sites

    International Nuclear Information System (INIS)

    Belov, S; Kadochnikov, I; Korenkov, V; Kutouski, M; Oleynik, D; Petrosyan, A

    2012-01-01

    The current ATLAS Tier-3 infrastructure consists of a variety of sites of different sizes and with a mix of local resource management systems (LRMS) and mass storage system (MSS) implementations. The Tier-3 monitoring suite, having been developed in order to satisfy the needs of Tier-3 site administrators and to aggregate Tier-3 monitoring information on the global VO level, needs to be validated for various combinations of LRMS and MSS solutions along with the corresponding Ganglia plugins. For this purpose the testbed infrastructure, which allows simulation of various computational cluster and storage solutions, had been set up at JINR (Dubna, Russia). This infrastructure provides the ability to run testbeds with various LRMS and MSS implementations, and with the capability to quickly redeploy particular testbeds or their components. Performance of specific components is not a critical issue for development and validation, whereas easy management and deployment are crucial. Therefore virtual machines were chosen for implementation of the validation infrastructure which, though initially developed for Tier-3 monitoring project, can be exploited for other purposes. Load generators for simulation of the computing activities at the farm were developed as a part of this task. The paper will cover concrete implementation, including deployment scenarios, hypervisor details and load simulators.
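
    A trivial example of the kind of load generator mentioned at the end of the abstract is a script that keeps a configurable number of cores busy for a while; the real T3MON load generators also exercise the storage and the batch system, so this is only a sketch:

        # Trivial CPU load generator for exercising a test cluster. The amount of work
        # and the number of workers are arbitrary; this is only a sketch of the idea.
        import multiprocessing as mp
        import time

        def burn(seconds):
            """Keep one core busy for roughly `seconds` seconds."""
            end = time.time() + seconds
            x = 0
            while time.time() < end:
                x = (x * 1103515245 + 12345) % 2**31  # cheap integer churn
            return x

        if __name__ == "__main__":
            with mp.Pool(processes=4) as pool:   # simulate 4 concurrently running jobs
                pool.map(burn, [30] * 4)         # each "job" burns CPU for ~30 s
            print("synthetic load finished")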

  18. Monitoring techniques and alarm procedures for CMS Services and Sites in WLCG

    International Nuclear Information System (INIS)

    Molina-Perez, J; Sciabà, A; Magini, N; Bonacorsi, D; Gutsche, O; Flix, J; Kreuzer, P; Fajardo, E; Boccali, T; Klute, M; Gomes, D; Kaselis, R; Butenas, I; Du, R; Wang, W

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react timely on any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  19. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    Energy Technology Data Exchange (ETDEWEB)

    Molina-Perez, J. [UC, San Diego; Bonacorsi, D. [Bologna U.; Gutsche, O. [Fermilab; Sciaba, A. [CERN; Flix, J. [Madrid, CIEMAT; Kreuzer, P. [CERN; Fajardo, E. [Andes U., Bogota; Boccali, T. [INFN, Pisa; Klute, M. [MIT; Gomes, D. [Rio de Janeiro State U.; Kaselis, R. [Vilnius U.; Du, R. [Beijing, Inst. High Energy Phys.; Magini, N. [CERN; Butenas, I. [Vilnius U.; Wang, W. [Beijing, Inst. High Energy Phys.

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react timely on any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  20. A tiered approach for probabilistic ecological risk assessment of contaminated sites

    International Nuclear Information System (INIS)

    Zolezzi, M.; Nicolella, C.; Tarazona, J.V.

    2005-01-01

    This paper presents a tiered methodology for probabilistic ecological risk assessment. The proposed approach starts from a deterministic comparison (ratio) of a single exposure concentration and a threshold or safe level calculated from a dose-response relationship, goes through a comparison of the probabilistic distributions that describe exposure values and toxicological responses of organisms to the chemical of concern, and finally determines the so-called distribution-based quotients (DBQs). In order to illustrate the proposed approach, soil concentrations of 1,2,4-trichlorobenzene (1,2,4-TCB) measured at an industrial contaminated site were used for site-specific probabilistic ecological risk assessment. By using probabilistic distributions, the risk, which exceeds a level of concern for soil organisms under the deterministic approach, is associated with the presence of hot spots reaching concentrations able to acutely affect more than 50% of the soil species, while the large majority of the area presents 1,2,4-TCB concentrations below those reported as toxic.
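
    The step from a deterministic quotient to a distribution-based comparison can be illustrated numerically; the concentrations and distribution parameters below are synthetic placeholders, not values from the cited study:

        # Numerical illustration of the tiered idea above: a deterministic hazard
        # quotient first, then a Monte Carlo estimate of how often exposure exceeds
        # the species sensitivity distribution. All numbers are synthetic.
        import numpy as np

        rng = np.random.default_rng(42)

        # Tier 1: single exposure concentration vs. a single safe level
        exposure_point, safe_level = 12.0, 8.0          # mg/kg soil (made up)
        print("deterministic quotient:", exposure_point / safe_level)

        # Higher tiers: exposure and toxicity both described by (lognormal) distributions
        exposure = rng.lognormal(mean=np.log(6.0), sigma=0.9, size=100_000)
        ssd      = rng.lognormal(mean=np.log(20.0), sigma=0.7, size=100_000)

        p_exceed = float(np.mean(exposure > ssd))       # probability exposure > tolerance
        print(f"probability of exceeding species tolerance: {p_exceed:.1%}")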

  1. The CMS CERN Analysis Facility (CAF)

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, O [Imperial College (United Kingdom); Bonacorsi, D [Universita and INFN, Bologna (Italy); Fanzago, F [Universita and INFN, Padova (Italy); Gowdy, S; Malgeri, L; Panzer-Steindel, B; Schwickerath, U; Spiga, D; Toebbicke, Rainer [Conseil Europeen Recherche Nucl. (CERN) Switzerland (Switzerland); Kreuzer, P [Rheinisch-Westfaelische Tech. Hoch. (RWTH) (Germany); Mankel, R [Deutsches Elektronen-Synchrotron (DESY) (Germany); Metson, S [University of Bristol (United Kingdom); Sanches, J Afonso; Teodoro, D, E-mail: Peter.Kreuzer@cern.ch [Universidade do Estado do Rio De Janeiro (UERJ) (Brazil)

    2010-04-01

    The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast-turnaround. In addition to the low latency requirement on the batch farm, another mandatory condition is the efficient access to the RAW detector data stored at the CERN Tier-0 facility. The CMS CAF also foresees resources for interactive login by a large number of CMS collaborators located at CERN, as an entry point for their day-by-day analysis. These resources will run on a separate partition in order to protect the high-priority use-cases described above. While the CMS CAF represents only a modest fraction of the overall CMS resources on the WLCG GRID, an appropriately sized user-support service needs to be provided. We will describe the building, commissioning and operation of the CMS CAF during the year 2008. The facility was heavily and routinely used by almost 250 users during multiple commissioning and data challenge periods. It reached a CPU capacity of 1.4MSI2K and a disk capacity at the Peta byte scale. In particular, we will focus on the performances in terms of networking, disk access and job efficiency and extrapolate prospects for the upcoming LHC first year data taking. We will also present the experience gained and the limitations observed in operating such a large facility, in which well controlled workflows are combined with more chaotic type analysis by a large number of physicists.

  2. The CMS CERN Analysis Facility (CAF)

    International Nuclear Information System (INIS)

    Buchmueller, O; Bonacorsi, D; Fanzago, F; Gowdy, S; Malgeri, L; Panzer-Steindel, B; Schwickerath, U; Spiga, D; Toebbicke, Rainer; Kreuzer, P; Mankel, R; Metson, S; Sanches, J Afonso; Teodoro, D

    2010-01-01

    The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast-turnaround. In addition to the low latency requirement on the batch farm, another mandatory condition is the efficient access to the RAW detector data stored at the CERN Tier-0 facility. The CMS CAF also foresees resources for interactive login by a large number of CMS collaborators located at CERN, as an entry point for their day-by-day analysis. These resources will run on a separate partition in order to protect the high-priority use-cases described above. While the CMS CAF represents only a modest fraction of the overall CMS resources on the WLCG GRID, an appropriately sized user-support service needs to be provided. We will describe the building, commissioning and operation of the CMS CAF during the year 2008. The facility was heavily and routinely used by almost 250 users during multiple commissioning and data challenge periods. It reached a CPU capacity of 1.4MSI2K and a disk capacity at the Peta byte scale. In particular, we will focus on the performances in terms of networking, disk access and job efficiency and extrapolate prospects for the upcoming LHC first year data taking. We will also present the experience gained and the limitations observed in operating such a large facility, in which well controlled workflows are combined with more chaotic type analysis by a large number of physicists.

  3. CMS : An exceptional load for an exceptional work site

    CERN Multimedia

    2001-01-01

    Components of the CMS vacuum tank have been delivered to the detector assembly site at Cessy. The complete inner shell was delivered to CERN by special convoy while the outer shell is being assembled in situ. The convoy transporting the inner shell of the CMS vacuum tank took a week to cover the distance between Lons-le-Saunier and Point 5 at Cessy. Left: the convoy making its way down from the Col de la Faucille. With lights flashing, flanked by police outriders and with roads temporarily closed, the exceptional load that passed through the Pays de Gex on Monday 20 May was accorded the same VIP treatment as a leading state dignitary. But this time it was not the identity of the passenger but the exceptional size of the object being transported that made such arrangements necessary. A convoy of two lorries was needed to transport the load, an enormous 13-metre long, 6 metre diameter cylinder weighing 120 tonnes. It took a week to cover the 120 kilometres between Lons-le-Saunier and the assembly site for...

  4. 75 FR 73166 - Publication of the Tier 2 Tax Rates

    Science.gov (United States)

    2010-11-29

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY: Internal Revenue Service, Treasury. ACTION: Notice. SUMMARY: Publication of the tier 2 tax rates for...). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of funding...

  5. 76 FR 71623 - Publication of the Tier 2 Tax Rates

    Science.gov (United States)

    2011-11-18

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice. SUMMARY: Publication of the tier 2 tax rates for...). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of funding...

  6. 78 FR 71039 - Publication of the Tier 2 Tax Rates

    Science.gov (United States)

    2013-11-27

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice. SUMMARY: Publication of the tier 2 tax rates for...). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of funding...

  7. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual reviews are posted.   CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat a...

  8. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the natu...

  9. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the natur...

  10. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ Management- CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. Management - CB - MB - FB Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2007 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the nature of empl...

  11. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ Management- CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. Management - CB - MB - FB Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2007 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the nature of employment and ...

  12. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the na...

  13. Integrating Amazon EC2 with the CMS Production Framework

    CERN Document Server

    Melo, Andrew Malone

    2011-01-01

    As cloud middleware and cloud providers have become more robust, various experiments with experience in Grid submission have begun to investigate the possibility of taking previously Grid-enabled applications and making them compatible with Cloud Computing. Successful implementation will allow for dynamic scaling of the available hardware resources, providing access to peak-load handling capabilities and possibly resulting in lower costs to the experiment. Here we discuss current work within the CMS collaboration at the LHC to perform computation on EC2, for both production and analysis use-cases. We also discuss break-even points between dedicated and cloud resources using real-world costs derived from a CMS site.
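
    The break-even comparison mentioned above amounts to comparing the cost per delivered core-hour of owned hardware with the hourly price of rented cores. The figures below are placeholders for illustration only, not the real-world costs derived in this work:

        # Purely illustrative arithmetic for a dedicated-vs-cloud break-even comparison.
        # All cost figures are placeholders, not the costs derived from a CMS site.
        def dedicated_cost_per_core_hour(capex_per_core=150.0, lifetime_years=4,
                                         opex_per_core_year=25.0, utilisation=0.85):
            hours = lifetime_years * 365 * 24 * utilisation
            return (capex_per_core + opex_per_core_year * lifetime_years) / hours

        def breakeven_utilisation(cloud_cost_per_core_hour=0.045):
            """Utilisation below which renting cloud cores becomes cheaper."""
            full = dedicated_cost_per_core_hour(utilisation=1.0)
            return full / cloud_cost_per_core_hour

        if __name__ == "__main__":
            print(f"dedicated: {dedicated_cost_per_core_hour():.4f} $/core-hour")
            print(f"cloud becomes cheaper below {breakeven_utilisation():.0%} utilisation")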

  14. Integrating Amazon EC2 with the CMS production framework

    International Nuclear Information System (INIS)

    Melo, Andrew; Sheldon, Paul

    2012-01-01

    As cloud middleware and cloud providers have become more robust, various experiments with experience in Grid submission have begun to investigate the possibility of taking previously Grid-enabled applications and making them compatible with Cloud Computing. Successful implementation will allow for dynamic scaling of the available hardware resources, providing access to peak-load handling capabilities and possibly resulting in lower costs to the experiment. Here we discuss current work within the CMS collaboration at the LHC to perform computation on EC2, for both production and analysis use-cases. We also discuss break-even points between dedicated and cloud resources using real-world costs derived from a CMS site.

  15. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the ICMS Web site. The following items can be found on: http://cms.cern.ch/iCMS Management – CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. Management – CB – MB – FB Agendas and minutes are accessible to CMS members through Indico. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2008 Annual Reviews are posted in Indico. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral student upon completion of their theses.  Therefore it is requested that Ph.D students inform the CMS Secretariat about the nature of employment and name of their first employer. The Notes, Conference Reports and Theses published si...

  16. Comprehensive Monitoring for Heterogeneous Geographically Distributed Storage

    Energy Technology Data Exchange (ETDEWEB)

    Ratnikova, N. [Fermilab; Karavakis, E. [CERN; Lammel, S. [Fermilab; Wildish, T. [Princeton U.

    2015-12-23

    Storage capacity at CMS Tier-1 and Tier-2 sites reached over 100 petabytes in 2014, and will be substantially increased during Run 2 data taking. The allocation of storage for individual users' analysis data, which is not accounted for as centrally managed storage space, will be increased to up to 40%. For comprehensive tracking and monitoring of the storage utilization across all participating sites, CMS developed a space monitoring system, which provides a central view of the geographically dispersed heterogeneous storage systems. The first prototype was deployed at pilot sites in summer 2014, and has been substantially reworked since then. In this paper we discuss the functionality and our experience of system deployment and operation on the full CMS scale.
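
    The central view such a space monitoring system provides can be illustrated with a toy aggregation over per-site usage records; the record layout and the numbers are invented, and only the kind of summary (totals and user-space fraction per site) follows the description above:

        # Toy aggregation over per-site storage records. The record layout and numbers
        # are invented; the point is only the kind of central view described above.
        from collections import defaultdict

        records = [  # site, storage area, used space in TB (made-up snapshot)
            ("T1_DE_KIT",  "central", 9200.0), ("T1_DE_KIT",  "user", 1100.0),
            ("T2_US_UCSD", "central", 2400.0), ("T2_US_UCSD", "user", 1500.0),
            ("T2_IT_Pisa", "central", 1800.0), ("T2_IT_Pisa", "user",  900.0),
        ]

        per_site = defaultdict(lambda: defaultdict(float))
        for site, area, used_tb in records:
            per_site[site][area] += used_tb

        total = sum(sum(areas.values()) for areas in per_site.values())
        print(f"total used: {total/1000:.1f} PB")
        for site, areas in sorted(per_site.items()):
            site_total = sum(areas.values())
            user_frac = areas.get("user", 0.0) / site_total
            print(f"{site:12s} {site_total:8.0f} TB  user space: {user_frac:.0%}")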

  17. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    CERN Document Server

    Van der Ster, D; Medrano Llamas, R; Legger, F; Sciabà, A; Sciacca, G; Úbeda García, M

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion p...

  18. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion ...

  19. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    International Nuclear Information System (INIS)

    Elmsheuser, Johannes; Legger, Federica; Llamas, Ramón Medrano; Sciabà, Andrea; García, Mario Úbeda; Ster, Daniel van der; Sciacca, Gianfranco

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments’ grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look to future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).

  20. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    Science.gov (United States)

    Elmsheuser, Johannes; Medrano Llamas, Ramón; Legger, Federica; Sciabà, Andrea; Sciacca, Gianfranco; Úbeda García, Mario; van der Ster, Daniel

    2012-12-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments’ grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look to future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).
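
    An automatic site-exclusion policy of the general kind discussed above can be sketched as a threshold on the recent functional-test success rate per site; the thresholds, the minimum sample size and the result format are invented for illustration and do not reproduce the actual ATLAS algorithm:

        # Sketch of an automatic site-exclusion policy based on recent functional-test
        # results. Thresholds, window and result format are invented for illustration.
        from collections import defaultdict

        def site_efficiency(results):
            """results: iterable of (site, ok_bool) for recent functional test jobs."""
            stats = defaultdict(lambda: [0, 0])          # site -> [ok, total]
            for site, ok in results:
                stats[site][0] += int(ok)
                stats[site][1] += 1
            return {s: ok / tot for s, (ok, tot) in stats.items()}

        def sites_to_exclude(results, min_eff=0.8, min_jobs=20):
            counts = defaultdict(int)
            for site, _ in results:
                counts[site] += 1
            eff = site_efficiency(results)
            return [s for s, e in eff.items() if counts[s] >= min_jobs and e < min_eff]

        if __name__ == "__main__":
            import random
            random.seed(0)
            fake = [("SITE_A", random.random() > 0.05) for _ in range(50)] + \
                   [("SITE_B", random.random() > 0.40) for _ in range(50)]
            print("exclude:", sites_to_exclude(fake))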

  1. Analysis facility infrastructure (Tier-3) for ATLAS experiment

    International Nuclear Information System (INIS)

    Gonzalez de la Hoz, S.; March, L.; Ros, E.; Sanchez, J.; Amoros, G.; Fassi, F.; Fernandez, A.; Kaci, M.; Lamas, A.; Salt, J.

    2008-01-01

    In the ATLAS computing model the tiered hierarchy ranged from the Tier-0 (CERN) down to desktops or workstations (Tier-3). The focus on defining the roles of each tiered component has evolved with the initial emphasis on the Tier-0 and Tier-1 definition and roles. The various LHC (Large Hadron Collider) projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2's (Regional centers) as part of their projects. Tier-3 centres, on the other hand, have been defined as whatever an institution could construct to support their Physics goals using institutional and otherwise leveraged resources and therefore have not been considered to be part of the official ATLAS computing resources. However, Tier-3 centres are going to exist and will have implications on how the computing model should support ATLAS physicists. Tier-3 users will want to access LHC data and simulations and will want to enable their resources to support their analysis and simulation work. This document will define how IFIC (Instituto de Fisica Corpuscular de Valencia), after discussing with the ATLAS Tier-3 task force, should interact with the ATLAS computing model, detail the conditions under which Tier-3 centres can expect some level of support and set reasonable expectations for the scope and support of ATLAS Tier-3 sites. (orig.)

  2. CMS 2006 - CMS France days; CMS 2006 les journees CMS FRANCE

    Energy Technology Data Exchange (ETDEWEB)

    Huss, D.; Dobrzynski, L.; Virdee, J.; Boudoule, G.; Fontaine, J.C.; Faure, J.L.; Paganini, P.; Mathez, H.; Gross, L.; Charlot, C.; Trunov, A.; Patois, Y.; Busson, P.; Maire, M.; Berthon, U.; Todorov, T.; Beaudette, F.; Sirois, Y.; Baffioni, S.; Beauceron, S.; Delmeire, E.; Agram, J.L.; Goerlach, U.; Mangeol, D.; Salerno, R.; Bloch, D.; Lassila-Perini, K.; Blaha, J.; Drobychev, G.; Gras, P.; Hagenauer, M.; Denegri, D.; Lounis, A.; Faccio, F.; Lecoq, J

    2006-07-01

    These CMS talks give the opportunity for all the teams working on the CMS (Compact Muon Solenoid) project to present the status of their work and to exchange ideas. Five sessions have been organized: 1) CMS status and perspectives, 2) contributions of the different laboratories, 3) software and computation, 4) physics with CMS (particularly the search for the Higgs boson), and 5) electronics needs. This document gathers the slides of the presentations.

  3. Recombination Events Involving the atp9 Gene Are Associated with Male Sterility of CMS PET2 in Sunflower.

    Science.gov (United States)

    Reddemann, Antje; Horn, Renate

    2018-03-11

    Cytoplasmic male sterility (CMS) systems represent ideal mutants to study the role of mitochondria in pollen development. In sunflower, CMS PET2 also has the potential to become an alternative CMS source for commercial sunflower hybrid breeding. CMS PET2 originates, like CMS PET1, from an interspecific cross of H. petiolaris and H. annuus, but results in a different CMS mechanism. Southern analyses revealed differences for atp6, atp9 and cob between CMS PET2, CMS PET1 and the male-fertile line HA89. A second identical copy of atp6 was present on an additional CMS PET2-specific fragment. In addition, the atp9 gene was duplicated. However, this duplication was followed by an insertion of 271 bp of unknown origin in the 5' coding region of the atp9 gene in CMS PET2, which led to the creation of two unique open reading frames, orf288 and orf231. The first 53 bp of orf288 are identical to the 5' end of atp9. Orf231 consists, apart from the first 3 bp, which are part of the 271-bp insertion, of the last 228 bp of atp9. These CMS PET2-specific orfs are co-transcribed. All 11 editing sites of the atp9 gene present in orf231 are fully edited. The anther-specific reduction of the co-transcript in fertility-restored hybrids supports their involvement in the male sterility of CMS PET2.

  4. Commissioning the CMS Alignment and Calibration Framework

    CERN Document Server

    Futyan, David

    2009-01-01

    The CMS experiment has developed a powerful framework to ensure the precise and prompt alignment and calibration of its components, which is a major prerequisite to achieve the optimal performance for physics analysis. The prompt alignment and calibration strategy harnesses computing resources both at the Tier-0 site and the CERN Analysis Facility (CAF) to ensure fast turnaround for updating the corresponding database payloads. An essential element is the creation of dedicated data streams concentrating the specific event information required by the various alignment and calibration workflows. The resulting low latency is required for feeding the resulting constants into the prompt reconstruction process, which is essential for achieving swift physics analysis of the LHC data. This report discusses the implementation and the computational aspects of the alignment and calibration framework. Recent commissioning campaigns with cosmic muons, beam halo and simulated data have been used to gain detailed experience...

  5. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    International Nuclear Information System (INIS)

    Donvito, Giacinto; Italiano, Alessandro; Salomoni, Davide

    2014-01-01

    In this work the testing activities that were carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center are shown. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing was focused both on verifying the functionalities of the batch system and the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and ALICE, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, like serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues or, in general, on other resources are then described. A particular SLURM feature we also verified is event triggers, which are useful to configure specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post
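
    As an illustration of the kind of policy configuration described above, the following is a minimal sketch (not taken from the paper) of driving SLURM's sacctmgr accounting tool from Python to create a QoS and set fair-share and per-user job limits. The account and QoS names are hypothetical, and the sketch assumes a SLURM installation with accounting (slurmdbd) enabled.

    import subprocess

    def sacctmgr(*args):
        """Run sacctmgr non-interactively; '-i' answers 'yes' to its prompts."""
        cmd = ["sacctmgr", "-i", *args]
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Hypothetical QoS for production jobs: priority boost plus a per-user job cap
    sacctmgr("add", "qos", "name=cms_prod")
    sacctmgr("modify", "qos", "where", "name=cms_prod",
             "set", "Priority=100", "MaxJobsPerUser=500")

    # Hierarchical fair share: give the (hypothetical) 'cms' account a larger share
    sacctmgr("modify", "account", "where", "name=cms", "set", "fairshare=40")

    # Attach the QoS to the account so its users may request it
    sacctmgr("modify", "account", "where", "name=cms", "set", "qos+=cms_prod")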

  6. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    Science.gov (United States)

    Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro

    2014-06-01

    In this work the testing activities that were carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center are shown. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing was focused both on verifying the functionalities of the batch system and the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and ALICE, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, like serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues or, in general, on other resources are then described. A particular SLURM feature we also verified is event triggers, which are useful to configure specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post

  7. Towards higher reliability of CMS computing facilities

    International Nuclear Information System (INIS)

    Bagliesi, G; Bloom, K; Brew, C; Flix, J; Kreuzer, P; Sciabà, A

    2012-01-01

    The CMS experiment has adopted a computing system where resources are distributed worldwide in more than 50 sites. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and its capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, measuring the reliability of sites when running CMS activities, and providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with the description of the monitoring tools and their inclusion into the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact on improving the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites on which to conduct workflows, in order to maximize workflow efficiency. The performance of the sites against these tests during the first years of LHC running is reviewed as well.
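
    As a purely illustrative sketch (not the actual Site Readiness code), the selection logic described above can be thought of as aggregating daily test outcomes per site and keeping only the sites above a readiness threshold; the site names and threshold below are made up.

    from collections import defaultdict

    def select_ready_sites(test_results, min_fraction=0.8):
        """test_results: iterable of (site, passed) tuples, one per daily test."""
        passed = defaultdict(int)
        total = defaultdict(int)
        for site, ok in test_results:
            total[site] += 1
            passed[site] += 1 if ok else 0
        return sorted(site for site in total
                      if passed[site] / total[site] >= min_fraction)

    # Hypothetical results from a few days of functional tests
    results = [("T2_AA_Example", True), ("T2_AA_Example", True),
               ("T2_BB_Example", False), ("T2_BB_Example", True)]
    print(select_ready_sites(results))   # -> ['T2_AA_Example']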

  8. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    CERN Document Server

    Molina-Perez, Jorge Amando

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator on duty at CERN. This dedicated 24/7 computing shift personnel contributes to detecting and reacting in a timely manner to any unexpected error and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is explo...
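
    For illustration only (not the Lemon or SLS implementation), the out-of-range alarming behaviour described above boils down to comparing a collected metric against configured bounds and raising an alarm when it falls outside them; the metric names and thresholds below are invented.

    # Hypothetical thresholds per metric: (lower bound, upper bound)
    THRESHOLDS = {
        "cpu_load": (0.0, 20.0),
        "disk_used_fraction": (0.0, 0.95),
    }

    def check_metrics(samples):
        """samples: dict of metric name -> latest value read from a sensor."""
        alarms = []
        for name, value in samples.items():
            low, high = THRESHOLDS.get(name, (float("-inf"), float("inf")))
            if not (low <= value <= high):
                alarms.append("%s=%s outside [%s, %s]" % (name, value, low, high))
        return alarms

    print(check_metrics({"cpu_load": 35.2, "disk_used_fraction": 0.4}))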

  9. Recombination Events Involving the atp9 Gene Are Associated with Male Sterility of CMS PET2 in Sunflower

    Directory of Open Access Journals (Sweden)

    Antje Reddemann

    2018-03-01

    Cytoplasmic male sterility (CMS) systems represent ideal mutants to study the role of mitochondria in pollen development. In sunflower, CMS PET2 also has the potential to become an alternative CMS source for commercial sunflower hybrid breeding. CMS PET2 originates, like CMS PET1, from an interspecific cross of H. petiolaris and H. annuus, but results in a different CMS mechanism. Southern analyses revealed differences for atp6, atp9 and cob between CMS PET2, CMS PET1 and the male-fertile line HA89. A second identical copy of atp6 was present on an additional CMS PET2-specific fragment. In addition, the atp9 gene was duplicated. However, this duplication was followed by an insertion of 271 bp of unknown origin in the 5′ coding region of the atp9 gene in CMS PET2, which led to the creation of two unique open reading frames, orf288 and orf231. The first 53 bp of orf288 are identical to the 5′ end of atp9. Apart from the first 3 bp, which are part of the 271-bp insertion, orf231 consists of the last 228 bp of atp9. These CMS PET2-specific orfs are co-transcribed. All 11 editing sites of the atp9 gene present in orf231 are fully edited. The anther-specific reduction of the co-transcript in fertility-restored hybrids supports its involvement in the male sterility of CMS PET2.

  10. A new era for central processing and production in CMS

    International Nuclear Information System (INIS)

    Fajardo, E; Gutsche, O; Foulkes, S; Linacre, J; Spinoso, V; Lahiff, A; Gomez-Ceballos, G; Klute, M; Mohapatra, A

    2012-01-01

    The goal for CMS computing is to maximise the throughput of simulated event generation while also processing event data generated by the detector as quickly and reliably as possible. To maintain this achievement as the quantity of events increases, CMS computing has migrated at the Tier 1 level from its old production framework, ProdAgent, to a new one, WMAgent. The WMAgent framework offers improved processing efficiency and increased resource usage as well as a reduction in operational manpower. In addition to the challenges encountered during the design of the WMAgent framework, several operational issues have arisen during its commissioning. The largest operational challenges were in the usage and monitoring of resources, mainly a result of a change in the way work is allocated. Instead of work being assigned to operators, all work is centrally injected and managed in the Request Manager system, and the task of the operators has changed from running individual workflows to monitoring the global workload. In this report we present how we tackled some of the operational challenges, and how we benefitted from the lessons learned in the commissioning of the WMAgent framework at the Tier 2 level in late 2011. As case studies, we will show how the WMAgent system performed during some of the large data reprocessing and Monte Carlo simulation campaigns.
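
    As a hedged sketch of the central-injection idea described above (not the actual Request Manager or WMAgent code), a workflow request can be pictured as a small record injected into a global queue from which agents later acquire work; all field names are hypothetical.

    import queue

    # A local queue standing in for the central request/work-queue service
    global_queue = queue.Queue()

    def inject_request(request_name, task_type, n_events):
        """Central injection: operators no longer run workflows by hand."""
        global_queue.put({"request": request_name,
                          "type": task_type,       # e.g. "MonteCarlo" or "ReReco"
                          "events": n_events,
                          "status": "new"})

    def acquire_work():
        """An agent pulls the next available request from the global queue."""
        req = global_queue.get()
        req["status"] = "acquired"
        return req

    inject_request("MC_Spring_Example_v1", "MonteCarlo", 1000000)
    print(acquire_work())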

  11. 77 FR 71481 - Publication of the Tier 2 Tax Rates

    Science.gov (United States)

    2012-11-30

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY... tax rates for calendar year 2013 as required by section 3241(d) of the Internal Revenue Code (26 U.S.C. 3241). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of...

  12. Commissioning the CMS alignment and calibration framework

    International Nuclear Information System (INIS)

    Futyan, David

    2010-01-01

    The CMS experiment has developed a powerful framework to ensure the precise and prompt alignment and calibration of its components, which is a major prerequisite to achieve the optimal performance for physics analysis. The prompt alignment and calibration strategy harnesses computing resources both at the Tier-0 site and the CERN Analysis Facility (CAF) to ensure fast turnaround for updating the corresponding database payloads. An essential element is the creation of dedicated data streams concentrating the specific event information required by the various alignment and calibration workflows. The resulting low latency is required for feeding the resulting constants into the prompt reconstruction process, which is essential for achieving swift physics analysis of the LHC data. This report discusses the implementation and the computational aspects of the alignment and calibration framework. Recent commissioning campaigns with cosmic muons, beam halo and simulated data have been used to gain detailed experience with this framework, and results of this validation are reported.

  13. Tiered Storage For LHC

    CERN Multimedia

    CERN. Geneva; Hanushevsky, Andrew

    2012-01-01

    For more than a year, the ATLAS Western Tier 2 (WT2) at the SLAC National Accelerator Laboratory has been successfully operating a two-tiered storage system based on Xrootd's flexible cross-cluster data placement framework, the File Residency Manager. The architecture allows WT2 to provide both high-performance storage at the higher tier for ATLAS analysis jobs and large, low-cost disk capacity at the lower tier. Data automatically moves between the two storage tiers based on the needs of analysis jobs and is completely transparent to the jobs.
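
    A minimal sketch of the two-tier idea described above (not the File Residency Manager itself): files are served from the fast tier when present, staged in from the capacity tier on demand, and the least recently used files are evicted when the fast tier fills up. Sizes and file names are hypothetical.

    from collections import OrderedDict

    class TwoTierCache:
        """Toy model of a small fast tier backed by a large capacity tier."""

        def __init__(self, fast_capacity_gb):
            self.capacity = fast_capacity_gb
            self.fast = OrderedDict()                # filename -> size, in LRU order

        def read(self, filename, size_gb):
            if filename in self.fast:
                self.fast.move_to_end(filename)      # hit: refresh LRU order
                return "served %s from fast tier" % filename
            # Miss: evict least recently used files until the new one fits
            while self.fast and sum(self.fast.values()) + size_gb > self.capacity:
                self.fast.popitem(last=False)
            self.fast[filename] = size_gb            # "stage in" from capacity tier
            return "staged %s from capacity tier" % filename

    cache = TwoTierCache(fast_capacity_gb=10)
    print(cache.read("AOD.0001.root", 6))    # staged from capacity tier
    print(cache.read("AOD.0002.root", 6))    # forces eviction of AOD.0001.root
    print(cache.read("AOD.0002.root", 6))    # now a fast-tier hit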

  14. Analysis Facility infrastructure (TIER3) for ATLAS High Energy physics experiment

    International Nuclear Information System (INIS)

    Gonzalez de la Hoz, S.; March, L.; Ros, E.; Sanchez, J.; Amoros, G.; Fassi, F.; Fernandez, A.; Kaci, M.; Lamas, A.; Salt, J.

    2007-01-01

    ATLAS project has been asked to define the scope and role of Tier-3 resources (facilities or centres) within the existing ATLAS computing model, activities and facilities. This document attempts to address these questions by describing Tier-3 resources generally, and their relationship to the ATLAS Software and Computing Project. Originally the tiered computing model came out of the MONARC work (see http://monarc.web.cern.ch/MONARC/) and was predicated upon the network being a scarce resource. In this model the tiered hierarchy ranged from the Tier-0 (CERN) down to the desktop or workstation (Tier 3). The focus on defining the roles of each tiered component has evolved, with the initial emphasis on the definition and roles of the Tier-0 (CERN) and Tier-1s (national centres). The various LHC projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2s (regional centres) as part of their projects. Tier-3s, on the other hand, have (implicitly and sometimes explicitly) been defined as whatever an institution could construct to support its physics goals using institutional and otherwise leveraged resources, and therefore have not been considered part of the official ATLAS Research Program computing resources nor under its control, meaning there is no formal MOU process to designate sites as Tier-3s and no formal control of the program over the Tier-3 resources. Tier-3s are the responsibility of individual institutions to define, fund, deploy and support. However, having noted this, we must also recognize that Tier-3s must exist and will have implications for how our computing model should support ATLAS physicists. Tier-3 users will want to access data and simulations and will want to enable their Tier-3 resources to support their analysis and simulation work. Tier-3s are an important resource for physicists to analyze LHC (Large Hadron Collider) data. This document will define how Tier-3s should best interact with the ATLAS computing model, detail the

  15. Evolution of the ATLAS data and computing model for a Tier2 in the EGI infrastructure

    CERN Document Server

    Fernández Casaní, A; The ATLAS collaboration; González de la Hoz, S; Salt Cairols, J; Fassi, F; Kaci, M; Lamas, A; Oliver, E; Sánchez, J; Sánchez, V

    2012-01-01

    Since the start of the LHC pp collisions in 2010, the ATLAS computing model has moved from a stricter design, where every Tier2 had a liaison with, and a network dependence on, a Tier1, to a more meshed approach where every cloud can be connected. Evolution of ATLAS data models requires changes in the ATLAS Tier2s policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model as this allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2s disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used more efficiently. In this way Tier1s and Tier2s are becoming more equivalent for t...

  16. CMS Space Monitoring

    Science.gov (United States)

    Ratnikova, N.; Huang, C.-H.; Sanchez-Hernandez, A.; Wildish, T.; Zhang, X.

    2014-06-01

    During the first LHC run, CMS stored about one hundred petabytes of data. Storage accounting and monitoring help to meet the challenges of storage management, such as efficient space utilization, fair share between users and groups and resource planning. We present a newly developed CMS space monitoring system based on the storage metadata dumps produced at the sites. The information extracted from the storage dumps is aggregated and uploaded to a central database. A web based data service is provided to retrieve the information for a given time interval and a range of sites, so it can be further aggregated and presented in the desired format. The system has been designed based on the analysis of CMS monitoring requirements and experiences of the other LHC experiments. In this paper, we demonstrate how the existing software components of the CMS data placement system, PhEDEx, have been re-used, dramatically reducing the development effort.
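
    To illustrate the aggregation step described above (a sketch only, not the production monitoring code), records parsed from the storage metadata dumps can be rolled up into the most recent total per site within a requested time window; the record layout and site names are assumed for the example.

    def latest_usage(records, t_start, t_end):
        """records: (site, timestamp, total_bytes) rows, one per storage dump;
        returns the most recent total per site within [t_start, t_end]."""
        latest = {}
        for site, ts, nbytes in records:
            if t_start <= ts <= t_end:
                if site not in latest or ts > latest[site][0]:
                    latest[site] = (ts, nbytes)
        return {site: nbytes for site, (ts, nbytes) in latest.items()}

    dumps = [("T2_AA_Example", 1400000000, 2 * 10**12),
             ("T2_AA_Example", 1400086400, 3 * 10**12),
             ("T2_BB_Example", 1400000000, 5 * 10**12)]
    print(latest_usage(dumps, 1400000000, 1400090000))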

  17. CMS Space Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ratnikova, N. [Fermilab; Huang, C.-H. [Fermilab; Sanchez-Hernandez, A. [CINVESTAV, IPN; Wildish, T. [Princeton U.; Zhang, X. [Beijing, Inst. High Energy Phys.

    2014-01-01

    During the first LHC run, CMS stored about one hundred petabytes of data. Storage accounting and monitoring help to meet the challenges of storage management, such as efficient space utilization, fair share between users and groups and resource planning. We present a newly developed CMS space monitoring system based on the storage metadata dumps produced at the sites. The information extracted from the storage dumps is aggregated and uploaded to a central database. A web based data service is provided to retrieve the information for a given time interval and a range of sites, so it can be further aggregated and presented in the desired format. The system has been designed based on the analysis of CMS monitoring requirements and experiences of the other LHC experiments. In this paper, we demonstrate how the existing software components of the CMS data placement system, PhEDEx, have been re-used, dramatically reducing the development effort.

  18. CMS 2006 - CMS France days

    International Nuclear Information System (INIS)

    Huss, D.; Dobrzynski, L.; Virdee, J.; Boudoule, G.; Fontaine, J.C.; Faure, J.L.; Paganini, P.; Mathez, H.; Gross, L.; Charlot, C.; Trunov, A.; Patois, Y.; Busson, P.; Maire, M.; Berthon, U.; Todorov, T.; Beaudette, F.; Sirois, Y.; Baffioni, S.; Beauceron, S.; Delmeire, E.; Agram, J.L.; Goerlach, U.; Mangeol, D.; Salerno, R.; Bloch, D.; Lassila-Perini, K.; Blaha, J.; Drobychev, G.; Gras, P.; Hagenauer, M.; Denegri, D.; Lounis, A.; Faccio, F.; Lecoq, J.

    2006-01-01

    These CMS talks give the opportunity for all the teams working on the CMS (Compact Muon Solenoid) project to present the status of their work and to exchange ideas. 5 sessions have been organized: 1) CMS status and perspectives, 2) contributions of the different laboratories, 3) software and computation, 4) physics with CMS (particularly the search for the Higgs boson), and 5) electronic needs. This document gathers the slides of the presentations.

  19. CMS Create #2 | 3-4 October | Register now!

    CERN Multimedia

    2016-01-01

    CMS Create brings together CERN members and students from IPAC Design Genève. The goal is to build a prototype exhibit illustrating what CMS does and how it does it. The exhibit will introduce the world of a particle physics detector to the general public, and to younger visitors in particular.    CMS Create, hosted by IdeaSquare, was first held in November 2015. There were 4 highly diverse teams made up of participants from many educational backgrounds and from 15 nationalities. 36% of these were women, a figure we hope will grow this year. The 25 participants were CMS physicists, computer scientists, engineers, other CMS collaborators and IPAC students. The 2015 winning exhibit is now permanently installed in the visitor reception centre at CMS Point 5, which was visited by 20,600 visitors during 2015. Are you creative and motivated to share your ideas?  Take part in CMS Create #2, meet with scientists and designers from all over the world and explain to CER...

  20. Evolution of the Atlas data and computing model for a Tier-2 in the EGI infrastructure

    CERN Document Server

    Fernandez, A; The ATLAS collaboration; AMOROS, G; VILLAPLANA, M; FASSI, F; KACI, M; LAMAS, A; OLIVER, E; SALT, J; SANCHEZ, J; SANCHEZ, V

    2012-01-01

    In recent years the ATLAS computing model has moved from a stricter design, where every Tier2 had a liaison with, and a network dependence on, a Tier1, to a more meshed approach where every cloud can be connected. Evolution of ATLAS data models requires changes in the ATLAS Tier2s policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model as this allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2s disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used more effic...

  1. Cleavage-Independent HIV-1 Trimers From CHO Cell Lines Elicit Robust Autologous Tier 2 Neutralizing Antibodies

    Directory of Open Access Journals (Sweden)

    Shridhar Bale

    2018-05-01

    Native flexibly linked (NFL) HIV-1 envelope glycoprotein (Env) trimers are cleavage-independent and display a native-like, well-folded conformation that preferentially displays broadly neutralizing determinants. The NFL platform simplifies large-scale production of Env by eliminating the need to co-transfect the precursor-cleaving protease furin, which is required by the cleavage-dependent SOSIP trimers. Here, we report the development of a CHO-M cell line that expressed BG505 NFL trimers at a high level of homogeneity and yields of ~1.8 g/l. BG505 NFL trimers purified by single-step lectin-affinity chromatography displayed a native-like closed structure, efficient recognition by trimer-preferring bNAbs, no recognition by non-neutralizing CD4 binding site-directed and V3-directed antibodies, long-term stability, and proper N-glycan processing. Following negative selection, formulation in ISCOMATRIX adjuvant and inoculation into rabbits, the trimers rapidly elicited potent autologous tier 2 neutralizing antibodies. These antibodies targeted the N-glycan “hole” naturally present on the BG505 Env proximal to residues at positions 230, 241, and 289. The BG505 NFL trimers, which did not expose V3 in vitro, elicited low-to-no tier 1 virus neutralization in vivo, indicating that they remained intact during the immunization process, not exposing V3. In addition, BG505 NFL and BG505 SOSIP trimers expressed from 293F cells, when formulated in Adjuplex adjuvant, elicited equivalent BG505 tier 2 autologous neutralizing titers. These titers were lower in potency when compared to the titers elicited by CHO-M cell-derived trimers. In addition, increased neutralization of tier 1 viruses was detected. Taken together, these data indicate that both adjuvant and cell-type expression can affect the elicitation of tier 2 and tier 1 neutralizing responses in vivo.

  2. CRAB3: Establishing a new generation of services for distributed analysis at CMS

    International Nuclear Information System (INIS)

    Cinquilli, M; Spiga, D; Konstantinov, P; Mascheroni, M; Grandi, C; Hernàndez, J M; Riahi, H; Vaandering, E

    2012-01-01

    In CMS Computing the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier0, Tier1, production, analysis) share a common core with long term maintainability as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves the workflow automation and simplifies maintainability. In particular, we will highlight the impact of the new design on daily operations.
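
    As a hedged illustration of the REST-based submission described above (not the actual CRAB3 client or server API), a user task could be submitted as a JSON document to a REST endpoint and its status polled afterwards; the URL and field names are invented for the example.

    import requests

    BASE = "https://crab-server.example.org/api"       # hypothetical endpoint

    def submit_task(dataset, pset, n_jobs):
        task = {"inputDataset": dataset, "psetName": pset, "nJobs": n_jobs}
        reply = requests.post(BASE + "/task", json=task, timeout=30)
        reply.raise_for_status()
        return reply.json()["taskName"]                # assumed reply format

    def task_status(task_name):
        reply = requests.get(BASE + "/task/" + task_name, timeout=30)
        reply.raise_for_status()
        return reply.json()["status"]

    if __name__ == "__main__":
        name = submit_task("/ExampleDataset/Example-v1/AOD", "analysis_cfg.py", 50)
        print(name, task_status(name))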

  3. CRAB3: Establishing a new generation of services for distributed analysis at CMS

    Science.gov (United States)

    Cinquilli, M.; Spiga, D.; Grandi, C.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Riahi, H.; Vaandering, E.

    2012-12-01

    In CMS Computing the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier0, Tier1, production, analysis) share a common core with long term maintainability as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves the workflow automation and simplifies maintainability. In particular, we will highlight the impact of the new design on daily operations.

  4. CRAB3: Establishing a new generation of services for distributed analysis at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Cinquilli, M. [CERN; Spiga, D. [CERN; Grandi, C. [INFN, Bologna; Hernandez, J. M. [Madrid, CIEMAT; Konstantinov, P. [CERN; Mascheroni, M. [CERN; Riahi, H. [INFN, Perugia; Vaandering, E. [Fermilab

    2012-01-01

    In CMS Computing the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier0, Tier1, production, analysis) share a common core with long term maintainability as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves the workflow automation and simplifies maintainability. In particular, we will highlight the impact of the new design on daily operations.

  5. An optimization of the ALICE XRootD storage cluster at the Tier-2 site in Czech Republic

    International Nuclear Information System (INIS)

    Adamova, D; Horky, J

    2012-01-01

    ALICE, as well as the other experiments at the CERN LHC, has been building a distributed data management infrastructure since 2002. Experience gained during years of operations with different types of storage managers deployed over this infrastructure has shown that the most adequate storage solution for ALICE is the native XRootD manager developed within a CERN-SLAC collaboration. The XRootD storage clusters exhibit higher stability and availability in comparison with other storage solutions and demonstrate a number of other advantages, like support of high speed WAN data access or no need for maintaining complex databases. Two of the operational characteristics of XRootD data servers are a relatively high number of open sockets and a high Unix load. In this article, we describe our experience with the tuning/optimization of the machines hosting the XRootD servers, which are part of the ALICE storage cluster at the Tier-2 WLCG site in Prague, Czech Republic. The optimization procedure, in addition to boosting the read/write performance of the servers, also resulted in a reduction of the Unix load.
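
    Purely as an illustration of the two quantities mentioned above (open sockets and Unix load), the following sketch samples them on a Unix server with the psutil package; the process name and warning thresholds are assumptions, and the actual tuning discussed in the paper was done at the operating-system level.

    import os
    import psutil

    def xrootd_health(max_sockets=20000, max_load=32.0, proc_name="xrootd"):
        """Count inet sockets owned by xrootd processes and read the 1-minute load."""
        sockets = 0
        for proc in psutil.process_iter(["name"]):
            try:
                if proc.info["name"] == proc_name:
                    sockets += len(proc.connections(kind="inet"))
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
        load1, _, _ = os.getloadavg()          # Unix only
        warnings = []
        if sockets > max_sockets:
            warnings.append("%d open sockets" % sockets)
        if load1 > max_load:
            warnings.append("load %.1f" % load1)
        return sockets, load1, warnings

    print(xrootd_health())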

  6. Challenging Data Management in CMS Computing with Network-aware Systems

    CERN Document Server

    Bonacorsi, Daniele

    2013-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess areas of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of “Intelligent Network Services”, including also bandwidt...

  7. The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

    International Nuclear Information System (INIS)

    González de la Hoz, S

    2012-01-01

    Originally the ATLAS Computing and Data Distribution model assumed that the Tier-2s should keep on disk collectively at least one copy of all “active” AOD and DPD datasets. Evolution of the ATLAS Computing and Data model requires changes in the ATLAS Tier-2s policy for data replication, dynamic data caching and remote data access. Tier-2 operations take place completely asynchronously with respect to data taking. Tier-2s do simulation and user analysis. Large-scale reprocessing jobs on real data are at first taking place mostly at Tier-1s but will progressively be shared with Tier-2s as well. The availability of disk space at Tier-2s is extremely important in the ATLAS Computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier-2s disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier-2s are going to be used more efficiently. In this way Tier-1s and Tier-2s are becoming more equivalent from the network point of view, and the Tier-1/Tier-2 hierarchy is less strict. This paper presents the usage of Tier-2 resources in different Grid activities, the caching of data at Tier-2s, and their role in the analysis in the new ATLAS Computing and Data model.

  8. Challenging data and workload management in CMS Computing with network-aware systems

    Science.gov (United States)

    D, Bonacorsi; T, Wildish

    2014-06-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess areas of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of Intelligent Network Services, including also bandwidth on demand concepts. In this paper, we will review the work done in CMS on this, and the next steps.

  9. Challenging data and workload management in CMS Computing with network-aware systems

    International Nuclear Information System (INIS)

    Bonacorsi D; Wildish T

    2014-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess areas of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of Intelligent Network Services, including also bandwidth on demand concepts. In this paper, we will review the work done in CMS on this, and the next steps.

  10. Challenging data and workload management in CMS Computing with network-aware systems

    CERN Document Server

    Wildish, Anthony

    2014-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess areas of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of "Intelligent Network Services", including also bandwidth on demand concepts. In this paper, we will ...

  11. Distributed Analysis in CMS

    CERN Document Server

    Fanfani, Alessandra; Sanches, Jose Afonso; Andreeva, Julia; Bagliesi, Giuseppe; Bauerdick, Lothar; Belforte, Stefano; Bittencourt Sampaio, Patricia; Bloom, Ken; Blumenfeld, Barry; Bonacorsi, Daniele; Brew, Chris; Calloni, Marco; Cesini, Daniele; Cinquilli, Mattia; Codispoti, Giuseppe; D'Hondt, Jorgen; Dong, Liang; Dongiovanni, Danilo; Donvito, Giacinto; Dykstra, David; Edelmann, Erik; Egeland, Ricky; Elmer, Peter; Eulisse, Giulio; Evans, Dave; Fanzago, Federica; Farina, Fabio; Feichtinger, Derek; Fisk, Ian; Flix, Josep; Grandi, Claudio; Guo, Yuyi; Happonen, Kalle; Hernandez, Jose M; Huang, Chih-Hao; Kang, Kejing; Karavakis, Edward; Kasemann, Matthias; Kavka, Carlos; Khan, Akram; Kim, Bockjoo; Klem, Jukka; Koivumaki, Jesper; Kress, Thomas; Kreuzer, Peter; Kurca, Tibor; Kuznetsov, Valentin; Lacaprara, Stefano; Lassila-Perini, Kati; Letts, James; Linden, Tomas; Lueking, Lee; Maes, Joris; Magini, Nicolo; Maier, Gerhild; McBride, Patricia; Metson, Simon; Miccio, Vincenzo; Padhi, Sanjay; Pi, Haifeng; Riahi, Hassen; Riley, Daniel; Rossman, Paul; Saiz, Pablo; Sartirana, Andrea; Sciaba, Andrea; Sekhri, Vijay; Spiga, Daniele; Tuura, Lassi; Vaandering, Eric; Vanelderen, Lukas; Van Mulders, Petra; Vedaee, Aresh; Villella, Ilaria; Wicklund, Eric; Wildish, Tony; Wissing, Christoph; Wurthwein, Frank

    2009-01-01

    The CMS experiment expects to manage several Pbytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis to support a wide community with thousands of potential users. This represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of users. An overview of the computing architecture, the software tools and the distributed infrastructure is reported. Summaries of the experience in establishing efficient and scalable operations to get prepared for CMS distributed analysis are presented, followed by the user experience in their current analysis activities.

  12. CMS Centre at CERN

    CERN Multimedia

    2007-01-01

    A new "CMS Centre" is being established on the CERN Meyrin site by the CMS collaboration. It will be a focal point for communications, where physicists will work together on data quality monitoring, detector calibration, offline analysis of physics events, and CMS computing operations. Construction of the CMS Centre begins in the historic Proton Synchrotron (PS) control room. The historic Proton Synchrotron (PS) control room, Opened by Niels Bohr in 1960, will be reused by CMS to built its control centre. TThe LHC@FNAL Centre, in operation at Fermilab in the US, will work very closely with the CMS Centre, as well as the CERN Control Centre. (Photo Fermilab)The historic Proton Synchrotron (PS) control room is about to start a new life. Opened by Niels Bohr in 1960, the room will be reused by CMS to built its control centre. When finished, it will resemble the CERN Contro...

  13. Registration of cytoplasmic male-sterile oilseed sunflower genetic stocks CMS GIG2 and CMS GIG2-RV, and fertility restoration lines RF GIG2-MAX 1631 and RF GIG2-MAX 1631-RV

    Science.gov (United States)

    Two cytoplasmic male-sterile (CMS) oilseed sunflower (Helianthus annuus L.) genetic stocks, CMS GIG2 (Reg. No. xxx, PI xxxx), and CMS GIG2-RV (Reg. No. xxx, PI xxxx), and corresponding fertility restoration lines RF GIG2-MAX 1631 (Reg. No. xxx, PI xxxx) and RF GIG2-MAX 1631-RV (Reg. No. xxx, PI xxx...

  14. A Step-by-Step Guide to Tier 2 Behavioral Progress Monitoring

    Science.gov (United States)

    Bruhn, Allison L.; McDaniel, Sara C.; Rila, Ashley; Estrapala, Sara

    2018-01-01

    Students who are at risk for or show low-intensity behavioral problems may need targeted, Tier 2 interventions. Often, Tier 2 problem-solving teams are charged with monitoring student responsiveness to intervention. This process may be difficult for those who are not trained in data collection and analysis procedures. To aid practitioners in these…

  15. Xrootd data access for LHC experiments at the INFN-CNAF Tier-1

    International Nuclear Information System (INIS)

    Gregori, Daniele; Prosperini, Andrea; Ricci, Pier Paolo; Sapunenko, Vladimir; Boccali, Tommaso; Noferini, Francesco; Vagnoni, Vincenzo

    2014-01-01

    The Mass Storage System installed at the INFN-CNAF Tier-1 is one of the biggest hierarchical storage facilities in Europe. It currently provides storage resources for about 12% of all LHC data, as well as for other experiments. The Grid Enabled Mass Storage System (GEMSS) is the current solution implemented at CNAF and is based on a custom integration between a high performance parallel file system (General Parallel File System, GPFS) and a tape management system for long-term storage on magnetic media (Tivoli Storage Manager, TSM). Data access for Grid users has been granted for several years by the Storage Resource Manager (StoRM), an implementation of the standard SRM interface, widely adopted within the WLCG community. The evolving requirements from the LHC experiments and other users are leading to the adoption of more flexible methods for accessing the storage. These include the implementation of the so-called storage federations, i.e. geographically distributed federations allowing direct file access to the federated storage between sites. A specific integration between GEMSS and Xrootd has been developed at CNAF to match the requirements of the CMS experiment. This was already implemented for the ALICE use case, using ad-hoc Xrootd modifications. The new developments for CMS have been validated and are already available in the official Xrootd builds. This integration is currently in production and appropriate large scale tests have been made. In this paper we present the Xrootd solutions adopted for ALICE, CMS, ATLAS and LHCb to increase the availability and optimize the overall performance.
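
    The federated access described above ultimately means that a client can open a file through a redirector without knowing which site serves it. A minimal sketch of such a read, assuming the XRootD Python bindings (pyxrootd) are installed; the redirector URL and file path are hypothetical, not CNAF's actual endpoints.

    from XRootD import client
    from XRootD.client.flags import OpenFlags

    url = "root://xrootd-redirector.example.org//store/example/file.root"

    f = client.File()
    status, _ = f.open(url, OpenFlags.READ)
    if status.ok:
        status, data = f.read(offset=0, size=1024)   # read the first kilobyte
        print("read", len(data), "bytes")
        f.close()
    else:
        print("open failed:", status.message)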

  16. Acute tier-1 and tier-2 effect assessment approaches in the EFSA Aquatic Guidance Document: are they sufficiently protective for insecticides?

    NARCIS (Netherlands)

    Wijngaarden, van R.P.A.; Maltby, L.; Brock, T.C.M.

    2015-01-01

    Background: The objective of this paper is to evaluate whether the acute tier-1 and tier-2 methods as proposed by the Aquatic Guidance Document recently published by the European Food Safety Authority (EFSA) are appropriate for deriving regulatory acceptable concentrations (RACs) for insecticides.

  17. Tier2 Submit Software

    Science.gov (United States)

    Download this tool for Windows or Mac, which helps facilities prepare a Tier II electronic chemical inventory report. The data can also be exported into the CAMEOfm (Computer-Aided Management of Emergency Operations) emergency planning software.

  18. Illustrative Example of Distributed Analysis in ATLAS Spanish Tier-2 and Tier-3 centers

    CERN Document Server

    Oliver, E; The ATLAS collaboration; González de la Hoz, S; Kaci, M; Lamas, A; Salt, J; Sánchez, J; Villaplana, M

    2011-01-01

    Data taking in ATLAS has been going on for more than one year. The necessity of a computing infrastructure for data storage, access by thousands of users and the processing of hundreds of millions of events has been confirmed in this period. Fortunately, this task has been managed by the Grid infrastructure and by the manpower that has also been developing specific Grid tools for the ATLAS community. An example of a physics analysis using this infrastructure, a search for the decay of a heavy resonance into a ttbar pair, is shown, concretely using the ATLAS Spanish Tier-2 and the IFIC Tier-3. At the moment, the ATLAS Distributed Computing group is working to improve the connectivity among centers in order to be ready for the foreseen increase in ATLAS activity in the next years.

  19. Distributed Analysis Experience using Ganga on an ATLAS Tier2 infrastructure

    International Nuclear Information System (INIS)

    Fassi, F.; Cabrera, S.; Vives, R.; Fernandez, A.; Gonzalez de la Hoz, S.; Sanchez, J.; March, L.; Salt, J.; Kaci, M.; Lamas, A.; Amoros, G.

    2007-01-01

    The ATLAS detector will explore the high-energy frontier of particle physics, collecting the proton-proton collisions delivered by the LHC (Large Hadron Collider). Starting in spring 2008, the LHC will produce more than 10 petabytes of data per year. The tiered hierarchy adopted for the computing model at the LHC is: Tier-0 (CERN), and Tier-1 and Tier-2 centres distributed around the world. The ATLAS Distributed Analysis (DA) system has the goal of enabling physicists to perform Grid-based analysis on distributed data using distributed computing resources. The IFIC Tier-2 facility is participating in several aspects of DA. In support of the ATLAS DA activities a prototype is being tested, deployed and integrated. The analysis data processing applications are based on the Athena framework. GANGA, developed by the LHCb and ATLAS experiments, allows simple switching between testing on a local batch system and large-scale processing on the Grid, hiding Grid complexities. GANGA provides physicists with an integrated environment for job preparation, bookkeeping and archiving, job splitting and merging. The experience with the deployment, configuration and operation of the DA prototype will be presented. Experience gained from using the DA system and GANGA in top physics analysis will be described. (Author)
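
    As a hedged sketch of the backend-switching idea described above (based on generic Ganga usage, not on the IFIC prototype's actual configuration), the same job definition can be sent to a local backend for testing and then copied to a Grid backend, with a splitter creating subjobs. The executable and arguments are placeholders.

    # Sketch of a Ganga GPI script (run inside the 'ganga' shell, where Job,
    # Executable, Local, LCG and ArgSplitter are pre-defined names).
    j = Job(name="toy-analysis")
    j.application = Executable(exe="/bin/echo", args=["hello"])
    j.splitter = ArgSplitter(args=[["subjob-%d" % i] for i in range(5)])

    j.backend = Local()      # quick test on the local machine/batch system
    j.submit()

    # Once validated, the same job template is copied and sent to the Grid
    j2 = j.copy()
    j2.backend = LCG()       # Grid backend in use at the time of the paper
    j2.submit()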

  20. Debugging Data Transfers in CMS

    CERN Document Server

    Bagliesi, G; Bloom, K; Bockelman, B; Bonacorsi, D; Fisk, I; Flix, J; Hernandez, J; D'Hondt, J; Kadastik, M; Klem, J; Kodolova, O; Kuo, C M; Letts, J; Maes, J; Magini, N; Metson, S; Piedra, J; Pukhaeva, N; Tuura, L; Sonajalg, S; Wu, Y; Van Mulders, P; Villella, I; Wurthwein, F

    2010-01-01

    The CMS experiment at CERN is preparing for LHC data taking in several computing preparation activities. In early 2007 a traffic load generator infrastructure for distributed data transfer tests called the LoadTest was designed and deployed to equip the WLCG sites that support CMS with a means for debugging, load-testing and commissioning data transfer routes among CMS computing centres. The LoadTest is based upon PhEDEx as a reliable, scalable data set replication system. The Debugging Data Transfers (DDT) task force was created to coordinate the debugging of the data transfer links. The task force aimed to commission the most crucial transfer routes among CMS sites by designing and enforcing a clear procedure to debug problematic links. Such a procedure aimed to move a link from a debugging phase in a separate and independent environment to a production environment when a set of agreed conditions is achieved for that link. The goal was to deliver one by one working transfer routes to the CMS data operations team...

  1. The Evolving role of Tier2s in ATLAS with the new Computing and Data Distribution Model

    CERN Document Server

    Gonzalez de la Hoz, S; The ATLAS collaboration

    2012-01-01

    Originally the ATLAS computing model assumed that the Tier2s of each of the 10 clouds should keep on disk collectively at least one copy of all "active" AOD and DPD datasets. Evolution of ATLAS computing and data models requires changes in the ATLAS Tier2s policy for data replication, dynamic data caching and remote data access. Tier2 operations take place completely asynchronously with respect to data taking. Tier2s do simulation and user analysis. Large-scale reprocessing jobs on real data are at first taking place mostly at Tier1s but will progressively move to Tier2s as well. The availability of disk space at Tier2s is extremely important in the ATLAS computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2s disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used mo...

  2. The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

    CERN Document Server

    Gonzalez de la Hoz, S

    2012-01-01

    Originally the ATLAS computing model assumed that the Tier2s of each of the 10 clouds should keep on disk collectively at least one copy of all "active" AOD and DPD datasets. Evolution of ATLAS computing and data models requires changes in the ATLAS Tier2s policy for data replication, dynamic data caching and remote data access. Tier2 operations take place completely asynchronously with respect to data taking. Tier2s do simulation and user analysis. Large-scale reprocessing jobs on real data are at first taking place mostly at Tier1s but will progressively move to Tier2s as well. The availability of disk space at Tier2s is extremely important in the ATLAS computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2s disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used mo...

  3. 42 CFR 405.874 - Appeals of CMS or a CMS contractor.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 (2010-10-01): Appeals of CMS or a CMS contractor, § 405.874... Part B Program § 405.874 Appeals of CMS or a CMS contractor. A CMS contractor's (that is, a carrier... supplier enrollment application. If CMS or a CMS contractor denies a provider's or supplier's enrollment...

  4. CERN Open Days CMS Posters

    CERN Multimedia

    Davis, Siona Ruth

    2016-01-01

    Themes: 1) You are here (location P5, Cessy) 2) CERN 3) LHC 4) CMS Detector 5) Magnet 6) Subdetectors (Tracker, ECAL, HCAL, Muons) 7) Trigger and Data Acquisition 8) Collaboration 9) Site Geography 10) Construction 11) Lowering and Installation 12) Physics

  5. A Web portal for CMS Grid job submission and management

    Energy Technology Data Exchange (ETDEWEB)

    Braun, David [Department of Physics, Purdue University, W. Lafayette, IN 47907 (United States); Neumeister, Norbert, E-mail: neumeist@purdue.ed [Rosen Center for Advanced Computing, Purdue University, W. Lafayette, IN 47907 (United States)

    2010-04-01

    We present a Web portal for CMS Grid job submission and management. The portal is built using a JBoss application server. It has a three-tier architecture: presentation, business logic and data. Bean-based business logic interacts with the underlying Grid infrastructure and pre-existing external applications, while the presentation layer uses AJAX to offer an intuitive, functional interface to the back-end. Application data aggregating information from the portal as well as the external applications is persisted to the server memory cache and then to a backend database. We describe how the portal exploits standard, off-the-shelf commodity software together with existing Grid infrastructures in order to facilitate job submission and monitoring for the CMS collaboration. This paper describes the design, development, current functionality and plans for future enhancements of the portal.

  6. A Web portal for CMS Grid job submission and management

    International Nuclear Information System (INIS)

    Braun, David; Neumeister, Norbert

    2010-01-01

    We present a Web portal for CMS Grid job submission and management. The portal is built using a JBoss application server. It has a three-tier architecture: presentation, business logic and data. Bean-based business logic interacts with the underlying Grid infrastructure and pre-existing external applications, while the presentation layer uses AJAX to offer an intuitive, functional interface to the back-end. Application data aggregating information from the portal as well as the external applications is persisted to the server memory cache and then to a backend database. We describe how the portal exploits standard, off-the-shelf commodity software together with existing Grid infrastructures in order to facilitate job submission and monitoring for the CMS collaboration. This paper describes the design, development, current functionality and plans for future enhancements of the portal.

  7. Managing the CMS Data and Monte Carlo Processing during LHC Run 2

    Science.gov (United States)

    Wissing, C.; CMS Collaboration

    2017-10-01

    In order to cope with the challenges expected during LHC Run 2, CMS put a number of enhancements into the main software packages and the tools used for centrally managed processing. In the presentation we will highlight these improvements that allow CMS to deal with the increased trigger output rate, the increased pileup and the evolution in computing technology. The overall system aims at high flexibility, improved operational flexibility and largely automated procedures. The tight coupling of workflow classes to types of sites has been drastically relaxed. Reliable and high-performing networking between most of the computing sites and the successful deployment of a data federation allow the execution of workflows using remote data access. That required the development of a largely automated system to assign workflows and to handle the necessary pre-staging of data. Another step towards flexibility has been the introduction of one large global HTCondor Pool for all types of processing workflows and analysis jobs. Besides classical Grid resources, some opportunistic resources as well as Cloud resources have also been integrated into that Pool, which provides access to more than 200k CPU cores.
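
    To illustrate what a single global pool makes possible, the following is a sketch using the HTCondor Python bindings that locates all schedulers registered in a pool and counts their idle and running jobs; the pool hostname is hypothetical and this is not CMS's actual configuration.

    import htcondor

    # Hypothetical central manager of the global pool
    collector = htcondor.Collector("global-pool.example.org")

    totals = {"Idle": 0, "Running": 0}
    for schedd_ad in collector.locateAll(htcondor.DaemonTypes.Schedd):
        schedd = htcondor.Schedd(schedd_ad)
        # Ask each scheduler only for the JobStatus attribute of its jobs
        for job in schedd.query("true", ["JobStatus"]):
            status = job.get("JobStatus")
            if status == 1:
                totals["Idle"] += 1
            elif status == 2:
                totals["Running"] += 1

    print(totals)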

  8. Proceedings of the second workshop of LHC Computing Grid, LCG-France

    International Nuclear Information System (INIS)

    Chollet, Frederique; Hernandez, Fabio; Malek, Fairouz; Gaelle, Shifrin

    2007-03-01

    The second LCG-France Workshop was held in Clermont-Ferrand on 14-15 March 2007. These sessions, organized by IN2P3 and DAPNIA, were attended by around 70 participants working with the LHC Computing Grid in France. The workshop was an opportunity for exchanges of information between the French and foreign site representatives on one side and delegates of the experiments on the other side. The event highlighted the place of the LHC computing task within the frame of the worldwide W-LCG project, the ongoing actions and the prospects for 2007 and beyond. The following communications were presented: 1. The current status of the LHC computation in France; 2. The LHC Grid infrastructure in France and associated resources; 3. Commissioning of Tier 1; 4. The sites of Tier-2s and Tier-3s; 5. Computing in the ALICE experiment; 6. Computing in the ATLAS experiment; 7. Computing in the CMS experiment; 8. Computing in the LHCb experiment; 9. Management and operation of computing grids; 10. 'The VOs talk to sites'; 11. Peculiarities of ATLAS; 12. Peculiarities of CMS and ALICE; 13. Peculiarities of LHCb; 14. 'The sites talk to VOs'; 15. Worldwide operation of the Grid; 16. Following up the Grid jobs; 17. Surveillance and managing the failures; 18. Job scheduling and tuning; 19. Managing the site infrastructure; 20. LCG-France communications; 21. Managing the Grid data; 22. Pointing the net infrastructure and site storage; 23. ALICE bulk transfers; 24. ATLAS bulk transfers; 25. CMS bulk transfers; 26. LHCb bulk transfers; 27. Access to LHCb data; 28. Access to CMS data; 29. Access to ATLAS data; 30. Access to ALICE data; 31. Data analysis centers; 32. D0 Analysis Farm; 33. Some CMS grid analyses; 34. PROOF; 35. Distributed analysis using GANGA; 36. T2 set-up for end-users. In their concluding remarks Fairouz Malek and Dominique Pallin stressed that the current workshop was closer to users, while the tasks of tightening the links between the sites and the experiments were definitely achieved. The IN2P3 leadership expressed

  9. CMS computing model evolution

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M

    2014-01-01

The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined in the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.

  10. CMS inaugurates its high-tech visitor centre

    CERN Multimedia

    Antonella Del Rosso

    2014-01-01

The new Building SL53 on CERN’s Cessy site in France is ready to welcome the thousands of visitors (30,000 in 2013) who come to learn about CMS each year. It boasts low energy consumption and the possibility, in the future, of being heated by recycling the heat given off by the detector.   The new Building SL53 at CERN’s Cessy site in France will be inaugurated on 24 May 2014. “Constructed by the GS Department and the firm Dimensione, the building meets the operational requirements of the CMS experiment, which require the uninterrupted use of its infrastructure,” explains Martin Gastal, the member of the collaboration in charge of the project. Its 560 m2 surface area features a meeting room, eight offices, an open space for CMS users, a rest area with a kitchen, sanitary facilities including showers, and a conference room in which to receive visitors. “The new conference room on the ground floor can accommodate 50 people...

  11. A retrospective tiered environmental assessment of the Mount Storm Wind Energy Facility, West Virginia,USA

    Energy Technology Data Exchange (ETDEWEB)

    Efroymson, Rebecca Ann [ORNL; Day, Robin [No Affiliation; Strickland, M. Dale [Western EcoSystems Technology

    2012-11-01

    Bird and bat fatalities from wind energy projects are an environmental and public concern, with post-construction fatalities sometimes differing from predictions. Siting facilities in this context can be a challenge. In March 2012 the U.S. Fish and Wildlife Service (USFWS) released Land-based Wind Energy Guidelines to assess collision fatalities and other potential impacts to species of concern and their habitats to aid in siting and management. The Guidelines recommend a tiered approach for assessing risk to wildlife, including a preliminary site evaluation that may evaluate alternative sites, a site characterization, field studies to document wildlife and habitat and to predict project impacts, post construction studies to estimate impacts, and other post construction studies. We applied the tiered assessment framework to a case study site, the Mount Storm Wind Energy Facility in Grant County, West Virginia, USA, to demonstrate the use of the USFWS assessment approach, to indicate how the use of a tiered assessment framework might have altered outputs of wildlife assessments previously undertaken for the case study site, and to assess benefits of a tiered ecological assessment framework for siting wind energy facilities. The conclusions of this tiered assessment for birds are similar to those of previous environmental assessments for Mount Storm. This assessment found risk to individual migratory tree-roosting bats that was not emphasized in previous preconstruction assessments. Differences compared to previous environmental assessments are more related to knowledge accrued in the past 10 years rather than to the tiered structure of the Guidelines. Benefits of the tiered assessment framework include good communication among stakeholders, clear decision points, a standard assessment trajectory, narrowing the list of species of concern, improving study protocols, promoting consideration of population-level effects, promoting adaptive management through post

  12. Three-tier rough superhydrophobic surfaces

    International Nuclear Information System (INIS)

    Cao, Yuanzhi; Yuan, Longyan; Hu, Bin; Zhou, Jun

    2015-01-01

A three-tier rough superhydrophobic surface was fabricated by growing hydrophobically modified (fluorinated silane) zinc oxide (ZnO)/copper oxide (CuO) hetero-hierarchical structures on silicon (Si) micro-pillar arrays. Compared with the other three control samples with a less rough tier, the three-tier surface exhibits the best water repellency, with the largest contact angle of 161° and the lowest sliding angle of 0.5°. It also shows a robust Cassie state, which enables water to flow at a speed of over 2 m s^-1. In addition, it prevents itself from being wetted by a droplet of low surface tension (water and ethanol mixed 1:1 in volume), which reaches a flow speed of 0.6 m s^-1 (dropped from a height of 2 cm). All these features prove that adding another rough tier to a two-tier rough surface can further improve its water-repellent properties. (paper)

  13. One-tiered vs. two-tiered forecasting of South African seasonal rainfall

    CSIR Research Space (South Africa)

    Landman, WA

    2010-09-01

Full Text Available: One-tiered vs. two-tiered forecasting of South African seasonal rainfall (Willem A. Landman, Council for Scientific and Industrial Research; Dave DeWitt, International Research Institute for Climate and Society; Daleen Lötter). ... modelled as fully interacting is called a fully coupled model system. Forecast performance by such systems predicting seasonal rainfall totals over South Africa is compared with forecasts produced by a computationally less demanding two-tiered system...

  14. A new visitor centre for CMS

    CERN Document Server

    2001-01-01

    At the inauguration of the new CMS visitor centre. The CMS experiment inaugurated a new visitor centre at its Cessy site on 14 June. This will allow the thousands of people who come to CERN each year to follow the construction of one the Laboratory's flagship experiments first-hand. CERN receives over 20,000 visitors each year. Until recently, many of them were taken on a guided tour of one of the LEP experiments. With the closure of LEP, however, trips underground are no longer possible, and the Visits' Service has put in place a number of other itineraries (Bulletin 46/2000). Since the CMS detector will be almost entirely constructed in a surface hall, it is now taking a big share of the limelight. The CMS visitor centre has been built on a platform overlooking CMS construction. It contains a set of clear descriptive posters describing the experiment, along with a video projection showing animations and movies about CMS construction. In the coming weeks, a display of CMS detector elements will be added, as...

  15. Examining the Efficacy of a Tier 2 Kindergarten Mathematics Intervention.

    Science.gov (United States)

    Clarke, Ben; Doabler, Christian T; Smolkowski, Keith; Baker, Scott K; Fien, Hank; Strand Cary, Mari

    2016-01-01

    This study examined the efficacy of a Tier 2 kindergarten mathematics intervention program, ROOTS, focused on developing whole number understanding for students at risk in mathematics. A total of 29 classrooms were randomly assigned to treatment (ROOTS) or control (standard district practices) conditions. Measures of mathematics achievement were collected at pretest and posttest. Treatment and control students did not differ on mathematics assessments at pretest. Gain scores of at-risk intervention students were significantly greater than those of control peers, and the gains of at-risk treatment students were greater than the gains of peers not at risk, effectively reducing the achievement gap. Implications for Tier 2 mathematics instruction in a response to intervention (RtI) model are discussed. © Hammill Institute on Disabilities 2014.

  16. CMS Software and Computing Ready for Run 2

    CERN Document Server

    Bloom, Kenneth

    2015-01-01

    In Run 1 of the Large Hadron Collider, software and computing was a strategic strength of the Compact Muon Solenoid experiment. The timely processing of data and simulation samples and the excellent performance of the reconstruction algorithms played an important role in the preparation of the full suite of searches used for the observation of the Higgs boson in 2012. In Run 2, the LHC will run at higher intensities and CMS will record data at a higher trigger rate. These new running conditions will provide new challenges for the software and computing systems. Over the two years of Long Shutdown 1, CMS has built upon the successes of Run 1 to improve the software and computing to meet these challenges. In this presentation we will describe the new features in software and computing that will once again put CMS in a position of physics leadership.

  17. A tiered analytical protocol for the characterization of heavy oil residues at petroleum-contaminated hazardous waste sites

    International Nuclear Information System (INIS)

    Pollard, S.J.T.; Kenefick, S.L.; Hrudey, S.E.; Fuhr, B.J.; Holloway, L.R.; Rawluk, M.

    1994-01-01

    The analysis of hydrocarbon-contaminated soils from abandoned refinery sites in Alberta, Canada is used to illustrate a tiered analytical approach to the characterization of complex hydrocarbon wastes. Soil extracts isolated from heavy oil- and creosote-contaminated sites were characterized by thin layer chromatography with flame ionization detection (TLC-FID), ultraviolet fluorescence, simulated distillation (GC-SIMDIS) and chemical ionization GC-MS analysis. The combined screening and detailed analytical methods provided information essential to remedial technology selection including the extent of contamination, the class composition of soil extracts, the distillation profile of component classes and the distribution of individual class components within various waste fractions. Residual contamination was characteristic of heavy, degraded oils, consistent with documented site operations and length of hydrocarbon exposure at the soil surface

  18. Using Xrootd to Federate Regional Storage

    International Nuclear Information System (INIS)

    Bauerdick, L; Benjamin, D; Bloom, K; Bockelman, B; Bradley, D; Dasu, S; Ernst, M; Ito, H; Rind, O; Gardner, R; Vukotic, I; Hanushevsky, A; Lesny, D; McGuigan, P; McKee, S; Severini, H; Sfiligoi, I; Tadel, M; Würthwein, F; Williams, S

    2012-01-01

    While the LHC data movement systems have demonstrated the ability to move data at the necessary throughput, we have identified two weaknesses: the latency for physicists to access data and the complexity of the tools involved. To address these, both ATLAS and CMS have begun to federate regional storage systems using Xrootd. Xrootd, referring to a protocol and implementation, allows us to provide data access to all disk-resident data from a single virtual endpoint. This “redirector” discovers the actual location of the data and redirects the client to the appropriate site. The approach is particularly advantageous since typically the redirection requires much less than 500 milliseconds and the Xrootd client is conveniently built into LHC physicists’ analysis tools. Currently, there are three regional storage federations - a US ATLAS region, a European CMS region, and a US CMS region. The US ATLAS and US CMS regions include their respective Tier 1, Tier 2 and some Tier 3 facilities; a large percentage of experimental data is available via the federation. Additionally, US ATLAS has begun studying low-latency regional federations of close-by sites. From the base idea of federating storage behind an endpoint, the implementations and use cases diverge. The CMS software framework is capable of efficiently processing data over high-latency links, so using the remote site directly is comparable to accessing local data. The ATLAS processing model allows a broad spectrum of user applications with varying degrees of performance with regard to latency; a particular focus has been optimizing n-tuple analysis. Both VOs use GSI security. ATLAS has developed a mapping of VOMS roles to specific file system authorizations, while CMS has developed callouts to the site's mapping service. Each federation presents a global namespace to users. For ATLAS, the global-to-local mapping is based on a heuristic-based lookup from the site's local file catalog, while CMS does the mapping
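As a rough illustration of the redirection step described above, the sketch below models a federation redirector that exposes a single global namespace and sends the client to a site that actually holds the requested file; the site catalogue and URL scheme are simplified assumptions, not the real Xrootd protocol.

```python
# Simplified model of an Xrootd-style redirector: one virtual endpoint,
# lookup of the actual data location, then redirection of the client.
# The catalogue contents and URLs below are invented for illustration.

SITE_CATALOGUES = {
    "xrootd.siteA.example": {"/store/data/Run2012A/file1.root"},
    "xrootd.siteB.example": {"/store/mc/Summer12/file2.root"},
}

def redirect(global_path):
    """Return a site-local URL for a file in the global namespace,
    or None if no federated site has the file on disk."""
    for endpoint, files in SITE_CATALOGUES.items():
        if global_path in files:
            return f"root://{endpoint}/{global_path}"
    return None

# A client opens everything through the single redirector name and is
# transparently sent to wherever the file lives:
url = redirect("/store/data/Run2012A/file1.root")
print(url)  # root://xrootd.siteA.example//store/data/Run2012A/file1.root
```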

  19. Using Xrootd to federate regional storage

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L.; et al.

    2012-01-01

While the LHC data movement systems have demonstrated the ability to move data at the necessary throughput, we have identified two weaknesses: the latency for physicists to access data and the complexity of the tools involved. To address these, both ATLAS and CMS have begun to federate regional storage systems using Xrootd. Xrootd, referring to a protocol and implementation, allows us to provide data access to all disk-resident data from a single virtual endpoint. This redirector discovers the actual location of the data and redirects the client to the appropriate site. The approach is particularly advantageous since typically the redirection requires much less than 500 milliseconds and the Xrootd client is conveniently built into LHC physicists' analysis tools. Currently, there are three regional storage federations - a US ATLAS region, a European CMS region, and a US CMS region. The US ATLAS and US CMS regions include their respective Tier 1, Tier 2 and some Tier 3 facilities; a large percentage of experimental data is available via the federation. Additionally, US ATLAS has begun studying low-latency regional federations of close-by sites. From the base idea of federating storage behind an endpoint, the implementations and use cases diverge. The CMS software framework is capable of efficiently processing data over high-latency links, so using the remote site directly is comparable to accessing local data. The ATLAS processing model allows a broad spectrum of user applications with varying degrees of performance with regard to latency; a particular focus has been optimizing n-tuple analysis. Both VOs use GSI security. ATLAS has developed a mapping of VOMS roles to specific file system authorizations, while CMS has developed callouts to the site's mapping service. Each federation presents a global namespace to users. For ATLAS, the global-to-local mapping is based on a heuristic-based lookup from the site's local file catalog, while CMS does the mapping

  20. Regulatory Compliance in Multi-Tier Supplier Networks

    Science.gov (United States)

    Goossen, Emray R.; Buster, Duke A.

    2014-01-01

    Over the years, avionics systems have increased in complexity to the point where 1st tier suppliers to an aircraft OEM find it financially beneficial to outsource designs of subsystems to 2nd tier and at times to 3rd tier suppliers. Combined with challenging schedule and budgetary pressures, the environment in which safety-critical systems are being developed introduces new hurdles for regulatory agencies and industry. This new environment of both complex systems and tiered development has raised concerns in the ability of the designers to ensure safety considerations are fully addressed throughout the tier levels. This has also raised questions about the sufficiency of current regulatory guidance to ensure: proper flow down of safety awareness, avionics application understanding at the lower tiers, OEM and 1st tier oversight practices, and capabilities of lower tier suppliers. Therefore, NASA established a research project to address Regulatory Compliance in a Multi-tier Supplier Network. This research was divided into three major study efforts: 1. Describe Modern Multi-tier Avionics Development 2. Identify Current Issues in Achieving Safety and Regulatory Compliance 3. Short-term/Long-term Recommendations Toward Higher Assurance Confidence This report presents our findings of the risks, weaknesses, and our recommendations. It also includes a collection of industry-identified risks, an assessment of guideline weaknesses related to multi-tier development of complex avionics systems, and a postulation of potential modifications to guidelines to close the identified risks and weaknesses.

  1. Time horizon for AFV emission savings under Tier 2

    International Nuclear Information System (INIS)

    Saricks, C. L.

    2000-01-01

Implementation of the Federal Tier 2 vehicular emission standards according to the schedule presented in the December 1999 Final Rule will result in substantial reductions of NMHC, CO, NOx, and fine particle emissions from motor vehicles. Currently, when compared to Tier 1 and even NLEV certification requirements, automobiles and light-duty trucks powered by non-petroleum (especially gaseous) fuels (i.e., vehicles collectively termed AFVs) enjoy a measurable emissions advantage over their gasoline- and diesel-fueled counterparts over the full Federal Test Procedure and, especially, in Bag 1 (cold start). For the lighter end of these vehicle classes, this advantage may disappear shortly after 2004 under the new standards, but it should continue for a longer period (perhaps beyond 2008) for the heavier end, as well as for heavy-duty vehicles relative to diesel-fueled counterparts. Because of the continuing commitment of the U.S. Department of Energy's Clean Cities coalitions to the acquisition and operation of AFVs of many types and size classes, it is important for them to know in which classes their acquisitions will remain cleaner than the petroleum-fueled counterparts they might otherwise procure. This paper provides an approximate timeline for, and expected magnitude of, such savings, assuming that full implementation of the Tier 2 standards covering both vehicular emissions and fuel sulfur limits proceeds on schedule. The pollutants of interest are primary ozone precursors and fine particulate matter from fuel combustion.

  2. CDX2 prognostic value in stage II/III resected colon cancer is related to CMS classification.

    Science.gov (United States)

    Pilati, C; Taieb, J; Balogoun, R; Marisa, L; de Reyniès, A; Laurent-Puig, P

    2017-05-01

    Caudal-type homeobox transcription factor 2 (CDX2) is involved in colon cancer (CC) oncogenesis and has been proposed as a prognostic biomarker in patients with stage II or III CC. We analyzed CDX2 expression in a series of 469 CC typed for the new international consensus molecular subtype (CMS) classification, and we confirmed results in a series of 90 CC. Here, we show that lack of CDX2 expression is only present in the mesenchymal subgroup (CMS4) and in MSI-immune tumors (CMS1) and not in CMS2 and CMS3 colon cancer. Although CDX2 expression was a globally independent prognostic factor, loss of CDX2 expression is not associated with a worse prognosis in the CMS1 group, but is highly prognostic in CMS4 patients for both relapse free and overall survival. Similarly, lack of CDX2 expression was a bad prognostic factor in MSS patients, but not in MSI. Our work suggests that combination of the consensual CMS classification and lack of CDX2 expression could be a useful marker to identify CMS4/CDX2-negative patients with a very poor prognosis. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  3. Large scale and low latency analysis facilities for the CMS experiment: development and operational aspects

    CERN Document Server

    Riahi, Hassen

    2010-01-01

While the majority of CMS data analysis activities rely on the distributed computing infrastructure of the WLCG Grid, dedicated local computing facilities have been deployed to address particular requirements in terms of latency and scale. The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In order to reach the goal of fast turnaround, the Workload Management group has designed a CRABServer-based system to meet two main needs: to provide a simple, familiar interface to the user (as used in the CRAB Analysis Tool[7]) and to allow an easy transition to the Tier-0 system. While the CRABServer component had initially been designed for Grid analysis by CMS end-users, with a few modifications it turned out to be a very powerful service to manage and monitor local submissions on the CAF. Tran...

  4. Components of the CMS magnet system at the detector's assembly site.

    CERN Multimedia

    Maximilien Brice

    2002-01-01

    Photos 01, 05: Outer cylinder of the CMS vacuum tank. The vacuum tank consists of inner and outer stainless-steel cylinders and houses the superconducting coil. As can be seen, the cylinder is attached to the innermost ring of the barrel yoke. Photos 02, 04: CMS end-cap yoke. The magnetic flux generated by the superconducting coil in the CMS detector is returned via an iron yoke comprising three end-cap discs at each end (end-cap yoke) and five concentric cylinders (barrel yoke).Photo 03: Inner cylinder of the CMS vacuum tank. The vacuum tank consists of inner and outer stainless-steel cylinders and houses the superconducting coil. The inner cylinder contains all the barrel sub-detectors, which it supports via a system of horizontal rails. The cylinder is pictured here in the vertical position on a yellow platform mounted on the ferris-wheel support structure. This will allow it to be pivoted and inserted into the outer cylinder already attached to the innermost ring of the barrel yoke.

  5. Spectacular test of the fire extinguishing system in the underground cavern of the CMS experiment

    CERN Multimedia

    Maximilien Brice

    2006-01-01

    The enormous rumbling heard 100 m under the earth on Friday, 12 May, was not the start of a foam party at CMS. The Safety Team looked on from the second tier of the CMS underground cavern as it reechoed to the sound of water rushing through the two huge pipes overhead and the air was filled with a mixture of water and foam. A minute later it was a winter wonderland, as fluffy puffs of foam came shooting out of the twelve foam blowers lining the upper cavern walls on both sides. In less than two minutes 7 m3 of water mixed with a small percentage of foaming liquid, was transformed into 5600 m3 of foam and discharged into the cavern.

  6. Transporting Motivational Interviewing to School Settings to Improve the Engagement and Fidelity of Tier 2 Interventions

    Science.gov (United States)

    Frey, Andy J.; Lee, Jon; Small, Jason W.; Seeley, John R.; Walker, Hill M.; Feil, Edward G.

    2013-01-01

The majority of Tier 2 interventions are facilitated by specialized instructional support personnel, such as school psychologists, school social workers, school counselors, or behavior consultants. Many professionals struggle to involve parents and teachers in Tier 2 behavior interventions. However, attention to the motivational issues for…

  7. An Xrootd Italian Federation

    International Nuclear Information System (INIS)

    Boccali, T; Mazzoni, E; Donvito, G; Diacono, D; Marzulli, G; Pompili, A; Ricca, G Della; Argiro, S; Gregori, D; Grandi, C; Bonacorsi, D; Lista, L; Fabozzi, F; Barone, L M; Santocchia, A; Riahi, H; Tricomi, A; Sgaravatto, M; Maron, G

    2014-01-01

The Italian community in CMS has built a geographically distributed network in which all the data stored in the Italian region are available to all users for their everyday work. This activity involves, at different levels, all the CMS centers: the Tier1 at CNAF, all four Tier2s (Bari, Rome, Legnaro and Pisa), and a few Tier3s (Trieste, Perugia, Torino, Catania, Napoli, ...). The federation uses the new network connections provided by GARR, our NREN (National Research and Education Network), which provides a minimum of 10 Gbit/s to all the sites via the GARR-X[2] project. The federation is currently based on Xrootd[1] technology and on a Redirector aimed at seamlessly connecting all the sites, giving the logical view of a single entity. A special configuration has been put in place for the Tier1, CNAF, where ad-hoc Xrootd changes have been implemented in order to protect the tape system from excessive stress, by not allowing WAN connections to access tape-only files, on a file-by-file basis. In order to improve the overall performance while reading files, both in terms of bandwidth and latency, a hierarchy of Xrootd redirectors has been implemented. The solution implemented provides a dedicated Redirector where all the INFN sites are registered, regardless of their status (T1, T2, or T3 sites). An interesting use case covered via the federation is disk-less Tier3s. The caching solution allows a local storage to be operated with minimal human intervention: transfers are done automatically on a single-file basis, and the cache is kept operational by automatic removal of old files.

  8. An Xrootd Italian Federation

    Science.gov (United States)

    Boccali, T.; Donvito, G.; Diacono, D.; Marzulli, G.; Pompili, A.; Della Ricca, G.; Mazzoni, E.; Argiro, S.; Gregori, D.; Grandi, C.; Bonacorsi, D.; Lista, L.; Fabozzi, F.; Barone, L. M.; Santocchia, A.; Riahi, H.; Tricomi, A.; Sgaravatto, M.; Maron, G.

    2014-06-01

The Italian community in CMS has built a geographically distributed network in which all the data stored in the Italian region are available to all users for their everyday work. This activity involves, at different levels, all the CMS centers: the Tier1 at CNAF, all four Tier2s (Bari, Rome, Legnaro and Pisa), and a few Tier3s (Trieste, Perugia, Torino, Catania, Napoli, ...). The federation uses the new network connections provided by GARR, our NREN (National Research and Education Network), which provides a minimum of 10 Gbit/s to all the sites via the GARR-X[2] project. The federation is currently based on Xrootd[1] technology and on a Redirector aimed at seamlessly connecting all the sites, giving the logical view of a single entity. A special configuration has been put in place for the Tier1, CNAF, where ad-hoc Xrootd changes have been implemented in order to protect the tape system from excessive stress, by not allowing WAN connections to access tape-only files, on a file-by-file basis. In order to improve the overall performance while reading files, both in terms of bandwidth and latency, a hierarchy of Xrootd redirectors has been implemented. The solution implemented provides a dedicated Redirector where all the INFN sites are registered, regardless of their status (T1, T2, or T3 sites). An interesting use case covered via the federation is disk-less Tier3s. The caching solution allows a local storage to be operated with minimal human intervention: transfers are done automatically on a single-file basis, and the cache is kept operational by automatic removal of old files.
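The caching solution for disk-less Tier-3s described above (per-file transfers on demand, automatic removal of old files) can be pictured with the hedged sketch below; the fetch helper, quota and eviction-by-age policy are assumptions for illustration rather than the actual INFN implementation.

```python
import os

CACHE_DIR = "/data/xrd-cache"          # hypothetical local cache area
CACHE_QUOTA_BYTES = 10 * 1024**4       # e.g. 10 TB, illustrative value

def evict_until_under_quota():
    """Remove the oldest cached files until the cache fits the quota."""
    entries = []
    for root, _, names in os.walk(CACHE_DIR):
        for n in names:
            p = os.path.join(root, n)
            st = os.stat(p)
            entries.append((st.st_mtime, st.st_size, p))
    total = sum(size for _, size, _ in entries)
    for _, size, path in sorted(entries):          # oldest first
        if total <= CACHE_QUOTA_BYTES:
            break
        os.remove(path)
        total -= size

def cached_open(global_path, fetch_from_federation):
    """Serve a file from the local cache, fetching it file-by-file from
    the federation on a miss and then trimming the cache."""
    local = os.path.join(CACHE_DIR, global_path.lstrip("/"))
    if not os.path.exists(local):
        os.makedirs(os.path.dirname(local), exist_ok=True)
        fetch_from_federation(global_path, local)   # e.g. an xrdcp wrapper
        evict_until_under_quota()
    return open(local, "rb")
```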

  9. Predicting dataset popularity for the CMS experiment

    CERN Document Server

    INSPIRE-00005122; Li, Ting; Giommi, Luca; Bonacorsi, Daniele; Wildish, Tony

    2016-01-01

The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provide the foundation of a data-driven approach for the CMS computing infrastructure.
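The abstract does not spell out the model used; purely as an illustrative assumption, the sketch below ranks datasets by a simple exponentially weighted average of recent access counts taken from hypothetical popularity meta-data, the kind of baseline a dynamic data-placement policy could start from.

```python
# Naive popularity baseline: rank datasets by an exponentially weighted
# moving average of weekly access counts.  The input records and the
# smoothing factor are illustrative assumptions, not CMS's actual model.

def ewma_popularity(weekly_accesses, alpha=0.5):
    """weekly_accesses: list of access counts, oldest week first."""
    score = 0.0
    for count in weekly_accesses:
        score = alpha * count + (1.0 - alpha) * score
    return score

access_log = {
    "/DoubleMuon/Run2015B/AOD":  [120, 340, 500, 810],
    "/MinBias/Summer15/GEN-SIM": [900,  60,  10,   2],
}

ranked = sorted(access_log, key=lambda d: ewma_popularity(access_log[d]),
                reverse=True)
print(ranked)  # most-popular-first ordering used to drive data placement
```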

  10. Predicting dataset popularity for the CMS experiment

    International Nuclear Information System (INIS)

    Kuznetsov, V.; Li, T.; Giommi, L.; Bonacorsi, D.; Wildish, T.

    2016-01-01

The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provide the foundation of a data-driven approach for the CMS computing infrastructure. (paper)

  11. Tau lepton trigger and identification at CMS in Run-2

    CERN Document Server

    Davignon, Olivier

    2016-01-01

    In the context of LHC Run-2, the Compact Muon Solenoid (CMS) detector was upgraded. In particular, the CMS trigger system and particle reconstruction were improved. The CMS experiment implements a sophisticated trigger system composed of a Level-1 trigger, instrumented by custom-designed hardware boards, and software layers called High-Level-Triggers (HLT). A new Level-1 trigger architecture with improved performance has been installed and is now used to maintain the thresholds used in LHC Run-1 in the more challenging conditions experienced during Run-2. Optimized software selection techniques have also been developed at the HLT. The hadronic $\\tau$ reconstruction algorithm has been modified to better account for the $\\pi^0$(s) from $\\tau$ decays. In addition, improvements to discriminators against QCD-induced jets and electrons were also developed. The results of these improvements are presented and the validation of the $\\tau$ identification performance is shown.

  12. Analysis facility infrastructure (Tier-3) for ATLAS experiment

    CERN Document Server

    González de la Hoza, S; Ros, E; Sánchez, J; Amorós, G; Fassi, F; Fernández, A; Kaci, M; Lamas, A; Salt, J

    2008-01-01

    In the ATLAS computing model the tiered hierarchy ranged from the Tier-0 (CERN) down to desktops or workstations (Tier-3). The focus on defining the roles of each tiered component has evolved with the initial emphasis on the Tier-0 and Tier-1 definition and roles. The various LHC (Large Hadron Collider) projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2’s (Regional centers) as part of their projects. Tier-3 centres, on the other hand, have been defined as whatever an institution could construct to support their Physics goals using institutional and otherwise leveraged resources and therefore have not been considered to be part of the official ATLAS computing resources. However, Tier-3 centres are going to exist and will have implications on how the computing model should support ATLAS physicists. Tier-3 users will want to access LHC data and simulations and will want to enable their resources to support their analysis and simulation work. This document will define how IFIC (Insti...

  13. Managing a tier-2 computer centre with a private cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-01-01

In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI

  14. Tier identification (TID) for tiered memory characteristics

    Science.gov (United States)

    Chang, Jichuan; Lim, Kevin T; Ranganathan, Parthasarathy

    2014-03-25

    A tier identification (TID) is to indicate a characteristic of a memory region associated with a virtual address in a tiered memory system. A thread may be serviced according to a first path based on the TID indicating a first characteristic. The thread may be serviced according to a second path based on the TID indicating a second characteristic.

  15. Russian and Belorussian firms receive CMS Gold Awards

    CERN Multimedia

    2003-01-01

    On 7 March, CMS handed out its three latest Gold Awards in recognition of outstanding supplier performance. The directors of two Russian firms (ENTEK and the Myasishchev Design Bureau) and of the Belorussian company MZOR received their awards on the occasion of a visit by dignitaries from the two countries. The directors and dignitaries are pictured here with leaders of the CMS Collaboration in front of the CMS hadron calorimeter end-cap at the detector's assembly site.

  16. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    Management Board Agendas and minutes of meetings of the Management Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 Collaboration Board Agendas and minutes of meetings of the Collaboration Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 LHCC: Feedback from the CMS Referees, LHCC 97 February 25, 2009. The CMS LHCC referees met with representatives of CMS on 17-2-09, to review progress since the last November minireview. The main topics included shutdown construction, maintenance and repairs; status of the preshower detector; commissioning and physics analysis results from cosmic ray running and CSA08; preparations for physics, off line analysis, computing, and data distribution. TOTEM management and the TOTEM referees then joined us for a joint session to examine the readiness of the TOTEM detector. Detector construction, maintenance, and repairs. The referees congratulate CMS Management and the Detector Groups for the...

  17. Using the GlideinWMS System as a Common Resource Provisioning Layer in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Balcas, J. [Vilnius U.; Belforte, S. [Trieste U.; Bockelman, B. [Nebraska U.; Colling, D. [Imperial Coll., London; Gutsche, O. [Fermilab; Hufnagel, D. [Fermilab; Khan, F. [Quaid-i-Azam U.; Larson, K. [Fermilab; Letts, J. [UC, San Diego; Mascheroni, M. [Milan Bicocca U.; Mason, D. [Fermilab; McCrea, A. [UC, San Diego; Piperov, S. [Brown U.; Saiz-Santos, M. [UC, San Diego; Sfiligoi, I. [UC, San Diego; Tanasijczuk, A. [UC, San Diego; Wissing, C. [DESY

    2015-12-23

CMS will require access to more than 125k processor cores at the beginning of Run 2 in 2015 to carry out its ambitious physics program with more, and more complex, events. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources and HPC supercomputing centers were made available to CMS, which further complicated the operations of the submission infrastructure. In this presentation we will discuss the CMS effort to adopt and deploy the glideinWMS system as a common resource provisioning layer to grid, cloud, local batch, and opportunistic resources and sites. We will address the challenges associated with integrating the various types of resources, the efficiency gains and simplifications associated with using a common resource provisioning layer, and discuss the solutions found. We will finish with an outlook of future plans for how CMS is moving forward on resource provisioning for more heterogeneous architectures and services.
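A hedged sketch of what a common provisioning layer does in practice: watch the demand (idle jobs) in the single global pool and request pilots from whichever resource types can serve it. The resource list, quotas and planning rule below are purely illustrative and are not the glideinWMS code.

```python
# Toy pilot-provisioning loop in the spirit of a glideinWMS frontend:
# look at idle demand in the global pool and request pilots from the
# configured resource types.  All names and limits are invented.

RESOURCES = [
    {"name": "grid_T2_sites",   "max_pilots": 50000, "running": 41000},
    {"name": "cloud_provider",  "max_pilots": 10000, "running":  2000},
    {"name": "hpc_allocation",  "max_pilots":  5000, "running":  4800},
]

def plan_pilots(idle_jobs, resources):
    """Spread pilot requests over resources with spare capacity."""
    requests = {}
    remaining = idle_jobs
    for res in resources:
        if remaining <= 0:
            break
        headroom = res["max_pilots"] - res["running"]
        n = min(headroom, remaining)
        if n > 0:
            requests[res["name"]] = n
            remaining -= n
    return requests

print(plan_pilots(idle_jobs=12000, resources=RESOURCES))
# {'grid_T2_sites': 9000, 'cloud_provider': 3000}
```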

  18. ATLAS Tier-3 within IFIC-Valencia analysis facility

    CERN Document Server

    Villaplana, M; The ATLAS collaboration; Fernández, A; Salt, J; Lamas, A; Fassi, F; Kaci, M; Oliver, E; Sánchez, J; Sánchez-Martínez, V

    2012-01-01

    The ATLAS Tier-3 at IFIC-Valencia is attached to a Tier-2 that has 50% of the Spanish Federated Tier-2 resources. In its design, the Tier-3 includes a GRID-aware part that shares some of the features of IFIC Tier-2 such as using Lustre as a file system. ATLAS users, 70% of IFIC users, also have the possibility of analysing data with a PROOF farm and storing them locally. In this contribution we discuss the design of the analysis facility as well as the monitoring tools we use to control and improve its performance. We also comment on how the recent changes in the ATLAS computing GRID model affect IFIC. Finally, how this complex system can coexist with the other scientific applications running at IFIC (non-ATLAS users) is presented.

  19. CMS software and computing for LHC Run 2

    CERN Document Server

    INSPIRE-00067576

    2016-11-09

The CMS offline software and computing system has successfully met the challenge of LHC Run 2. In this presentation, we will discuss how the entire system was improved in anticipation of the increased trigger output rate, the increased rate of pileup interactions and the evolution of computing technology. The primary goals behind these changes were to increase the flexibility of computing facilities wherever possible, to increase our operational efficiency, and to decrease the computing resources needed to accomplish the primary offline computing workflows. These changes have resulted in a new approach to distributed computing in CMS for Run 2 and for the future, as the LHC luminosity should continue to increase. We will discuss changes and plans for our data federation, which was one of the key changes towards a more flexible computing model for Run 2. Our software framework and algorithms also underwent significant changes. We will summarize our experience with a new multi-threaded framework as deployed on ou...

  20. The CMS experiment inaugurated a new visitor centre at its Cessy site on 14 June

    CERN Multimedia

    2001-01-01

    The CMS visitor centre has been built on a platform overlooking CMS construction. It contains a set of clear descriptive posters describing the experiment, along with a video projection showing animations and movies about CMS construction.

  1. Analysis of internal network requirements for the distributed Nordic Tier-1

    DEFF Research Database (Denmark)

    Behrmann, G.; Fischer, L.; Gamst, Mette

    2010-01-01

The Tier-1 facility operated by the Nordic DataGrid Facility (NDGF) differs significantly from other Tier-1s in several aspects: it is not located at one or a few locations but is instead distributed throughout the Nordic countries, and it is not under the governance of a single organisation but is instead... built from resources under the control of a number of different national organisations. Being physically distributed makes the design and implementation of the networking infrastructure a challenge. NDGF has its own internal OPN connecting the sites participating in the distributed Tier-1. To assess...

  2. 2-tiered antibody testing for early and late Lyme disease using only an immunoglobulin G blot with the addition of a VlsE band as the second-tier test.

    Science.gov (United States)

    Branda, John A; Aguero-Rosenfeld, Maria E; Ferraro, Mary Jane; Johnson, Barbara J B; Wormser, Gary P; Steere, Allen C

    2010-01-01

    Standard 2-tiered immunoglobulin G (IgG) testing has performed well in late Lyme disease (LD), but IgM testing early in the illness has been problematic. IgG VlsE antibody testing, by itself, improves early sensitivity, but may lower specificity. We studied whether elements of the 2 approaches could be combined to produce a second-tier IgG blot that performs well throughout the infection. Separate serum sets from LD patients and control subjects were tested independently at 2 medical centers using whole-cell enzyme immunoassays and IgM and IgG immunoblots, with recombinant VlsE added to the IgG blots. The results from both centers were combined, and a new second-tier IgG algorithm was developed. With standard 2-tiered IgM and IgG testing, 31% of patients with active erythema migrans (stage 1), 63% of those with acute neuroborreliosis or carditis (stage 2), and 100% of those with arthritis or late neurologic involvement (stage 3) had positive results. Using new IgG criteria, in which only the VlsE band was scored as a second-tier test among patients with early LD (stage 1 or 2) and 5 of 11 IgG bands were required in those with stage 3 LD, 34% of patients with stage 1, 96% of those with stage 2, and 100% of those with stage 3 infection had positive responses. Both new and standard testing achieved 100% specificity. Compared with standard IgM and IgG testing, the new IgG algorithm (with VlsE band) eliminates the need for IgM testing; it provides comparable or better sensitivity, and it maintains high specificity.
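The second-tier IgG criteria described in the abstract reduce to a small decision rule; the sketch below encodes it for illustration only (a positive or equivocal first-tier EIA is assumed as a precondition, and band labels are simplified).

```python
# Illustrative encoding of the modified second-tier IgG rule described in
# the abstract: for early Lyme disease (stage 1 or 2) only the VlsE band
# is scored; for stage 3, at least 5 of 11 IgG bands are required.
# Assumes the first-tier whole-cell EIA was already positive or equivocal.

def second_tier_igg_positive(stage, igg_bands):
    """stage: 1, 2 or 3; igg_bands: set of reactive IgG band labels."""
    if stage in (1, 2):
        return "VlsE" in igg_bands
    if stage == 3:
        standard_bands = igg_bands - {"VlsE"}   # the classical 11-band set
        return len(standard_bands) >= 5
    raise ValueError("stage must be 1, 2 or 3")

print(second_tier_igg_positive(2, {"VlsE", "41"}))            # True
print(second_tier_igg_positive(3, {"18", "23", "39", "41"}))  # False: only 4 bands
```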

  3. CMS computing support at JINR

    International Nuclear Information System (INIS)

    Golutvin, I.; Koren'kov, V.; Lavrent'ev, A.; Pose, R.; Tikhonenko, E.

    1998-01-01

    Participation of JINR specialists in the CMS experiment at LHC requires a wide use of computer resources. In the context of JINR activities in the CMS Project hardware and software resources have been provided for full participation of JINR specialists in the CMS experiment; the JINR computer infrastructure was made closer to the CERN one. JINR also provides the informational support for the CMS experiment (web-server http://sunct2.jinr.dubna.su). Plans for further CMS computing support at JINR are stated

  4. A tiered approach for the human health risk assessment for consumption of vegetables from cadmium-contaminated land in urban areas

    International Nuclear Information System (INIS)

    Swartjes, Frank A.; Versluijs, Kees W.; Otte, Piet F.

    2013-01-01

    Consumption of vegetables that are grown in urban areas takes place worldwide. In developing countries, vegetables are traditionally grown in urban areas for cheap food supply. In developing and developed countries, urban gardening is gaining momentum. A problem that arises with urban gardening is the presence of contaminants in soil, which can be taken up by vegetables. In this study, a scientifically-based and practical procedure has been developed for assessing the human health risks from the consumption of vegetables from cadmium-contaminated land. Starting from a contaminated site, the procedure follows a tiered approach which is laid out as follows. In Tier 0, the plausibility of growing vegetables is investigated. In Tier 1 soil concentrations are compared with the human health-based Critical soil concentration. Tier 2 offers the possibility for a detailed site-specific human health risk assessment in which calculated exposure is compared to the toxicological reference dose. In Tier 3, vegetable concentrations are measured and tested following a standardized measurement protocol. To underpin the derivation of the Critical soil concentrations and to develop a tool for site-specific assessment the determination of the representative concentration in vegetables has been evaluated for a range of vegetables. The core of the procedure is based on Freundlich-type plant–soil relations, with the total soil concentration and the soil properties as variables. When a significant plant–soil relation is lacking for a specific vegetable a geometric mean of BioConcentrationFactors (BCF) is used, which is normalized according to soil properties. Subsequently, a ‘conservative’ vegetable-group-consumption-rate-weighted BioConcentrationFactor is calculated as basis for the Critical soil concentration (Tier 1). The tool to perform site-specific human health risk assessment (Tier 2) includes the calculation of a ‘realistic worst case’ site-specific vegetable
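The core computation described above can be sketched as follows; the functional form is the commonly used log-linear Freundlich-type regression, but the coefficients, soil-property terms, BCF fallback value and reference dose below are placeholders, not the values derived in the study.

```python
import math

def plant_conc_freundlich(c_soil, ph, om_percent,
                          a=-0.5, b=0.7, c=-0.2, d=-0.3):
    """Freundlich-type plant-soil relation (placeholder coefficients):
       log10(C_plant) = a + b*log10(C_soil) + c*pH + d*log10(OM)."""
    log_cp = a + b * math.log10(c_soil) + c * ph + d * math.log10(om_percent)
    return 10 ** log_cp

def plant_conc_bcf(c_soil, bcf_geomean=0.4):
    """Fallback when no significant plant-soil relation exists for a crop:
       a (soil-property-normalised) geometric-mean BioConcentrationFactor."""
    return bcf_geomean * c_soil

# Tier 2 style check against a toxicological reference dose (all numbers
# below are invented for illustration):
c_veg = plant_conc_freundlich(c_soil=2.0, ph=6.0, om_percent=3.0)  # mg Cd/kg
intake = c_veg * 0.150 / 70.0        # 150 g vegetables/day, 70 kg body weight
print(intake < 0.00036)              # compare with an assumed reference dose
```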

  5. A tiered approach for the human health risk assessment for consumption of vegetables from cadmium-contaminated land in urban areas

    Energy Technology Data Exchange (ETDEWEB)

    Swartjes, Frank A., E-mail: frank.swartjes@rivm.nl; Versluijs, Kees W.; Otte, Piet F.

    2013-10-15

    Consumption of vegetables that are grown in urban areas takes place worldwide. In developing countries, vegetables are traditionally grown in urban areas for cheap food supply. In developing and developed countries, urban gardening is gaining momentum. A problem that arises with urban gardening is the presence of contaminants in soil, which can be taken up by vegetables. In this study, a scientifically-based and practical procedure has been developed for assessing the human health risks from the consumption of vegetables from cadmium-contaminated land. Starting from a contaminated site, the procedure follows a tiered approach which is laid out as follows. In Tier 0, the plausibility of growing vegetables is investigated. In Tier 1 soil concentrations are compared with the human health-based Critical soil concentration. Tier 2 offers the possibility for a detailed site-specific human health risk assessment in which calculated exposure is compared to the toxicological reference dose. In Tier 3, vegetable concentrations are measured and tested following a standardized measurement protocol. To underpin the derivation of the Critical soil concentrations and to develop a tool for site-specific assessment the determination of the representative concentration in vegetables has been evaluated for a range of vegetables. The core of the procedure is based on Freundlich-type plant–soil relations, with the total soil concentration and the soil properties as variables. When a significant plant–soil relation is lacking for a specific vegetable a geometric mean of BioConcentrationFactors (BCF) is used, which is normalized according to soil properties. Subsequently, a ‘conservative’ vegetable-group-consumption-rate-weighted BioConcentrationFactor is calculated as basis for the Critical soil concentration (Tier 1). The tool to perform site-specific human health risk assessment (Tier 2) includes the calculation of a ‘realistic worst case’ site-specific vegetable

  6. CMS rewards eight of its suppliers

    CERN Multimedia

    2002-01-01

    At the third awards ceremony to honour its top suppliers, the CMS collaboration presented awards to eight firms. Seven of them are involved in the manufacture of the magnet. The winners of the third CMS suppliers' awards visit the assembly site for the detector. Unsurprisingly, the CMS magnet was once again in the limelight at the third awards ceremony in honour of the collaboration's top suppliers. 'Unsurprisingly', because this magnet, which must produce an intense field of 4 Tesla inside an enormous volume (12 metres in diameter and 13 metres in length) is the detector's key component. As a result, many firms are involved in its construction. The CMS suppliers' awards are an annual event aimed at rewarding the exceptional efforts of certain companies. Firms are only eligible once they have delivered at least 50% of their supplies. This year, the collaboration honoured eight firms at a ceremony held on Monday 4 March in the main auditorium. Seven of th...

  7. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    Management Board Agendas and minutes of meetings of the Management Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 Collaboration Board Agendas and minutes of meetings of the Collaboration Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 LHCC: Feedback from the CMS Referees, LHCC 97 February 25, 2009. The CMS LHCC referees met with representatives of CMS on 17-2-09, to review progress since the last November minireview. The main topics included  shutdown construction, maintenance and repairs;  status of the preshower detector; commissioning and physics analysis results from cosmic ray running and CSA08;   preparations for physics, off line analysis, computing, and data distribution. TOTEM management and the TOTEM referees then joined us for a joint session to examine the readiness of the TOTEM detector. Detector construction, maintenance, and repairs. The referees congratulate C...

  8. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    Jim Virdee

    Management Board Agendas and minutes of meetings of the Management Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 Collaboration Board Agendas and minutes of meetings of the Collaboration Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 LHCC: Feedback from the CMS Referees, LHCC 97 February 25, 2009. The CMS LHCC referees met with representatives of CMS on 17-2-09, to review progress since the last November minireview. The main topics included  shutdown construction, maintenance and repairs;  status of the preshower detector; commissioning and physics analysis results from cosmic ray running and CSA08;   preparations for physics, off line analysis, computing, and data distribution. TOTEM management and the TOTEM referees then joined us for a joint session to examine the readiness of the TOTEM detector. Detector construction, maintenance, and repairs. The referees congratula...

  9. Extending the farm on external sites: the INFN Tier-1 experience

    Science.gov (United States)

    Boccali, T.; Cavalli, A.; Chiarelli, L.; Chierici, A.; Cesini, D.; Ciaschini, V.; Dal Pra, S.; dell'Agnello, L.; De Girolamo, D.; Falabella, A.; Fattibene, E.; Maron, G.; Prosperini, A.; Sapunenko, V.; Virgilio, S.; Zani, S.

    2017-10-01

The Tier-1 at CNAF is the main INFN computing facility, offering computing and storage resources to more than 30 different scientific collaborations, including the 4 experiments at the LHC. A huge increase in computing needs is also foreseen in the coming years, driven mainly by the experiments at the LHC (especially starting with Run 3 from 2021) but also by other upcoming experiments such as CTA[1]. While we are considering the upgrade of the infrastructure of our data center, we are also evaluating the possibility of using CPU resources available in other data centres or even leased from commercial cloud providers. Hence, at the INFN Tier-1, besides participating in the EU project HNSciCloud, we have also pledged a small amount of computing resources (~2000 cores) located at the Bari ReCaS[2] data center for the WLCG experiments for 2016, and we are testing the use of resources provided by a commercial cloud provider. While the Bari ReCaS data center is directly connected to the GARR network[3], with the obvious advantage of a low-latency and high-bandwidth connection, in the case of the commercial provider we rely only on the General Purpose Network. In this paper we describe the set-up phase and the first results of these installations, started in the last quarter of 2015, focusing on the issues that we have had to cope with and discussing the measured results in terms of efficiency.

  10. 26 CFR 1.1446-5 - Tiered partnership structures.

    Science.gov (United States)

    2010-04-01

26 CFR 1.1446-5, Tiered partnership structures (26 Internal Revenue, Pt. 12, 2010-04-01; Tax-Free Covenant Bonds). (a) In general. The rules of this section... (2) Lower-tier publicly traded partnership. The look-through rules of... (as defined in § 1.1446-4(b)(1)).

  11. CMS Collaboration

    International Nuclear Information System (INIS)

    Faridah Mohammad Idris; Wan Ahmad Tajuddin Wan Abdullah; Zainol Abidin Ibrahim

    2013-01-01

Full-text: The CMS Collaboration is an international scientific collaboration based at the European Organization for Nuclear Research (CERN), Switzerland, dedicated to carrying out research in experimental particle physics. Consisting of 179 institutions from 41 countries all around the world, the CMS Collaboration operates a general-purpose detector, the Compact Muon Solenoid (CMS), with which its members conduct experiments on the collisions of two proton beams accelerated in the LHC ring to a collision energy of 8 TeV. In this paper, we describe how the CMS detector is used by the scientists in the CMS Collaboration to reconstruct the most basic building blocks of matter. (author)

  12. Seasonal maximum temperature prediction skill over Southern Africa: 1- vs 2-tiered forecasting systems

    CSIR Research Space (South Africa)

    Lazenby, MJ

    2011-09-01

Full Text Available: Seasonal maximum temperature prediction skill over Southern Africa: 1- vs. 2-tiered forecasting systems (Melissa J. Lazenby, University of Pretoria, Private Bag X20, Pretoria, 0028, South Africa; Willem A. Landman, Council for Scientific and Industrial Research)...

  13. CMS Detector Posters

    CERN Multimedia

    2016-01-01

CMS Detector posters (produced in 2000): CMS installation CMS collaboration From the Big Bang to Stars LHC Magnetic Field Magnet System Tracking System Tracker Electronics Calorimetry Electromagnetic Calorimeter Hadronic Calorimeter Muon System Muon Detectors Trigger and data acquisition (DAQ) ECAL posters (produced in 2010, FR & EN): CMS ECAL CMS ECAL-Supermodule cooling and mechatronics CMS ECAL-Supermodule assembly

  14. CMS Dashboard Task Monitoring: A user-centric monitoring view

    International Nuclear Information System (INIS)

    Karavakis, Edward; Khan, Akram; Andreeva, Julia; Maier, Gerhild; Gaidioz, Benjamin

    2010-01-01

    We are now in a phase change of the CMS experiment where people are turning more intensely to physics analysis and away from construction. This brings a lot of challenging issues with respect to monitoring of the user analysis. The physicists must be able to monitor the execution status, application and grid-level messages of their tasks that may run at any site within the CMS Virtual Organisation. The CMS Dashboard Task Monitoring project provides this information towards individual analysis users by collecting and exposing a user-centric set of information regarding submitted tasks including reason of failure, distribution by site and over time, consumed time and efficiency. The development was user-driven with physicists invited to test the prototype in order to assemble further requirements and identify weaknesses with the application.

  15. Science on Drupal: An evaluation of CMS Technologies

    Science.gov (United States)

    Vinay, S.; Gonzalez, A.; Pinto, A.; Pascuzzi, F.; Gerard, A.

    2011-12-01

    We conducted an extensive evaluation of various Content Management System (CMS) technologies for implementing different websites supporting interdisciplinary science data and information. We chose two products, Drupal and Bluenog/Hippo CMS, to meet our specific needs and requirements. Drupal is an open source product that is quick and easy to setup and use. It is a very mature, stable, and widely used product. It has rich functionality supported by a large and active user base and developer community. There are many plugins available that provide additional features for managing citations, map gallery, semantic search, digital repositories (fedora), scientific workflows, collaborative authoring, social networking, and other functions. All of these work very well within the Drupal framework if minimal customization is needed. We have successfully implemented Drupal for multiple projects such as: 1) the Haiti Regeneration Initiative (http://haitiregeneration.org/); 2) the Consortium on Climate Risk in the Urban Northeast (http://beta.ccrun.org/); and 3) the Africa Soils Information Service (http://africasoils.net/). We are also developing two other websites, the Côte Sud Initiative (CSI) and Emerging Infectious Diseases, using Drupal. We are testing the Drupal multi-site install for managing different websites with one install to streamline the maintenance. In addition, paid support and consultancy for Drupal website development are available at affordable prices. All of these features make Drupal very attractive for implementing state-of-the-art scientific websites that do not have complex requirements. One of our major websites, the NASA Socioeconomic Data and Applications Center (SEDAC), has a very complex set of requirements. It has to easily re-purpose content across multiple web pages and sites with different presentations. It has to serve the content via REST or similar standard interfaces so that external client applications can access content in the CMS

  16. CMS Factsheet

    CERN Multimedia

    Lapka, Marzena; Rao, Achintya

    2016-01-01

    CMS Factsheets: containing facts about the CMS collaboration and detector. Printed copies of the English version are available from the CMS Secretariat. Responsible for translations: English only - E.Gibney (updated 2015)

  17. Multi-core processing and scheduling performance in CMS

    International Nuclear Information System (INIS)

    Hernández, J M; Evans, D; Foulkes, S

    2012-01-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to utilize the multi-core architecture effectively. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation per job. The experiment job management system needs to control a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues compared to the standard single-core processing workflows.
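
    A toy sketch of the resource-accounting argument above: it contrasts per-core slots (every core runs an independent job, loading its own copy of shared data) with whole-node allocation (one multi-core job per node, shared data loaded once). The node count, core count, memory figure and shareable fraction are illustrative assumptions, not CMS measurements.

      # Toy illustration of whole-node versus per-core allocation; this is not the CMS
      # workload management system, only a sketch of the memory-sharing difference.

      NODES = 4            # assumed number of identical worker nodes
      CORES_PER_NODE = 8

      def per_core_slots():
          # Single-core model: each core is an independent slot running its own job,
          # so libraries, geometry and conditions data are loaded once per core.
          return NODES * CORES_PER_NODE

      def whole_node_slots():
          # Whole-node model: the experiment receives entire nodes and runs one
          # multi-core job per node; shared data is loaded once per node.
          return NODES

      if __name__ == "__main__":
          mem_per_job_gb = 2.0        # assumed memory of one independent single-core job
          shared_fraction = 0.5       # assumed fraction of that memory that is shareable
          single_core_mem = per_core_slots() * mem_per_job_gb
          multi_core_mem = whole_node_slots() * (
              mem_per_job_gb * shared_fraction                            # shared once per node
              + CORES_PER_NODE * mem_per_job_gb * (1 - shared_fraction)   # private per core
          )
          print(f"single-core jobs: {single_core_mem:.0f} GB total memory")
          print(f"whole-node jobs : {multi_core_mem:.0f} GB total memory")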

  18. 38 CFR 36.4318 - Servicer tier ranking-temporary procedures.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Servicer tier ranking... § 36.4318 Servicer tier ranking—temporary procedures. (a) The Secretary shall assign to each servicer a “Tier Ranking” based upon the servicer's performance in servicing guaranteed loans. There shall be four...

  19. Genomic sequencing in cystic fibrosis newborn screening: what works best, two-tier predefined CFTR mutation panels or second-tier CFTR panel followed by third-tier sequencing?

    Science.gov (United States)

    Currier, Robert J; Sciortino, Stan; Liu, Ruiling; Bishop, Tracey; Alikhani Koupaei, Rasoul; Feuchtbaum, Lisa

    2017-10-01

    Purpose: The purpose of this study was to model the performance of several known two-tier, predefined mutation panels and three-tier algorithms for cystic fibrosis (CF) screening utilizing the ethnically diverse California population. Methods: The cystic fibrosis transmembrane conductance regulator (CFTR) mutations identified among the 317 CF cases in California screened between 12 August 2008 and 18 December 2012 were used to compare the expected CF detection rates for several two- and three-tier screening approaches, including the current California approach, which consists of a population-specific 40-mutation panel followed by third-tier sequencing when indicated. Results: The data show that the strategy of using third-tier sequencing improves CF detection following an initial elevated immunoreactive trypsinogen and detection of only one mutation on a second-tier panel. Conclusion: In a diverse population, the use of a second-tier panel followed by third-tier CFTR gene sequencing provides a better detection rate for CF, compared with the use of a second-tier approach alone, and is an effective way to minimize the referrals of CF carriers for sweat testing. Restricting screening to second-tier testing with predefined mutation panels, even broad ones, results in some missed CF cases and demonstrates the limited utility of this approach in states that have diverse multiethnic populations.

  20. CMS-Wave

    Science.gov (United States)

    2015-10-30

    Coastal Inlets Research Program CMS-Wave. CMS-Wave is a two-dimensional spectral wind-wave generation and transformation model that employs a forward-marching, finite-difference method to solve the wave action conservation equation. Capabilities of CMS-Wave include wave shoaling, refraction... CMS-Wave can be used in either a half- or full-plane mode, with primary waves propagating from the seaward boundary toward shore. It can
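
    For reference only, spectral steady-state models of this kind are commonly written in the generic textbook form below; this is not quoted from the CMS-Wave documentation and the notation is a general assumption.

      \frac{\partial (c_x N)}{\partial x}
      + \frac{\partial (c_y N)}{\partial y}
      + \frac{\partial (c_\theta N)}{\partial \theta}
      = \frac{S}{\sigma}

    where N(\sigma,\theta) = E(\sigma,\theta)/\sigma is the wave-action density spectrum, c_x, c_y and c_\theta are the propagation velocities in geographic and directional space, and S gathers the source and sink terms (wind input, depth-limited breaking, bottom friction and similar processes).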

  1. Phospholipase A2 activity-dependent and -independent fusogenic activity of Naja nigricollis CMS-9 on zwitterionic and anionic phospholipid vesicles.

    Science.gov (United States)

    Chiou, Yi-Ling; Chen, Ying-Jung; Lin, Shinne-Ren; Chang, Long-Sen

    2011-11-01

    CMS-9, a phospholipase A(2) (PLA(2)) from Naja nigricollis venom, induced the death of human breast cancer MCF-7 cells, accompanied by the formation of cell clumps without clear boundaries between cells. Annexin V-FITC staining indicated that abundant phosphatidylserine appeared on the outer membrane of MCF-7 cell clumps, implying the possibility that CMS-9 may promote membrane fusion via anionic phospholipids. To validate this proposition, the fusogenic activity of CMS-9 on vesicles composed of zwitterionic phospholipid alone or a combination of zwitterionic and anionic phospholipids was examined. Although CMS-9-induced fusion of zwitterionic phospholipid vesicles depended on PLA(2) activity, CMS-9-induced fusion of vesicles containing anionic phospholipids could occur without the involvement of PLA(2) activity. The membrane-damaging activity of CMS-9 was associated with its fusogenicity. Moreover, CMS-9 differentially induced membrane leakage and membrane fusion in vesicles with different compositions. Membrane fluidity and binding capability with phospholipid vesicles were not related to the fusogenicity of CMS-9. However, the membrane-bound conformation and mode of CMS-9 depended on phospholipid compositions. Collectively, our data suggest that the PLA(2) activity-dependent and -independent fusogenicity of CMS-9 are closely related to its membrane-bound modes and targeted membrane compositions. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Tracking performance with cosmic rays in CMS

    International Nuclear Information System (INIS)

    Cerati, G.B.

    2009-01-01

    The CMS Tracker is the biggest all-silicon detector in the world and is designed to be extremely efficient and accurate even in a very hostile environment such as the one close to the CMS collision point. It consists of an inner pixel detector, made of three barrel layers (48M pixels) and four forward disks (16M pixels), and an outer micro-strip detector, divided into two barrel sub-detectors, TIB and TOB, and two endcap sub-detectors, TID and TEC, for a total of 9.6M strips. The commissioning of the CMS Tracker detector was initially carried out at the Tracker Integration Facility (TIF) at CERN, where cosmic ray data were collected for the strip detector only, and is still ongoing at the CMS site (LHC Point 5). Here the Strip and Pixel detectors have been installed in the experiment and are taking part in the cosmic global runs. After an overview of the tracking algorithms for cosmic-ray data reconstruction, the resulting tracking performance on cosmic data both at the TIF and at P5 is presented. The excellent performance proves that the CMS Tracker is ready for the first collisions foreseen for 2009.

  3. Analysing CMS transfers using Machine Learning techniques

    CERN Document Server

    Diotalevi, Tommaso

    2016-01-01

    LHC experiments transfer more than 10 PB/week between all grid sites using the FTS transfer service. In particular, CMS manages almost 5 PB/week of FTS transfers with PhEDEx (Physics Experiment Data Export). FTS sends metrics about each transfer (e.g. transfer rate, duration, size) to a central HDFS storage at CERN. The work done during these three months as a Summer Student involved the use of ML techniques, with a CMS framework called DCAFPilot, to process these data and generate predictions of transfer latencies on all links between Grid sites. This analysis will provide, as a future service, the information needed to proactively identify and possibly fix latency issues in transfers over the WLCG.
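
    A minimal sketch of the kind of latency regression described above, using scikit-learn on synthetic per-transfer metrics. This is not DCAFPilot: the features, the synthetic latency model and the regressor choice are all assumptions made purely for illustration.

      # Sketch: predict transfer latency from per-transfer metrics with scikit-learn.
      # The data are synthetic placeholders, not real FTS/PhEDEx records.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(0)
      n = 5000
      size_gb  = rng.lognormal(mean=0.5, sigma=1.0, size=n)   # file size
      rate_mbs = rng.lognormal(mean=3.0, sigma=0.5, size=n)   # observed link rate
      queue    = rng.integers(0, 50, size=n)                  # pending transfers on the link
      latency  = size_gb * 1024 / rate_mbs + 30.0 * queue + rng.normal(0, 60, n)  # seconds

      X = np.column_stack([size_gb, rate_mbs, queue])
      X_train, X_test, y_train, y_test = train_test_split(X, latency, random_state=0)

      model = GradientBoostingRegressor().fit(X_train, y_train)
      print("MAE [s]:", mean_absolute_error(y_test, model.predict(X_test)))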

  4. Pharmacokinetics of Colistin Methansulphonate (CMS) and Colistin after CMS Nebulisation in Baboon Monkeys.

    Science.gov (United States)

    Marchand, Sandrine; Bouchene, Salim; de Monte, Michèle; Guilleminault, Laurent; Montharu, Jérôme; Cabrera, Maria; Grégoire, Nicolas; Gobin, Patrice; Diot, Patrice; Couet, William; Vecellio, Laurent

    2015-10-01

    The objective of this study was to compare two different nebulizers, the Eflow Rapid® and the Pari LC Star®, by scintigraphy and PK modeling, simulating epithelial lining fluid concentrations from measured plasma concentrations after nebulization of CMS in baboons. Three baboons received CMS by IV infusion and by two types of aerosol generators, and colistin by subcutaneous infusion. Gamma imaging was performed after nebulisation to determine colistin distribution in the lungs. Blood samples were collected during 9 h and colistin and CMS plasma concentrations were measured by LC-MS/MS. A population pharmacokinetic analysis was conducted and simulations were performed to predict lung concentrations after nebulization. Higher aerosol distribution into the lungs was observed by scintigraphy when CMS was nebulized with the Pari LC Star® than with the Eflow Rapid® nebulizer. This observation was confirmed by the fraction of CMS deposited into the lung (3.5% versus 1.3%, respectively). CMS and colistin simulated concentrations in epithelial lining fluid were higher with the Pari LC Star® than with the Eflow Rapid® system. A limited fraction of CMS reaches the lungs after nebulization, but higher colistin plasma concentrations were measured and higher intrapulmonary colistin concentrations were simulated with the Pari LC Star® than with the Eflow Rapid® system.

  5. Mesocosm soil ecological risk assessment tool for GMO 2nd tier studies

    DEFF Research Database (Denmark)

    D'Annibale, Alessandra; Maraldo, Kristine; Larsen, Thomas

    Ecological Risk Assessment (ERA) of GMO is basically identical to ERA of chemical substances, when it comes to assessing specific effects of the GMO plant material on the soil ecosystem. The tiered approach always includes the option of studying more complex but still realistic ecosystem level effects in 2nd tier caged experimental systems, cf. the new GMO ERA guidance: EFSA Journal 2010; 8(11):1879. We propose to perform a trophic structure analysis, TSA, and include the trophic structure as an ecological endpoint to gain more direct insight into the change in interactions between species, i.e. the food-web structure, instead of relying only on the indirect evidence from population abundances. The approach was applied for effect assessment in the agro-ecosystem where we combined factors of elevated CO2, viz. global climate change, and GMO plant effects. A multi-species (Collembola, Acari...

  6. First Run 2 Searches for Exotica at CMS

    CERN Document Server

    Başeğmez du Pree, S

    2016-01-01

    An overview of the first results of the experimental searches for exotica at the CMS experiment with 13 TeV collision data is presented. The results cover various models with different topologies, such as searches for new heavy resonances, extra space dimensions, black holes and dark matter. The analysis results with 13 TeV data are emphasized, corresponding to an integrated luminosity in the range of 2.1–2.8 fb⁻¹.

  7. Phase 2 environmental site assessment SW corner of plan 954 GV, block 7 : Main Street, Turner Valley, AB

    International Nuclear Information System (INIS)

    Reinson, E.; Harvey, K.; Brooks, N.; Burton, K.; Reinson, K.; Cropley, K.

    2010-05-01

    This document described the second phase of an environmental site assessment (ESA) at a parking lot in front of the Flare and Derrik on Main Street in Turner Valley, Alberta. The objective of this ESA was to confirm the presence of any substances of concern. The site has been occupied by an outdoor ice rink, and there is pipeline right-of-way along the east portion of the site. There is also an abandoned crude oil pipeline along the east portion of the property and an abandoned natural gas pipeline on the west side of the property. This ESA investigated the impact of the pipelines, the underground storage tanks on the adjacent sites, and the oil and gas lease on the adjacent site. Seven exploratory testholes were drilled and 3 monitoring wells were installed. The study involved soil inspection and field VOC measurements every 60 centimeters, or as required. Groundwater sampling and surveying was also performed. Groundwater and soil samples were analyzed for hydrocarbons, polycyclic aromatic hydrocarbons (PAHs), salinity and metals. All the results for soil were below Tier 1 guidelines. There appeared to be a layer of clay fill which had some hydrocarbons, PAHs and barium present, but they were below the applicable criteria. In terms of groundwater, PAHs and selenium exceeded the Tier 1 guidelines in monitoring well no. 4 (MW4). In another well, manganese exceeded the Tier 1 guidelines. Copper exceeded the Tier 1 guidelines in all three wells. All the results were below the Tier 2 guidelines in all 3 wells with the exception of carcinogenic PAHs (B(a)P TPE) in MW4. Due to the minor presence of hydrocarbons (below criteria) located in the surficial soil layer, Ballast Environmental Consultants recommended that an environmental professional be present during the excavation and that further soil samples be taken from the suspect layer to confirm all the soils in the area of the library are below the applicable criteria. refs., tabs., figs.

  8. Phase 2 environmental site assessment SW corner of plan 954 GV, block 7 : Main Street, Turner Valley, AB

    Energy Technology Data Exchange (ETDEWEB)

    Reinson, E.; Harvey, K.; Brooks, N.; Burton, K.; Reinson, K.; Cropley, K. [Ballast Environmental Consultants Ltd., Calgary, AB (Canada)

    2010-05-15

    This document described the second phase of an environmental site assessment (ESA) at a parking lot in front of the Flare and Derrik on Main Street in Turner Valley, Alberta. The objective of this ESA was to confirm the presence of any substances of concern. The site has been occupied by an outdoor ice rink, and there is pipeline right-of-way along the east portion of the site. There is also an abandoned crude oil pipeline along the east portion of the property and an abandoned natural gas pipeline on the west side of the property. This ESA investigated the impact of the pipelines, the underground storage tanks on the adjacent sites, and the oil and gas lease on the adjacent site. Seven exploratory testholes were drilled and 3 monitoring wells were installed. The study involved soil inspection and field VOC measurements every 60 centimeters, or as required. Groundwater sampling and surveying was also performed. Groundwater and soil samples were analyzed for hydrocarbons, polycyclic aromatic hydrocarbons (PAHs), salinity and metals. All the results for soil were below Tier 1 guidelines. There appeared to be a layer of clay fill which had some hydrocarbons, PAHs and barium present, but they were below the applicable criteria. In terms of groundwater, PAHs and selenium exceeded the Tier 1 guidelines in monitoring well no. 4 (MW4). In another well, manganese exceeded the Tier 1 guidelines. Copper exceeded the Tier 1 guidelines in all three wells. All the results were below the Tier 2 guidelines in all 3 wells with the exception of carcinogenic PAHs (B(a)P TPE) in MW4. Due to the minor presence of hydrocarbons (below criteria) located in the surficial soil layer, Ballast Environmental Consultants recommended that an environmental professional be present during the excavation and that further soil samples be taken from the suspect layer to confirm all the soils in the area of the library are below the applicable criteria. refs., tabs., figs.

  9. Mitochondrial nad2 gene is co-transcripted with CMS-associated orfB gene in cytoplasmic male-sterile stem mustard (Brassica juncea).

    Science.gov (United States)

    Yang, Jing-Hua; Zhang, Ming-Fang; Yu, Jing-Quan

    2009-02-01

    The transcriptional patterns of mitochondrial respiratory-related genes were investigated in cytoplasmic male-sterile and fertile maintainer lines of stem mustard, Brassica juncea. There were numerous differences in nad2 (subunit 2 of NADH dehydrogenase) between stem mustard CMS and its maintainer line. One novel open reading frame, hereafter named the orfB gene, was located downstream of the mitochondrial nad2 gene in the CMS. The novel orfB gene showed high similarity to YMF19 family proteins, orfB in Raphanus sativus, Helianthus annuus, Nicotiana tabacum and Beta vulgaris, orfB-CMS in Daucus carota, the atp8 gene in Arabidopsis thaliana, the 5' flanking region of orf224 in B. napus (nap CMS) and the 5' flanking region of the orf220 gene in CMS Brassica juncea. Three copies probed by a specific fragment (amplified with primers nad2F and nad2R from the CMS) were found in the CMS line by Southern blotting after HindIII digestion, but only a single copy in its maintainer line. Likewise, two transcripts were detected in the CMS line by Northern blotting with the same probe, while only one transcript was detected in the maintainer line. Meanwhile, the expression of the nad2 gene was reduced in CMS buds compared to that in the maintainer line. We thus suggest that the nad2 gene may be co-transcripted with the CMS-associated orfB gene in the CMS. In addition, the specific fragment amplified with primers nad2F and nad2R spans partial sequences of both the nad2 gene and the orfB gene. Such alterations in the nad2 gene would impact the activity of NADH dehydrogenase and subsequently signaling, inducing the expression of nuclear genes involved in male sterility in this type of cytoplasmic male sterility.

  10. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    Science.gov (United States)

    Meyer, Jörg; Quadt, Arnulf; Weber, Pavel; ATLAS Collaboration

    2011-12-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the Worldwide LHC Computing Grid (WLCG). The status and performance of the Tier-2 center are presented, with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computing and human resources.

  11. CMS AWARDS

    CERN Multimedia

    Steven Lowette

    Working under great time pressure towards a common goal in gradual steps can sometimes cause us to forget to take a step back, and celebrate what marvels have been achieved. A general need was felt within CMS to expand the recognition for our young scientists who made outstanding, well-recognized and creative contributions to CMS, which served to significantly advance the performance of CMS as a complete and powerful experiment. Therefore, the Collaboration Board endorsed in March 2009 a proposal from the CB Chair and Advisory Group to award each year the newly created "CMS Achievement Award" to fourteen graduate students and postdocs that made exceptional contributions to the Tracker, ECAL, HCAL and Muon subdetectors as well as the TriDAS project, the Commissioning of CMS and the Offline Software and Computing projects. It was also agreed that there was a need to go back in time, and retroactively attribute awards for the years 2007 and 2008 when CMS went from a bare cavern to a detect...

  12. Future Approach to tier-0 extension

    Science.gov (United States)

    Jones, B.; McCance, G.; Cordeiro, C.; Giordano, D.; Traylen, S.; Moreno García, D.

    2017-10-01

    The current tier-0 processing at CERN is done on two managed sites, the CERN computer centre and the Wigner computer centre. With the proliferation of public cloud resources at increasingly competitive prices, we have been investigating how to transparently increase our compute capacity to include these providers. The approach taken has been to integrate these resources using our existing deployment and computer management tools and to provide them in a way that exposes them to users as part of the same site. The paper will describe the architecture, the toolset and the current production experiences of this model.
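
    A sketch of the general pattern only: provision capacity from an external provider and hand the new machines to the existing site management layer so they appear as part of the same site. The endpoint, token, payload and response field below are hypothetical; CERN's actual deployment and configuration tooling is not reproduced here.

      # Hypothetical provisioning flow; endpoint and payload are placeholders.
      import requests

      PROVIDER_API = "https://cloud.example.org/v1/servers"   # hypothetical provider endpoint
      AUTH = {"X-Auth-Token": "REDACTED"}

      def provision_workers(count, flavor="8core-16gb", image="site-worker-node"):
          """Ask the provider for `count` identical worker VMs and return their IPs."""
          ips = []
          for _ in range(count):
              r = requests.post(PROVIDER_API, headers=AUTH,
                                json={"flavor": flavor, "image": image})
              r.raise_for_status()
              ips.append(r.json()["ip"])        # hypothetical response field
          return ips

      def register_with_site(ips):
          # Placeholder for the real step: apply the same configuration-management
          # profiles used for on-premise nodes, so the VMs are exposed as part of the site.
          for ip in ips:
              print(f"configuring {ip} with the standard worker-node profile")

      if __name__ == "__main__":
          register_with_site(provision_workers(2))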

  13. CMS end-cap yoke at the detector's assembly site.

    CERN Multimedia

    Patrice Loïez

    2002-01-01

    The magnetic flux generated by the superconducting coil in the CMS detector is returned via an iron yoke comprising three end-cap discs at each end (end-cap yoke) and five concentric cylinders (barrel yoke). This picture shows the first of three end-cap discs (red) seen through the outer cylinder of the vacuum tank which will house the superconducting coil.

  14. RE-AIM Checklist for Integrating and Sustaining Tier 2 Social-Behavioral Interventions

    Science.gov (United States)

    Cheney, Douglas A.; Yong, Minglee

    2014-01-01

    Even though evidence-based Tier 2 programs are now more commonly available, integrating and sustaining these interventions in schools remain challenging. RE-AIM, which stands for Reach, Effectiveness, Adoption, Implementation, and Maintenance, is a public health framework used to maximize the effectiveness of health promotion programs in…

  15. CMS overview

    CERN Document Server

    AUTHOR|(CDS)2071615

    2016-01-01

    The most recent CMS data related to high-density QCD are presented for pp and PbPb collisions at 2.76 TeV and pPb collisions at 5.02 TeV. The PbPb collisions are essential to understand collective behavior and the final-state effects for the detailed characteristics of hot, dense partonic matter, whereas the pPb collisions provide critical information on the initial-state effects, including the modification of the parton distribution function in cold nuclei. This paper highlights some of the recent heavy-ion-related results from CMS.

  16. 42 CFR 401.108 - CMS rulings.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false CMS rulings. 401.108 Section 401.108 Public Health... GENERAL ADMINISTRATIVE REQUIREMENTS Confidentiality and Disclosure § 401.108 CMS rulings. (a) After... regulations, but which has been adopted by CMS as having precedent, may be published in the Federal Register...

  17. EDSP Tier 2 test (T2T) guidances and protocols are delivered, including web-based guidance for diagnosing and scoring, and evaluating EDC-induced pathology in fish and amphibian

    Science.gov (United States)

    The Agency’s Endocrine Disruptor Screening Program (EDSP) consists of two tiers. The first tier provides information regarding whether a chemical may have endocrine disruption properties. Tier 2 tests provide confirmation of ED effects and dose-response information to be us...

  18. CMS General Poster 2009 : to raise awareness of CMS, the CMS detector, its parts and people

    CERN Multimedia

    CMS outreach

    2012-01-01

    A poster which is identical to the two inside pages of the CMS brochure. The poster contains an image of a cross section of the CMS detector, explanation of detector parts, the aims of the CMS experiment and numbers of scientists and institutions associated with the experiment.

  19. The Effect of Tier 2 Intervention for Phonemic Awareness in a Response-to-Intervention Model in Low-Income Preschool Classrooms

    Science.gov (United States)

    Koutsoftas, Anthony D.; Harmon, Mary Towle; Gray, Shelley

    2009-01-01

    Purpose: This study assessed the effectiveness of a Tier 2 intervention that was designed to increase the phonemic awareness skills of low-income preschoolers who were enrolled in Early Reading First classrooms. Method: Thirty-four preschoolers participated in a multiple baseline across participants treatment design. Tier 2 intervention for…

  20. Evaluation of NASA's Carbon Monitoring System (CMS) Flux Pilot: Terrestrial CO2 Fluxes

    Science.gov (United States)

    Fisher, J. B.; Polhamus, A.; Bowman, K. W.; Collatz, G. J.; Potter, C. S.; Lee, M.; Liu, J.; Jung, M.; Reichstein, M.

    2011-12-01

    NASA's Carbon Monitoring System (CMS) flux pilot project combines NASA's Earth System models in land, ocean and atmosphere to track surface CO2 fluxes. The system is constrained by atmospheric measurements of XCO2 from the Japanese GOSAT satellite, giving a "big picture" view of total CO2 in Earth's atmosphere. Combining two land models (CASA-Ames and CASA-GFED), two ocean models (ECCO2 and NOBM) and two atmospheric chemistry and inversion models (GEOS-5 and GEOS-Chem), the system brings together the stand-alone component models of the Earth System, all of which are run diagnostically constrained by a multitude of other remotely sensed data. Here, we evaluate the biospheric land surface CO2 fluxes (i.e., net ecosystem exchange, NEE) as estimated from the atmospheric flux inversion. We compare against the prior bottom-up estimates (e.g., the CASA models) as well. Our evaluation dataset is the independently derived global wall-to-wall MPI-BGC product, which uses a machine learning algorithm and model tree ensemble to "scale-up" a network of in situ CO2 flux measurements from 253 globally-distributed sites in the FLUXNET network. The measurements are based on the eddy covariance method, which uses observations of co-varying fluxes of CO2 (and water and energy) from instruments on towers extending above ecosystem canopies; the towers integrate fluxes over large spatial areas (~1 km2). We present global maps of CO2 fluxes and differences between products, summaries of fluxes by TRANSCOM region, country, latitude, and biome type, and assess the time series, including timing of minimum and maximum fluxes. This evaluation shows both where the CMS is performing well, and where improvements should be directed in further work.
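
    A minimal numpy sketch of the kind of comparison described above: differencing a gridded NEE field against an independent reference product over a region mask and summarizing bias and RMSE. The arrays, grid size and region mask are synthetic stand-ins, not the actual CMS flux pilot or MPI-BGC data.

      # Sketch of comparing two gridded NEE products over a region mask (synthetic data).
      import numpy as np

      rng = np.random.default_rng(42)
      nlat, nlon = 90, 180                         # 2-degree grid, for illustration
      nee_model = rng.normal(0.0, 1.0, (nlat, nlon))               # stand-in for inversion NEE
      nee_ref   = nee_model + rng.normal(0.1, 0.3, (nlat, nlon))   # stand-in for upscaled FLUXNET NEE
      region    = np.zeros((nlat, nlon), dtype=bool)
      region[50:70, 20:60] = True                  # hypothetical TRANSCOM-like region mask

      diff = nee_model[region] - nee_ref[region]
      print("regional bias :", diff.mean())
      print("regional RMSE :", np.sqrt((diff ** 2).mean()))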

  1. LHC(ATLAS, CMS, LHCb) Run 2 commissioning status

    CERN Document Server

    Zimmermann, Stephanie; The ATLAS collaboration

    2015-01-01

    After a very successful Run 1, the LHC accelerator and the LHC experiments underwent intensive consolidation, maintenance and upgrade activities during the last two years in what has become known as Long Shutdown 1 (LS1). LS1 ended in February this year, with beams back in the LHC since Easter. This talk will give a summary of the major shutdown activities of ATLAS, CMS and LHCb and review the status of commissioning for Run 2 physics data taking.

  2. Lambda Station: Alternate network path forwarding for production SciDAC applications

    International Nuclear Information System (INIS)

    Grigoriev, Maxim; Bobyshev, Andrey; Crawford, Matt; DeMar, Phil; Grigaliunas, Vyto; Moibenko, Alexander; Petravick, Don; Newman, Harvey; Steenberg, Conrad; Thomas, Michael

    2007-01-01

    The LHC era will start very soon, creating immense data volumes capable of demanding allocation of an entire network circuit for task-driven applications. Circuit-based alternate network paths are one solution to meeting the LHC high bandwidth network requirements. The Lambda Station project is aimed at addressing growing requirements for dynamic allocation of alternate network paths. Lambda Station facilitates the rerouting of designated traffic through site LAN infrastructure onto so-called 'high-impact' wide-area networks. The prototype Lambda Station, developed with a Service Oriented Architecture (SOA) approach in mind, will be presented. Lambda Station has been successfully integrated into the production version of the Storage Resource Manager (SRM), and deployed at the US-CMS Tier-1 center at Fermilab, as well as at the US-CMS Tier-2 site at Caltech. This paper will discuss experiences using the prototype system with production SciDAC applications for data movement between Fermilab and Caltech. The architecture and design principles of the production version of the Lambda Station software, currently being implemented as Java-based web services, will also be presented in this paper.
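
    For illustration of the interaction pattern only: a data-movement client asks a path-selection web service to route a given flow over the alternate, high-impact path for a limited time. The endpoint, payload fields and response field are entirely hypothetical and do not reproduce the actual Lambda Station interface.

      # Hypothetical request to a path-selection service; not the Lambda Station API.
      import requests

      SERVICE_URL = "https://lambdastation.example.org/api/requestPath"   # hypothetical

      flow = {
          "src_site": "US-CMS-Tier1-FNAL",
          "dst_site": "US-CMS-Tier2-Caltech",
          "src_subnet": "131.225.0.0/16",    # illustrative flow specification
          "bandwidth_mbps": 2000,
          "duration_s": 3600,
      }

      resp = requests.post(SERVICE_URL, json=flow, timeout=10)
      resp.raise_for_status()
      ticket = resp.json()
      print("alternate path granted until:", ticket.get("expires"))   # hypothetical field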

  3. Success in the pipeline for CMS

    CERN Multimedia

    2008-01-01

    The very heart of any LHC experiment is not a pixel detector, nor a vertex locator, but a beam pipe. It is the site of each collision and the boundary where the accelerator and experiment meet. An element of complex design and manufacture, the CMS beam pipe was fifteen years in the making and was finally fully installed on Tuesday 10 June. The compensation modules were the final pieces to take their places in the cavern at Point 5: "These are like bellows," says Wolfram Zeuner, Deputy Technical Co-ordinator for CMS. "They allow us to compensate for the change in length when we heat or cool the beam pipe. And they are the very last elements; beam pipe installation, which began last year, is now complete." The beam pipe is neither too fragile nor too bulky, but just right to satisfy the conflicting n...

  4. CMS Use of a Data Federation

    CERN Document Server

    Bloom, Kenneth Arthur

    2014-01-01

    CMS is in the process of deploying an Xrootd-based infrastructure to facilitate a global data federation. The services of the federation are available to export data from half the physical capacity, and the majority of sites are configured to read data over the federation as a back-up. CMS began with a relatively modest set of use-cases for recovery of failed local file opens, debugging and visualization. CMS is finding that the data federation can also be used to support small-scale analysis and load balancing. Looking forward, we see potential in using the federation to provide more flexibility in the locations where workflows are executed, as the differences between local access and wide-area access are diminished by optimization and improved networking. In this presentation we will discuss the application development work and the facility deployment work, the use-cases currently in production, and the potential for the technology moving forward.
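
    A sketch of the fallback pattern described above, using PyROOT's TFile::Open: try the local storage path first and fall back to reading over the wide area through a federation redirector. The local path is a placeholder and the redirector hostname is only a commonly cited example; in practice sites configure this fallback in their storage and catalogue layer rather than in user code.

      # Sketch of local-open-then-federation-fallback; paths and hostnames are illustrative.
      import ROOT

      LFN = "/store/data/Run2012A/SingleMu/AOD/example.root"   # hypothetical logical file name

      def open_with_fallback(lfn):
          # Attempt to read from local site storage first.
          local = ROOT.TFile.Open("file:/pnfs/site.example" + lfn)
          if local and not local.IsZombie():
              return local
          # Fall back to remote reading via an Xrootd federation redirector.
          return ROOT.TFile.Open("root://cmsxrootd.fnal.gov/" + lfn)

      f = open_with_fallback(LFN)
      print("opened:", f.GetName() if f and not f.IsZombie() else "failed")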

  5. A systems relations model for Tier 2 early intervention child mental health services with schools: an exploratory study.

    Science.gov (United States)

    van Roosmalen, Marc; Gardner-Elahi, Catherine; Day, Crispin

    2013-01-01

    Over the last 15 years, policy initiatives have aimed at the provision of more comprehensive Child and Adolescent Mental Health care. These presented a series of new challenges in organising and delivering Tier 2 child mental health services, particularly in schools. This exploratory study aimed to examine and clarify the service model underpinning a Tier 2 child mental health service offering school-based mental health work. Using semi-structured interviews, clinician descriptions of operational experiences were gathered. These were analysed using grounded theory methods. Analysis was validated by respondents at two stages. A pathway for casework emerged that included a systemic consultative function, as part of an overall three-function service model, which required: (1) activity as a member of the multi-agency system; (2) activity to improve the system working around a particular child; and (3) activity to universally develop a Tier 1 workforce confident in supporting children at risk of or experiencing mental health problems. The study challenged the perception of such a service serving solely a Tier 2 function, the requisite workforce to deliver the service model, and could give service providers a rationale for negotiating service models that include an explicit focus on improving the children's environments.

  6. A new Information Architecture, Website and Services for the CMS Experiment

    Science.gov (United States)

    Taylor, Lucas; Rusack, Eleanor; Zemleris, Vidmantas

    2012-12-01

    The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services, and hundreds of thousands of documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and php/perl). The new navigation, content and search services are coherently integrated with numerous existing CERN services (CDS, EDMS, Indico, phonebook, Twiki) as well as many CMS internal Web services. We describe the information architecture; the system design, implementation and monitoring; the document and content database; security aspects; and our deployment strategy, which ensured continual smooth operation of all systems at all times.

  7. A new Information Architecture, Website and Services for the CMS Experiment

    International Nuclear Information System (INIS)

    Taylor, Lucas; Rusack, Eleanor; Zemleris, Vidmantas

    2012-01-01

    The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services, and hundreds of thousands of documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and php/perl). The new navigation, content and search services are coherently integrated with numerous existing CERN services (CDS, EDMS, Indico, phonebook, Twiki) as well as many CMS internal Web services. We describe the information architecture; the system design, implementation and monitoring; the document and content database; security aspects; and our deployment strategy, which ensured continual smooth operation of all systems at all times.

  8. A new information architecture, website and services for the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Lucas [Fermilab; Rusack, Eleanor [Fermilab; Zemleris, Vidmantas [Vilnius U.

    2012-01-01

    The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services, and hundreds of thousands of documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and php/perl). The new navigation, content and search services are coherently integrated with numerous existing CERN services (CDS, EDMS, Indico, phonebook, Twiki) as well as many CMS internal Web services. We describe the information architecture, the system design, implementation and monitoring, the document and content database, security aspects, and our deployment strategy, which ensured continual smooth operation of all systems at all times.

  9. A Voyage to Arcturus: A model for automated management of a WLCG Tier-2 facility

    International Nuclear Information System (INIS)

    Roy, Gareth; Crooks, David; Mertens, Lena; Mitchell, Mark; Skipsey, Samuel Cadellin; Britton, David; Purdie, Stuart

    2014-01-01

    With the current trend towards 'On Demand Computing' in big data environments it is crucial that the deployment of services and resources becomes increasingly automated. Deployment based on cloud platforms is available for large scale data centre environments but these solutions can be too complex and heavyweight for smaller, resource constrained WLCG Tier-2 sites. Along with a greater desire for bespoke monitoring and collection of Grid related metrics, a more lightweight and modular approach is desired. In this paper we present a model for a lightweight automated framework which can be used to build WLCG grid sites, based on 'off the shelf' software components. As part of the research into an automation framework, the use of both IPMI and SNMP for physical device management will be included, as well as the use of SNMP as a monitoring/data sampling layer such that more comprehensive decision making can take place and potentially be automated. This could lead to reduced downtime and better performance as services are recognised to be in a non-functional state by autonomous systems.
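
    A minimal sketch of the SNMP sampling-and-decision step described above, using the net-snmp command-line tool via subprocess. The host, community string, OID and threshold are placeholders; a production framework would use a proper SNMP library and site-specific MIBs.

      # Sketch: sample a metric over SNMP and make a simple automated decision.
      import subprocess

      HOST = "worker01.example.org"          # placeholder worker node
      COMMUNITY = "public"                   # placeholder community string
      OID = "1.3.6.1.4.1.2021.10.1.3.1"      # UCD-SNMP 1-minute load average (illustrative)

      def snmp_get(host, oid):
          out = subprocess.run(
              ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", host, oid],
              capture_output=True, text=True, check=True,
          )
          return out.stdout.strip()

      if __name__ == "__main__":
          load = float(snmp_get(HOST, OID).strip('"'))
          if load > 32.0:                    # hypothetical threshold for a 16-core node
              print(f"{HOST}: load {load} too high, flag node for draining")
          else:
              print(f"{HOST}: load {load} OK")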

  10. Prevalence of syphilis infection in different tiers of female sex workers in China: implications for surveillance and interventions

    Directory of Open Access Journals (Sweden)

    Chen Xiang-Sheng

    2012-04-01

    Background: Syphilis has made a dramatic resurgence in China during the past two decades and has become the third most prevalent notifiable infectious disease in China. Female sex workers (FSWs) have become one of the key populations for the epidemic. In order to investigate syphilis infection among different tiers of FSWs, a cross-sectional study was conducted in 8 sites in China. Methods: Serum specimens (n = 7,118) were collected to test for syphilis, and questionnaire interviews were conducted to obtain socio-demographic and behavioral information among FSWs recruited from different types of venues. FSWs were categorized into three tiers (high-, middle- and low-tier FSWs) based on the venues where they solicited clients. Serum specimens were screened with an enzyme-linked immunosorbent assay (ELISA) for treponemal antibody, followed by confirmation with a non-treponemal toluidine red unheated serum test (TRUST) for positive ELISA specimens to determine syphilis infection. A logistic regression model was used to determine factors associated with syphilis infection. Results: Overall syphilis prevalence was 5.0% (95%CI, 4.5-5.5%). Low-tier FSWs had the highest prevalence (9.7%; 95%CI, 8.3-11.1%), followed by middle-tier (4.3%; 95%CI, 3.6-5.0%, P P Conclusions: This multi-site survey showed a high prevalence of syphilis infection among FSWs and substantial disparities in syphilis prevalence by the tier of FSWs. The difference in syphilis prevalence is substantial between different tiers of FSWs, with the highest rate among low-tier FSWs. Thus, current surveillance and intervention activities, which have low coverage of low-tier FSWs in China, should be further examined.

  11. CMS Fast Facts

    Data.gov (United States)

    U.S. Department of Health & Human Services — CMS has developed a new quick reference statistical summary on annual CMS program and financial data. CMS Fast Facts includes summary information on total program...

  12. CMS collision events (7 TeV): candidate ZZ to 2e+2mu

    CERN Multimedia

    Taylor, L

    2011-01-01

    Real CMS proton-proton collision events in which two high-energy electrons and two high-energy muons are observed. The event shows characteristics expected from the decay of a Higgs boson, but is also consistent with background Standard Model physics processes. Prepared for the EPS HEP 2011 Conference.

  13. CMS Security Handbook The Comprehensive Guide for WordPress, Joomla, Drupal, and Plone

    CERN Document Server

    Canavan, Tom

    2011-01-01

    Learn to secure Web sites built on open source CMSs Web sites built on Joomla!, WordPress, Drupal, or Plone face some unique security threats. If you're responsible for one of them, this comprehensive security guide, the first of its kind, offers detailed guidance to help you prevent attacks, develop secure CMS-site operations, and restore your site if an attack does occur. You'll learn a strong, foundational approach to CMS operations and security from an expert in the field.More and more Web sites are being built on open source CMSs, making them a popular target, thus making you vulnerable t

  14. Quantitative proteomic analysis of CMS-related changes in Honglian CMS rice anther.

    Science.gov (United States)

    Sun, Qingping; Hu, Chaofeng; Hu, Jun; Li, Shaoqing; Zhu, Yingguo

    2009-10-01

    Honglian (HL) cytoplasmic male sterility (CMS) is one of the rice CMS types and has been widely used in hybrid rice production in China. The CMS line (Yuetai A, YTA) has a Yuetai B (maintainer line, YTB) nuclear genome, but has a rearranged mitochondrial (mt) genome consisting of Yuetai B. The fertility of the hybrid (HL-6) was restored by the restorer gene in the nuclear genome of the restorer line (9311). We used isotope-coded affinity tag (ICAT) technology to perform protein profiling of uninucleate-stage rice anthers and identify CMS-HL related proteins. Two separate ICAT analyses were performed in this study: (1) anthers from YTA versus anthers from YTB, and (2) anthers from YTA versus anthers from HL-6. Based on the two analyses, a total of 97 unique proteins were identified and quantified in uninucleate-stage rice anthers at an error rate of less than 10%, of which eight proteins showed abundance changes of at least twofold between YTA and YTB. Triosephosphate isomerase, fructokinase II, DNA-binding protein GBP16 and ribosomal protein L3B were over-expressed in YTB, while an oligopeptide transporter, floral organ regulator 1, a kinase and S-adenosyl-L-methionine synthetase were over-expressed in YTA. The reduction of proteins associated with energy production and the lower ATP equivalents detected in the CMS anther indicated that the low level of energy production played an important role in inducing CMS-HL.

  15. CMS Wallet Card

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Wallet Card is a quick reference statistical summary on annual CMS program and financial data. The CMS Wallet Card is available for each year from 2004...

  16. CMS distributed data analysis with CRAB3

    Science.gov (United States)

    Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; Riahi, H.; Tanasijczuk, A. J.; Yusli, M. N. B.; Wolf, M.; Woodard, A. E.; Vaandering, E.

    2015-12-01

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. The new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.
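
    For orientation, a minimal crabConfig.py of the kind submitted through the lightweight client mentioned above. The attribute names follow the public CRAB3 documentation as remembered here and should be checked against the current release; the dataset, CMSSW configuration file and storage site are placeholders.

      # Minimal CRAB3 configuration sketch; parameter values are placeholders.
      from CRABClient.UserUtilities import config

      config = config()

      config.General.requestName = 'MyAnalysis_v1'
      config.General.workArea    = 'crab_projects'

      config.JobType.pluginName  = 'Analysis'
      config.JobType.psetName    = 'my_analysis_cfg.py'     # CMSSW configuration file

      config.Data.inputDataset   = '/SingleMu/Run2012A-22Jan2013-v1/AOD'
      config.Data.splitting      = 'LumiBased'
      config.Data.unitsPerJob    = 50
      config.Data.publication    = False

      config.Site.storageSite    = 'T2_XX_Example'          # placeholder site name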

  17. CMS tracker visualization tools

    Energy Technology Data Exchange (ETDEWEB)

    Mennea, M.S. [Dipartimento Interateneo di Fisica 'Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Osborne, I. [Northeastern University, 360 Huntington Avenue, Boston, MA 02115 (United States); Regano, A. [Dipartimento Interateneo di Fisica 'Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Zito, G. [Dipartimento Interateneo di Fisica 'Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy)]. E-mail: giuseppe.zito@ba.infn.it

    2005-08-21

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each one of these being a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS) that provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  18. CMS tracker visualization tools

    CERN Document Server

    Zito, G; Osborne, I; Regano, A

    2005-01-01

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each one of these being a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS) that provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  19. CMS tracker visualization tools

    International Nuclear Information System (INIS)

    Mennea, M.S.; Osborne, I.; Regano, A.; Zito, G.

    2005-01-01

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each one of these being a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS) that provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  20. 42 CFR 460.20 - Notice of CMS determination.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Notice of CMS determination. 460.20 Section 460.20... ELDERLY (PACE) PACE Organization Application and Waiver Process § 460.20 Notice of CMS determination. (a... application to CMS, CMS takes one of the following actions: (1) Approves the application. (2) Denies the...

  1. CMS Young Researchers Award 2013 and Fundamental Physics Scholars Award from the CMS Experiment

    CERN Multimedia

    Lapka, Marzena

    2014-01-01

    Photo 2: CMS Fundamental Physics Scholars (FPSs) 1st prize: Joosep Pata, from the Estonian National Institute of Chemical Physics and Biophysics / Photo 1 and 3: CMS Young Researchers Award. From left to right: Guido Tonelli, Colin Bernet, Andre David, Oliver Gutsche, Dmytro Kovalskyi, Andrea Petrucci, Joe Incandela and Jim Virdee

  2. CMS computing on grid

    International Nuclear Information System (INIS)

    Guan Wen; Sun Gongxing

    2007-01-01

    CMS has adopted a distributed system of services which implement the CMS application view on top of Grid services. An overview of CMS services will be covered. Emphasis is on CMS data management and workload management. (authors)

  3. Tiers of intervention in kindergarten through third grade.

    Science.gov (United States)

    O'Connor, Rollanda E; Harty, Kristin R; Fulmer, Deborah

    2005-01-01

    This study measured the effects of increasing levels of intervention in reading for a cohort of children in Grades K through 3 to determine whether the severity of reading disability (RD) could be significantly reduced in the catchment schools. Tier 1 consisted of professional development for teachers of reading. The focus of this study is on additional instruction that was provided as early as kindergarten for children whose achievement fell below average. Tier 2 intervention consisted of small-group reading instruction 3 times per week, and Tier 3 of daily instruction delivered individually or in groups of two. A comparison of the reading achievement of third-grade children who were at risk in kindergarten showed moderate to large differences favoring children in the tiered interventions in decoding, word identification, fluency, and reading comprehension.

  4. A Distributed Tier-1

    DEFF Research Database (Denmark)

    Fischer, Lars; Grønager, Michael; Kleist, Josva

    2008-01-01

    The Tier-1 facility operated by the Nordic DataGrid Facility (NDGF) differs significantly from other Tier-1s in several aspects: firstly, it is not located at one or a few premises, but instead is distributed throughout the Nordic countries; secondly, it is not under the governance of a single organization but instead is a meta-center built of resources under the control of a number of different national organizations. We present some technical implications of these aspects as well as the high-level design of this distributed Tier-1. The focus will be on computing services, storage and monitoring.

  5. A distributed Tier-1

    Science.gov (United States)

    Fischer, L.; Grønager, M.; Kleist, J.; Smirnova, O.

    2008-07-01

    The Tier-1 facility operated by the Nordic DataGrid Facility (NDGF) differs significantly from other Tier-1s in several aspects: firstly, it is not located at one or a few premises, but instead is distributed throughout the Nordic countries; secondly, it is not under the governance of a single organization but instead is a meta-center built of resources under the control of a number of different national organizations. We present some technical implications of these aspects as well as the high-level design of this distributed Tier-1. The focus will be on computing services, storage and monitoring.

  6. Towards more stable operation of the Tokyo Tier2 center

    Science.gov (United States)

    Nakamura, T.; Mashimo, T.; Matsui, N.; Sakamoto, H.; Ueda, I.

    2014-06-01

    The Tokyo Tier2 center, located at the International Center for Elementary Particle Physics (ICEPP) of the University of Tokyo, was established as a regional analysis center in Japan for the ATLAS experiment. Official operation within WLCG started in 2007, after several years of development beginning in 2002. In December 2012, almost all hardware was replaced in the third system upgrade to cope with the further growth of ATLAS data to be analysed. The number of CPU cores was doubled (to 9984 cores in total), and the performance of each CPU core improved by 20% according to the HEPSPEC06 benchmark in 32-bit compile mode; the score is estimated as 18.03 (SL6) per core for the Intel Xeon E5-2680 at 2.70 GHz. Since each worker node is configured with 16 CPU cores, 624 blade servers were deployed in total. They are connected to a 6.7 PB disk storage system through a non-blocking 10 Gbps internal network backbone built on two central network switches (NetIron MLXe-32). The disk storage comprises 102 RAID6 disk arrays (Infortrend DS S24F-G2840-4C16DO0), each served by a 1U file server over an 8G-FC connection to maximize file transfer throughput per unit of storage capacity. As of February 2013, 2560 CPU cores and 2.00 PB of disk storage had already been deployed for WLCG. The remaining non-grid resources, both CPUs and disk storage, are currently used as dedicated resources for data analysis by the ATLAS Japan collaborators. Since the non-grid hardware has the same architecture as the Tier2 resources, it can be migrated to extend the Tier2 resources on demand of the ATLAS experiment in the future. In addition to the upgrade of computing resources, we expect improved wide-area network connectivity: thanks to the Japanese NREN (NII), another 10 Gbps trans-Pacific line from Japan to Washington will become available in addition to the existing two 10 Gbps lines
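
    A small worked calculation from the figures quoted in the abstract above (cores per blade, HS06 per core, total disk over the number of arrays); no external data are used and the derived numbers are approximate.

      # Worked arithmetic from the figures quoted above.
      cores_total   = 624 * 16          # 624 blade servers, 16 cores each = 9984 cores
      hs06_per_core = 18.03             # HEPSPEC06 (SL6, 32-bit) per core
      arrays        = 102               # RAID6 disk arrays, one 1U file server each
      disk_total_pb = 6.7

      print("total cores          :", cores_total)                          # 9984
      print("aggregate HS06       :", round(cores_total * hs06_per_core))   # ~180,000
      print("disk per file server :", round(disk_total_pb * 1000 / arrays, 1), "TB")  # ~65.7 TB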

  7. Towards more stable operation of the Tokyo Tier2 center

    International Nuclear Information System (INIS)

    Nakamura, T; Mashimo, T; Matsui, N; Sakamoto, H; Ueda, I

    2014-01-01

    The Tokyo Tier2 center, located at the International Center for Elementary Particle Physics (ICEPP) of the University of Tokyo, was established as a regional analysis center in Japan for the ATLAS experiment. Official operation within WLCG started in 2007, after several years of development beginning in 2002. In December 2012, almost all hardware was replaced in the third system upgrade to cope with the further growth of ATLAS data to be analysed. The number of CPU cores was doubled (to 9984 cores in total), and the performance of each CPU core improved by 20% according to the HEPSPEC06 benchmark in 32-bit compile mode; the score is estimated as 18.03 (SL6) per core for the Intel Xeon E5-2680 at 2.70 GHz. Since each worker node is configured with 16 CPU cores, 624 blade servers were deployed in total. They are connected to a 6.7 PB disk storage system through a non-blocking 10 Gbps internal network backbone built on two central network switches (NetIron MLXe-32). The disk storage comprises 102 RAID6 disk arrays (Infortrend DS S24F-G2840-4C16DO0), each served by a 1U file server over an 8G-FC connection to maximize file transfer throughput per unit of storage capacity. As of February 2013, 2560 CPU cores and 2.00 PB of disk storage had already been deployed for WLCG. The remaining non-grid resources, both CPUs and disk storage, are currently used as dedicated resources for data analysis by the ATLAS Japan collaborators. Since the non-grid hardware has the same architecture as the Tier2 resources, it can be migrated to extend the Tier2 resources on demand of the ATLAS experiment in the future. In addition to the upgrade of computing resources, we expect improved wide-area network connectivity: thanks to the Japanese NREN (NII), another 10 Gbps trans-Pacific line from Japan to Washington will become available in addition to the existing two 10 Gbps lines

  8. Testing the Efficacy of a Tier 2 Mathematics Intervention: A Conceptual Replication Study

    Science.gov (United States)

    Doabler, Christian T.; Clarke, Ben; Kosty, Derek B.; Kurtz-Nelson, Evangeline; Fien, Hank; Smolkowski, Keith; Baker, Scott K.

    2016-01-01

    The purpose of this closely aligned conceptual replication study was to investigate the efficacy of a Tier 2 kindergarten mathematics intervention. The replication study differed from the initial randomized controlled trial on three important elements: geographical region, timing of the intervention, and instructional context of the…

  9. Explicit Instructional Interactions: Exploring the Black Box of a Tier 2 Mathematics Intervention

    Science.gov (United States)

    Doabler, Christian T.; Clarke, Ben; Stoolmiller, Mike; Kosty, Derek B.; Fien, Hank; Smolkowski, Keith; Baker, Scott K.

    2017-01-01

    A critical aspect of intervention research is investigating the active ingredients that underlie intensive interventions and their theories of change. This study explored the rate of instructional interactions within treatment groups to determine whether they offered explanatory power of an empirically validated Tier 2 kindergarten mathematics…

  10. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    International Nuclear Information System (INIS)

    Brun, R; Duellmann, D; Ganis, G; Janyst, L; Peters, A J; Rademakers, F; Sindrilaru, E; Hanushevsky, A

    2011-01-01

    The use of proxy caches has been extensively studied in the HEP environment for efficient access to database data and has shown significant performance benefits with only very moderate operational effort at the higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyse the possible performance gains, the operational impact on site services and the applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server, we review the cache efficiency and overheads for access patterns of typical ROOT-based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.
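
    The contribution analyses cache efficiency and overheads for typical ROOT-based access patterns. As a toy illustration of that kind of study (not the authors' code and not the XROOT proxy itself), the sketch below estimates the hit ratio of a simple file-level cache with LRU eviction over a synthetic access trace; the trace and file names are invented.

        from collections import OrderedDict

        def lru_hit_ratio(accesses, cache_slots):
            """Fraction of file accesses served from an LRU cache holding `cache_slots` files."""
            cache, hits = OrderedDict(), 0
            for f in accesses:
                if f in cache:
                    hits += 1
                    cache.move_to_end(f)          # refresh recency on a hit
                else:
                    cache[f] = True
                    if len(cache) > cache_slots:  # evict the least recently used file
                        cache.popitem(last=False)
            return hits / len(accesses)

        # Toy trace: a popular dataset re-read by many analysis jobs, plus some cold files.
        trace = ["popular_%d.root" % (i % 20) for i in range(200)] + \
                ["cold_%d.root" % i for i in range(50)]
        print(f"hit ratio with 25 cached files: {lru_hit_ratio(trace, 25):.2f}")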

  11. CMS Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...

  12. CMS offline web tools

    International Nuclear Information System (INIS)

    Metson, S; Newbold, D; Belforte, S; Kavka, C; Bockelman, B; Dziedziniewicz, K; Egeland, R; Elmer, P; Eulisse, G; Tuura, L; Evans, D; Fanfani, A; Feichtinger, D; Kuznetsov, V; Lingen, F van; Wakefield, S

    2008-01-01

    We describe a relatively new effort within CMS to converge on a set of web-based tools, using state-of-the-art industry techniques, to engage with the CMS offline computing system. CMS collaborators require tools to monitor various components of the computing system and to interact with the system itself. The current state of the various CMS web tools is described alongside currently planned developments. The CMS collaboration comprises nearly 3000 people from all over the world. As well as its collaborators, its computing resources are spread all over the globe and are accessed via the LHC grid to run analysis, large-scale production and data transfer tasks. Given the distributed nature of the collaboration, effective provision of collaborative tools is essential to maximise the physics exploitation of the CMS experiment, especially when the size of the CMS data set is considered. CMS has chosen to provide such tools over the world wide web as a top-level service, enabling all members of the collaboration to interact with the various offline computing components. Traditionally, web interfaces have been added in HEP experiments as an afterthought. In the CMS offline project we have decided to put web interfaces, and the development of a common CMS web framework, on an equal footing with the rest of the offline development. Tools exist within CMS to transfer and catalogue data (PhEDEx and DBS/DLS), run Monte Carlo production (ProdAgent) and submit analysis (CRAB). Effective human interfaces to these systems are required for users with different agendas and different practical knowledge of the systems to use the CMS computing system effectively. The CMS web tools project aims to provide a consistent interface to all these tools.

  13. CMS offline web tools

    Energy Technology Data Exchange (ETDEWEB)

    Metson, S; Newbold, D [H.H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Belforte, S; Kavka, C [INFN, Sezione di Trieste (Italy); Bockelman, B [University of Nebraska Lincoln, Lincoln, NE (United States); Dziedziniewicz, K [CERN, Geneva (Switzerland); Egeland, R [University of Minnesota Twin Cities, Minneapolis, MN (United States); Elmer, P [Princeton (United States); Eulisse, G; Tuura, L [Northeastern University, Boston, MA (United States); Evans, D [Fermilab MS234, Batavia, IL (United States); Fanfani, A [Universita degli Studi di Bologna (Italy); Feichtinger, D [PSI, Villigen (Switzerland); Kuznetsov, V [Cornell University, Ithaca, NY (United States); Lingen, F van [California Institute of Technology, Pasedena, CA (United States); Wakefield, S [Blackett Laboratory, Imperial College, London (United Kingdom)

    2008-07-15

    We describe a relatively new effort within CMS to converge on a set of web-based tools, using state-of-the-art industry techniques, to engage with the CMS offline computing system. CMS collaborators require tools to monitor various components of the computing system and to interact with the system itself. The current state of the various CMS web tools is described alongside currently planned developments. The CMS collaboration comprises nearly 3000 people from all over the world. As well as its collaborators, its computing resources are spread all over the globe and are accessed via the LHC grid to run analysis, large-scale production and data transfer tasks. Given the distributed nature of the collaboration, effective provision of collaborative tools is essential to maximise the physics exploitation of the CMS experiment, especially when the size of the CMS data set is considered. CMS has chosen to provide such tools over the world wide web as a top-level service, enabling all members of the collaboration to interact with the various offline computing components. Traditionally, web interfaces have been added in HEP experiments as an afterthought. In the CMS offline project we have decided to put web interfaces, and the development of a common CMS web framework, on an equal footing with the rest of the offline development. Tools exist within CMS to transfer and catalogue data (PhEDEx and DBS/DLS), run Monte Carlo production (ProdAgent) and submit analysis (CRAB). Effective human interfaces to these systems are required for users with different agendas and different practical knowledge of the systems to use the CMS computing system effectively. The CMS web tools project aims to provide a consistent interface to all these tools.

  14. Lowering of the YE+2 end-cap for CMS

    CERN Multimedia

    Maximilien Brice

    2006-01-01

    Lowering of the second end-cap disk of CMS, designated YE+2, took place on 12 December 2006. The huge disk, 15 m high and weighing around 900 tonnes, is equipped on both sides with muon detectors. The lowering operation started at around 7am and finished about 10 hours later with the arrival of the disk into the cavern 100 m below the surface hall.

  15. New CMS detectors under construction at CERN

    CERN Multimedia

    Katarina Anthony

    2012-01-01

    While the LHC will play the starring role in the 2013/2014 Long Shutdown (LS1), the break will also be a chance for its experiments to upgrade their detectors. CMS will be expanding its current muon detection systems, fitting 72 new cathode strip chambers (CSC) and 144 new resistive plate chambers (RPC) to the endcaps of the detector. These new chambers are currently under construction in Building 904.   CMS engineers install side panels on a CSC detector in Building 904. "The original RPC and CSC detectors were constructed in bits and pieces around the world," says Armando Lanaro, CSC construction co-ordinator. "But for the construction of these additional chambers, we decided to unify the assembly and testing into a single facility at CERN. There, CMS technicians, engineers and physicists are taking raw materials and transforming them into installation-ready detectors.” This new facility can be found in Building 904. Once the assembly site for the strai...

  16. 25 CFR 542.20 - What is a Tier A gaming operation?

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false What is a Tier A gaming operation? 542.20 Section 542.20 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.20 What is a Tier A gaming operation? A Tier A gaming operation is one with annual...

  17. 25 CFR 542.30 - What is a Tier B gaming operation?

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false What is a Tier B gaming operation? 542.30 Section 542.30 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.30 What is a Tier B gaming operation? A Tier B gaming operation is one with gross...

  18. 25 CFR 542.40 - What is a Tier C gaming operation?

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false What is a Tier C gaming operation? 542.40 Section 542.40 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.40 What is a Tier C gaming operation? A Tier C gaming operation is one with annual...

  19. The challenge of modelling nitrogen management at the field scale: simulation and sensitivity analysis of N2O fluxes across nine experimental sites using DailyDayCent

    International Nuclear Information System (INIS)

    Fitton, N; Datta, A; Hastings, A; Kuhnert, M; Smith, P; Topp, C F E; Cloy, J M; Rees, R M; Cardenas, L M; Williams, J R; Smith, K; Chadwick, D

    2014-01-01

    The United Kingdom currently reports nitrous oxide emissions from agriculture using the IPCC default Tier 1 methodology. However, Tier 1 estimates carry a large degree of uncertainty as they do not account for spatial variations in emissions. Biogeochemical models such as DailyDayCent (DDC) are therefore increasingly used to provide a spatially disaggregated assessment of annual emissions. Prior to use, the ability of the model to predict annual emissions should be assessed, coupled with an analysis of how model inputs influence model outputs and of whether the modelled estimates are more robust than those derived from the Tier 1 methodology. The aims of the study were (a) to evaluate whether the DailyDayCent model can accurately estimate annual N2O emissions across nine different experimental sites, (b) to examine its sensitivity to different soil and climate inputs across a number of experimental sites and (c) to examine the influence of uncertainty in the measured inputs on modelled N2O emissions. DailyDayCent performed well across the range of cropland and grassland sites, particularly for fertilized fields, indicating that it is robust for UK conditions. The sensitivity of the model varied across the sites and also between fertilizer/manure treatments. Overall, our results showed that the sensitivity of modelled N2O emissions was more strongly correlated with changes in soil pH and clay content than with the remaining input parameters used in this study. The lower the initial site values for soil pH and clay content, the more sensitive DDC was to changes from those initial values. When we compared modelled estimates with Tier 1 estimates for each site, we found that DailyDayCent provided a more accurate representation of the rate of annual emissions. (paper)
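
    The study relies on a one-at-a-time sensitivity analysis of modelled N2O emissions with respect to soil and climate inputs. The sketch below shows the generic structure of such an analysis around a black-box model function; the toy emission model, its inputs and the perturbation size are placeholders and are not the real DailyDayCent interface.

        def one_at_a_time_sensitivity(model, baseline, perturbation=0.10):
            """Normalised output change when each input is varied by +/- perturbation,
            holding all other inputs at their baseline values."""
            y0 = model(**baseline)
            result = {}
            for name, value in baseline.items():
                up = dict(baseline, **{name: value * (1 + perturbation)})
                down = dict(baseline, **{name: value * (1 - perturbation)})
                result[name] = (model(**up) - model(**down)) / (2 * perturbation * y0)
            return result

        # Placeholder emission model; it stands in for a DailyDayCent run, not the real code.
        def toy_n2o_model(soil_ph, clay_fraction, n_input):
            return 0.01 * n_input * (7.5 - soil_ph) * (1 - clay_fraction)

        baseline = {"soil_ph": 5.5, "clay_fraction": 0.2, "n_input": 150.0}
        for name, s in one_at_a_time_sensitivity(toy_n2o_model, baseline).items():
            print(f"{name:13s} normalised sensitivity = {s:+.2f}")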

  20. Enabling opportunistic resources for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Hufnagel, Dick [Fermilab

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity of the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources, i.e. resources not owned by, or a priori configured for, CMS, to meet peak demands. In addition to our dedicated resources, we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and Parrot wrappers are used to enable access and to bring the CMS environment onto these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far in using them.

  1. Search for supersymmetry with jets, missing transverse momentum, and a single tau at CMS

    International Nuclear Information System (INIS)

    Nowak, Friederike

    2012-06-01

    This thesis presents a search for physics beyond the Standard Model with jets, missing transverse momentum, and a single tau. It targets in particular a cosmologically favored region of Supersymmetry, the stau-LSP co-annihilation region, with an enhanced production of taus. The search is performed with data taken in 2011 by the CMS experiment at the LHC, corresponding to an integrated luminosity of 5 fb⁻¹. The background was divided into two contributions, one with real and one with fake taus, and both estimates were derived with data-driven techniques. The final measurement yields 28 events, while the number of background events was predicted to be 28.5 ± 2.6 (stat) ± 2.4 (syst); thus, no deviation from the Standard Model was found. As a result, exclusion limits on the supersymmetric cMSSM have been calculated. Data taking and distribution impose a challenge on the computing grid. To monitor the stability of the infrastructure, modules for Tier 2 operations within the HappyFace Project have been developed, tested, and put into use.
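
    The quoted result compares 28 observed events with a predicted background of 28.5 ± 2.6 (stat) ± 2.4 (syst). As a minimal illustration of that comparison (not the analysis code of the thesis), the sketch below combines the two uncertainty components in quadrature and quotes the resulting pull of the observation; the Poisson fluctuation of the observed count is ignored for simplicity.

        import math

        def pull(observed, expected, stat_unc, syst_unc):
            """(observed - expected) / total background uncertainty,
            with statistical and systematic components combined in quadrature."""
            total_unc = math.hypot(stat_unc, syst_unc)
            return (observed - expected) / total_unc, total_unc

        p, sigma = pull(observed=28, expected=28.5, stat_unc=2.6, syst_unc=2.4)
        print(f"total background uncertainty = {sigma:.1f} events, pull = {p:+.2f}")
        # roughly 3.5 events and a pull of about -0.1: no deviation from the Standard Model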

  2. Search for supersymmetry with jets, missing transverse momentum, and a single tau at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Nowak, Friederike

    2012-06-15

    This thesis presents a search for physics beyond the Standard Model with jets, missing transverse momentum, and a single tau. It targets in particular a cosmologically favored region of Supersymmetry, the stau-LSP co-annihilation region, with an enhanced production of taus. The search is performed with data taken in 2011 by the CMS experiment at the LHC, corresponding to an integrated luminosity of 5 fb⁻¹. The background was divided into two contributions, one with real and one with fake taus, and both estimates were derived with data-driven techniques. The final measurement yields 28 events, while the number of background events was predicted to be 28.5 ± 2.6 (stat) ± 2.4 (syst); thus, no deviation from the Standard Model was found. As a result, exclusion limits on the supersymmetric cMSSM have been calculated. Data taking and distribution impose a challenge on the computing grid. To monitor the stability of the infrastructure, modules for Tier 2 operations within the HappyFace Project have been developed, tested, and put into use.

  3. Quality Assurance Tests of the CMS Endcap RPCs

    CERN Document Server

    Ahmed, Ijaz; Hamid Ansari, M; Irfan Asghar, M; Asghar, Sajjad; Awan, Irfan Ullah; Butt, Jamila; Hoorani, Hafeez R; Hussain, Ishtiaq; Khurshid, Taimoor; Muhammad, Saleh; Shahzad, Hassan; Aftab, Zia; Iftikhar, Mian; Khan, Mohammad Khalid; Saleh, M

    2008-01-01

    In this note, we describe the quality assurance tests performed for endcap Resistive Plate Chambers (RPCs) at two different sites in Pakistan, the Pakistan Atomic Energy Commission (PAEC) and the National Centre for Physics (NCP). The tests were carried out both at the level of the gas gaps and of the assembled chambers. The data were obtained in different time windows during the large-scale production of CMS RPCs of the RE2/2 and RE2/3 type. In the quality assurance tests, we investigated parameters such as dark current, strip occupancy, cluster size and efficiency of the RPCs.

  4. 42 CFR 403.248 - Administrative review of CMS determinations.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Administrative review of CMS determinations. 403... Certification Program: General Provisions § 403.248 Administrative review of CMS determinations. (a) This section provides for administrative review if CMS determines— (1) Not to certify a policy; or (2) That a...

  5. 40 CFR 141.203 - Tier 2 Public Notice-Form, manner, and frequency of notice.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Tier 2 Public Notice-Form, manner, and frequency of notice. 141.203 Section 141.203 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...., house renters, apartment dwellers, university students, nursing home patients, prison inmates, etc...

  6. Effects of a Tier 3 Self-Management Intervention Implemented with and without Treatment Integrity

    Science.gov (United States)

    Lower, Ashley; Young, K. Richard; Christensen, Lynnette; Caldarella, Paul; Williams, Leslie; Wills, Howard

    2016-01-01

    This study investigated the effects of a Tier 3 peer-matching self-management intervention on two elementary school students who had previously been less responsive to Tier 1 and Tier 2 interventions. The Tier 3 self-management intervention, which was implemented in the general education classrooms, included daily electronic communication between…

  7. The Two-Tier Fecal Occult Blood Test: Cost-Effective Screening

    Directory of Open Access Journals (Sweden)

    Andrew J Rae

    1994-01-01

    The two-tier test is a strategy combining the HO Sensa and Hemeselect fecal occult blood tests (FOBTs) with the aim of greater specificity and consequent economic advantages. If patients register a positive result on any HO Sensa guaiac test, they are tested again with the hemoglobin-specific Hemeselect test. This concept was applied in a multicentre study involving persons 40 years or older. One component of the study enrolled 573 high-risk patients, while the second arm recruited an additional 1301 patients (52% asymptomatic/48% symptomatic) stratified according to personal history and symptoms. The two-tier test produced fewer false positives than the traditional tests in both groups evaluated in the study. In the high-risk group, specificity was higher (88.7% for two-tier versus 80.6% for Hemoccult and 69.5% for HO Sensa) and the false positive rate was lower (11.3% for two-tier versus 19.5% for Hemoccult and 30.5% for HO Sensa) for the two-tier test than for the Hemoccult and HO Sensa FOBTs (95% CI for all colorectal cancers [CRCs] and polyps greater than 1 cm, α=0.05). No significant differences in sensitivity were observed between tests in the same group. Also, in the high-risk group, the benefits of the two-tier test outweighed the costs. Due to the small number of cancers and polyps in the second arm of the study, the presentation of the data is meant to be descriptive and representative of trends in a 'normal' population. Nevertheless, the specificity of the two-tier test was higher (96.8% for two-tier versus 87.2% for Hemoccult and 69.5% for HO Sensa) and the false positive rate lower (3.2% for two-tier versus 12.8% for Hemoccult and 22.3% for HO Sensa) than for either the Hemoccult or HO Sensa FOBT (95% CI for all CRCs and polyps greater than 1 cm). This initial study, focusing on the cost-benefit relationship of increased specificity, represents a new way of economically evaluating existing FOBTs.
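
    The comparison throughout rests on specificity and its complement, the false positive rate. The sketch below only illustrates that relation (specificity = TN / (TN + FP), false positive rate = 1 - specificity) with counts invented to reproduce the high-risk-group percentages quoted above; the counts are not data from the study.

        def specificity(true_negatives: int, false_positives: int) -> float:
            """Specificity = TN / (TN + FP); the false positive rate is its complement."""
            return true_negatives / (true_negatives + false_positives)

        # Illustrative counts chosen to reproduce the quoted high-risk-group percentages.
        tests = {
            "two-tier":  {"tn": 887, "fp": 113},   # ~88.7% specificity
            "Hemoccult": {"tn": 806, "fp": 194},   # ~80.6%
            "HO Sensa":  {"tn": 695, "fp": 305},   # ~69.5%
        }
        for name, counts in tests.items():
            spec = specificity(counts["tn"], counts["fp"])
            print(f"{name:9s} specificity {spec:.1%}, false positive rate {1 - spec:.1%}")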

  8. Mixed Matrix Carbon Molecular Sieve and Alumina (CMS-Al2O3) Membranes.

    Science.gov (United States)

    Song, Yingjun; Wang, David K; Birkett, Greg; Martens, Wayde; Duke, Mikel C; Smart, Simon; Diniz da Costa, João C

    2016-07-29

    This work presents mixed-matrix inorganic membranes prepared by the vacuum-assisted impregnation method, in which phenolic resin precursors fill the pores of α-alumina substrates. Upon carbonisation, the phenolic resin decomposed into several fragments derived from the backbone of the resin matrix. The final stages of decomposition (>650 °C) led to the formation of carbon molecular sieve (CMS) structures, reaching the lowest average pore sizes of ~5 Å at carbonisation temperatures of 700 °C. The combination of vacuum-assisted impregnation and carbonisation led to the formation of a mixed matrix of CMS and α-alumina particles (CMS-Al2O3) in a single membrane. These membranes were tested for pervaporative desalination and gave very high water fluxes of up to 25 kg m⁻² h⁻¹ for seawater (NaCl 3.5 wt%) at 75 °C. Salt rejection was also very high, varying between 93% and 99% depending on the temperature and the feed salt concentration. Interestingly, the water fluxes remained almost constant and were not affected as the feed salt concentration increased from 0.3 to 1 and 3.5 wt%.
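
    The desalination performance is reported as a water flux (mass of permeate per membrane area and time) and a salt rejection. The sketch below shows the standard definitions of those two figures of merit; the numerical inputs are invented for illustration and are not measurements from the paper.

        def water_flux(permeate_mass_kg: float, area_m2: float, hours: float) -> float:
            """Pervaporation water flux in kg per square metre per hour."""
            return permeate_mass_kg / (area_m2 * hours)

        def salt_rejection(feed_conc: float, permeate_conc: float) -> float:
            """Salt rejection R = 1 - Cp/Cf, expressed as a fraction."""
            return 1.0 - permeate_conc / feed_conc

        # Invented example values, roughly in the range discussed above.
        flux = water_flux(permeate_mass_kg=0.125, area_m2=0.005, hours=1.0)  # 25 kg/m2/h
        rejection = salt_rejection(feed_conc=35.0, permeate_conc=0.7)        # g/L NaCl
        print(f"flux = {flux:.0f} kg/m2/h, rejection = {rejection:.0%}")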

  9. The CMS electromagnetic calorimeter and the search for the Higgs boson in the decay channel H → WW* → 2e2ν

    Energy Technology Data Exchange (ETDEWEB)

    Rovelli, I.Ch

    2006-01-15

    CMS is one of the four experiments that will take data at the LHC. A large part of my work was devoted to the development of electron reconstruction tools aimed at improving the Higgs boson discovery potential in the H → WW* → 2e2ν channel. A major role in the electron reconstruction is played by the electromagnetic calorimeter ECAL, a homogeneous calorimeter made of scintillating PbWO4 crystals. The first three chapters give an overview of the LHC and CMS. In chapter 4 the analysis of the data collected during the 2003 electromagnetic calorimeter test beam is presented. First, the problem of intercalibration at the test beam is addressed. This is a major task, since the precision of the intercalibration directly affects the constant term of the energy resolution, for which the CMS goal is a precision better than 0.5%. The good initial intercalibration could, however, be spoiled during data taking by the effects of radiation on the crystals, which can change the relative responses of the channels. A laser monitoring system is foreseen in CMS, and the possibility to check the calibration stability and to correct the changes in response with a precision within the required limits is demonstrated. Chapter 5 describes electron reconstruction and identification in CMS. A crucial problem for electron reconstruction is the bremsstrahlung emission in the tracker. A tracking procedure dealing with the bremsstrahlung energy loss is discussed. Together with an improvement in the reconstruction efficiency, the procedure allows electrons with a small fraction of radiated energy to be identified, which can be usefully exploited for the ECAL calibration. The developed algorithms are applied in chapter 6, which presents the study of the CMS discovery potential for the Higgs boson in the H → WW* → 2e2ν channel. This is the discovery channel in the mass range between 2mW and 2mZ. Here

  10. 42 CFR 405.1834 - CMS reviewing official procedure.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false CMS reviewing official procedure. 405.1834 Section... Determinations and Appeals § 405.1834 CMS reviewing official procedure. (a) Scope. A provider that is a party to... Administrator by a designated CMS reviewing official who considers whether the decision of the intermediary...

  11. CMS (Compact Muon Solenoid)

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The milestone workshops on LHC experiments in Aachen in 1990 and at Evian in 1992 provided the first sketches of how LHC detectors might look. The concept of a compact general-purpose LHC experiment based on a solenoid to provide the magnetic field was first discussed at Aachen, and the formal Expression of Interest was aired at Evian. It was here that the Compact Muon Solenoid (CMS) name first became public. Optimizing first the muon detection system is a natural starting point for a high luminosity (interaction rate) proton-proton collider experiment. The compact CMS design called for a strong magnetic field, of some 4 Tesla, using a superconducting solenoid, originally about 14 metres long and 6 metres bore. (By LHC standards, this warrants the adjective 'compact'.) The main design goals of CMS are: 1 - a very good muon system providing many possibilities for momentum measurement (physicists call this a 'highly redundant' system); 2 - the best possible electromagnetic calorimeter consistent with the above; 3 - high quality central tracking to achieve both the above; and 4 - an affordable detector. Overall, CMS aims to detect cleanly the diverse signatures of new physics by identifying and precisely measuring muons, electrons and photons over a large energy range at very high collision rates, while also exploiting the lower luminosity initial running. As well as proton-proton collisions, CMS will also be able to look at the muons emerging from LHC heavy ion beam collisions. The Evian CMS conceptual design foresaw the full calorimetry inside the solenoid, with emphasis on precision electromagnetic calorimetry for picking up photons. (A light Higgs particle will probably be seen via its decay into photon pairs.) The muon system now foresaw four stations. Inner tracking would use silicon microstrips and microstrip gas chambers, with over 10⁷ channels offering high track finding efficiency. In the central CMS barrel, the tracking elements are

  12. CERN Open Days 2013, Point 5 - CMS: CMS Experiment

    CERN Multimedia

    CERN Photolab

    2013-01-01

    Stand description: Come to LHC's Point 5 and visit the Compact Muon Solenoid (CMS) experiment that discovered the Higgs boson! Descend 100 metres underground and take a walk in the cathedral-sized cavern housing the 14,000-tonne CMS detector. Ask Higgs hunters and other scientists just about anything, be it questions about their work, particle physics or the engineering challenges of building CMS. On the surface (no restricted access), Point 5 will be abuzz all day long with activities for all ages, including literally "cool" cryogenics shows featuring the world's fastest ice-cream maker, dance performances, and much more.

  13. CMS Thesis Award

    CERN Multimedia

    2004-01-01

    The 2003 CMS thesis award was presented to Riccardo Ranieri on 15 March for his Ph.D. thesis "Trigger Selection of WH → μ ν b bbar with CMS", where 'WH → μ ν b bbar' represents the associated production of the W boson and the Higgs boson and their subsequent decays. Riccardo received his Ph.D. from the University of Florence and was supervised by Carlo Civinini. In total nine theses were nominated for the award, which was judged on originality, impact within the field of high energy physics, impact within CMS and clarity of writing. Gregory Snow, secretary of the awarding committee, explains why Riccardo's thesis was chosen: "The search for the Higgs boson is one of the main physics goals of CMS. Riccardo's thesis helps the experiment to formulate the strategy which will be used in that search." Lorenzo Foà, Chairperson of the CMS Collaboration Board, presented Riccardo with a commemorative engraved plaque. He will also receive the opportunity to...

  14. Dose-ranging pharmacokinetics of colistin methanesulphonate (CMS) and colistin in rats following single intravenous CMS doses.

    Science.gov (United States)

    Marchand, Sandrine; Lamarche, Isabelle; Gobin, Patrice; Couet, William

    2010-08-01

    The aim of this study was to evaluate the effect of colistin methanesulphonate (CMS) dose on CMS and colistin pharmacokinetics in rats. Three rats per group received an intravenous bolus of CMS at a dose of 5, 15, 30, 60 or 120 mg/kg. Arterial blood samples were drawn at 0, 5, 15, 30, 60, 90, 120, 150 and 180 min. CMS and colistin plasma concentrations were determined by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The pharmacokinetic parameters of CMS and colistin were calculated by non-compartmental analysis. Linear relationships were observed between CMS and colistin AUCs to infinity and CMS doses, as well as between CMS and colistin Cmax and CMS doses. CMS and colistin pharmacokinetics were linear for a range of colistin concentrations covering the range of values encountered and recommended in patients even during treatment with higher doses.
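
    The pharmacokinetic parameters are obtained by non-compartmental analysis. The sketch below shows the usual building blocks of such an analysis: Cmax read off the profile, AUC to the last sample by the linear trapezoidal rule, and extrapolation to infinity using a terminal rate constant fitted to the last few points. The concentration profile is invented; only the method is illustrated, not the study's data.

        import math

        def nca(times, concs):
            """Minimal non-compartmental analysis: Cmax, AUC(0-tlast) by the linear
            trapezoidal rule, and AUC(0-inf) via the terminal elimination constant
            estimated from a log-linear fit through the last three samples."""
            cmax = max(concs)
            auc_last = sum((t2 - t1) * (c1 + c2) / 2.0
                           for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))
            xs, ys = times[-3:], [math.log(c) for c in concs[-3:]]
            n = len(xs)
            slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
                    (n * sum(x * x for x in xs) - sum(xs) ** 2)
            lambda_z = -slope
            return cmax, auc_last, auc_last + concs[-1] / lambda_z

        # Invented plasma concentration profile (mg/L) at the sampling times (min) listed above.
        t = [5, 15, 30, 60, 90, 120, 150, 180]
        c = [40.0, 32.0, 24.0, 14.0, 8.0, 4.6, 2.6, 1.5]
        cmax, auc_last, auc_inf = nca(t, c)
        print(f"Cmax = {cmax} mg/L, AUC(0-tlast) = {auc_last:.0f}, AUC(0-inf) = {auc_inf:.0f} mg*min/L")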

  15. Tiered Approach to Resilience Assessment.

    Science.gov (United States)

    Linkov, Igor; Fox-Lent, Cate; Read, Laura; Allen, Craig R; Arnott, James C; Bellini, Emanuele; Coaffee, Jon; Florin, Marie-Valentine; Hatfield, Kirk; Hyde, Iain; Hynes, William; Jovanovic, Aleksandar; Kasperson, Roger; Katzenberger, John; Keys, Patrick W; Lambert, James H; Moss, Richard; Murdoch, Peter S; Palma-Oliveira, Jose; Pulwarty, Roger S; Sands, Dale; Thomas, Edward A; Tye, Mari R; Woods, David

    2018-04-25

    Regulatory agencies have long adopted a three-tier framework for risk assessment. We build on this structure to propose a tiered approach for resilience assessment that can be integrated into the existing regulatory processes. Comprehensive approaches to assessing resilience at appropriate and operational scales, reconciling analytical complexity as needed with stakeholder needs and resources available, and ultimately creating actionable recommendations to enhance resilience are still lacking. Our proposed framework consists of tiers by which analysts can select resilience assessment and decision support tools to inform associated management actions relative to the scope and urgency of the risk and the capacity of resource managers to improve system resilience. The resilience management framework proposed is not intended to supplant either risk management or the many existing efforts of resilience quantification method development, but instead provide a guide to selecting tools that are appropriate for the given analytic need. The goal of this tiered approach is to intentionally parallel the tiered approach used in regulatory contexts so that resilience assessment might be more easily and quickly integrated into existing structures and with existing policies. Published 2018. This article is a U.S. government work and is in the public domain in the USA.

  16. Estimating Implementation and Operational Costs of an Integrated Tiered CD4 Service including Laboratory and Point of Care Testing in a Remote Health District in South Africa

    Science.gov (United States)

    Cassim, Naseem; Coetzee, Lindi M.; Schnippel, Kathryn; Glencross, Deborah K.

    2014-01-01

    Background: An integrated tiered service delivery model (ITSDM) has been proposed to provide 'full-coverage' of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and the number of referring health facilities, ranging from (1) a Tier-1/decentralized point-of-care service (POC) in a single site, through Tier-2/POC-hubs, to laboratories processing 600 samples/day and serving >100 or >200 health clinics, respectively. The objective of this study was to establish the costs of the existing service and of ITSDM tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Methods: Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to the locations of all referring clinics and the related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate Data Warehouse for the period April 2012 to March 2013. Tiers were costed separately (as a cost-per-result), including equipment, staffing, reagents and test consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. Results: The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37 respectively), but with a related increased LTR-TAT of >24–48 hours. Full service coverage with TAT cost-per-result of $32.32 and $15.88 respectively. A single district Tier-3 laboratory also ensured 'full service coverage'. Implementing a single Tier-3/community laboratory to extend and improve the delivery of services in Pixley-ka-Seme, with an estimated local ∼12–24-hour LTR-TAT, is ∼$2 more per test than the existing referred services, but 2–4 fold cheaper than implementing eight Tier-2/POC-hubs or providing twenty-seven Tier-1/POCT CD4 services. PMID:25517412
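
    Each tier is costed as a cost-per-result built from equipment, staffing, reagent and consumable costs, followed by a one-way sensitivity analysis on reagent price, test volumes and personnel time. The fragment below sketches that style of calculation with made-up inputs; the cost categories follow the abstract, but none of the numbers are the study's.

        def cost_per_result(annual_tests, equipment_annualised, staffing,
                            reagent_per_test, consumable_per_test):
            """Cost per CD4 result = (fixed annual costs + per-test costs * volume) / volume."""
            fixed = equipment_annualised + staffing
            variable = (reagent_per_test + consumable_per_test) * annual_tests
            return (fixed + variable) / annual_tests

        # Made-up inputs for two hypothetical tiers (all values in USD).
        base = dict(equipment_annualised=20_000, staffing=45_000,
                    reagent_per_test=4.0, consumable_per_test=0.8)
        for label, volume in [("low-volume POC site", 3_000), ("district laboratory", 60_000)]:
            print(f"{label:20s}: ${cost_per_result(volume, **base):.2f} per result")

        # One-way sensitivity: raise only the reagent price by 20%.
        dearer = dict(base, reagent_per_test=base["reagent_per_test"] * 1.2)
        print(f"district laboratory, reagents +20%: ${cost_per_result(60_000, **dearer):.2f} per result")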

  17. CP violation in CMS expected performance

    CERN Document Server

    Stefanescu, J

    1999-01-01

    The CMS experiment can contribute significantly to the measurement of the CP violation asymmetries. A recent evaluation of the expected precision on the CP violation parameter sin 2β in the channel B_d^0 → J/ψ K_S^0 has been performed using a simulation of the CMS tracker including full pattern recognition. CMS has also studied the possibility to observe CP violation in the decay channel B_s^0 → J/ψ φ. The results of these studies are reviewed. (7 refs).

  18. The CMS electromagnetic calorimeter and the search for the Higgs boson in the decay channel H → WW* → 2e2ν

    International Nuclear Information System (INIS)

    Rovelli, I.Ch.

    2006-01-01

    CMS is one of the four experiments that will take data at the LHC. A large part of my work was devoted to the development of electron reconstruction tools aimed at improving the Higgs boson discovery potential in the H → WW* → 2e2ν channel. A major role in the electron reconstruction is played by the electromagnetic calorimeter ECAL, a homogeneous calorimeter made of scintillating PbWO4 crystals. The first three chapters give an overview of the LHC and CMS. In chapter 4 the analysis of the data collected during the 2003 electromagnetic calorimeter test beam is presented. First, the problem of intercalibration at the test beam is addressed. This is a major task, since the precision of the intercalibration directly affects the constant term of the energy resolution, for which the CMS goal is a precision better than 0.5%. The good initial intercalibration could, however, be spoiled during data taking by the effects of radiation on the crystals, which can change the relative responses of the channels. A laser monitoring system is foreseen in CMS, and the possibility to check the calibration stability and to correct the changes in response with a precision within the required limits is demonstrated. Chapter 5 describes electron reconstruction and identification in CMS. A crucial problem for electron reconstruction is the bremsstrahlung emission in the tracker. A tracking procedure dealing with the bremsstrahlung energy loss is discussed. Together with an improvement in the reconstruction efficiency, the procedure allows electrons with a small fraction of radiated energy to be identified, which can be usefully exploited for the ECAL calibration. The developed algorithms are applied in chapter 6, which presents the study of the CMS discovery potential for the Higgs boson in the H → WW* → 2e2ν channel. This is the discovery channel in the mass range between 2mW and 2mZ. Here the possibility to extend the study also to the

  19. The CMS trigger in Run 2

    CERN Document Server

    Tosi, Mia

    2018-01-01

    During its second period of operation (Run 2), which started in 2015, the LHC will reach a peak instantaneous luminosity of approximately 2×10³⁴ cm⁻²s⁻¹ with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realised by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has undergone a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT went through big improvements; in particular, new ap...

  20. Genome-wide comparative transcriptome analysis of CMS-D2 and its maintainer and restorer lines in upland cotton.

    Science.gov (United States)

    Wu, Jianyong; Zhang, Meng; Zhang, Bingbing; Zhang, Xuexian; Guo, Liping; Qi, Tingxiang; Wang, Hailin; Zhang, Jinfa; Xing, Chaozhu

    2017-06-08

    Cytoplasmic male sterility (CMS) conferred by the cytoplasm from Gossypium harknessii (D2) is an important system for hybrid seed production in Upland cotton (G. hirsutum). The male sterility of CMS-D2 (i.e., A line) can be restored to fertility by a restorer (i.e., R line) carrying the restorer gene Rf1 transferred from the D2 nuclear genome. However, the molecular mechanisms of CMS-D2 and its restoration are poorly understood. In this study, a genome-wide comparative transcriptome analysis was performed to identify differentially expressed genes (DEGs) in flower buds among the isogenic fertile R line and sterile A line derived from a backcross population (BC8F1) and the recurrent parent, i.e., the maintainer (B line). A total of 1464 DEGs were identified among the three isogenic lines, and the Rf1-carrying Chr_D05 and its homeologous Chr_A05 had more DEGs than other chromosomes. The results of GO and KEGG enrichment analysis showed differences in circadian rhythm between the fertile and sterile lines. Eleven DEGs were selected for validation using qRT-PCR, confirming the accuracy of the RNA-seq results. Through genome-wide comparative transcriptome analysis, the differential expression profiles of CMS-D2 and its maintainer and restorer lines in Upland cotton were identified. Our results provide an important foundation for further studies into the molecular mechanisms of the interactions between the restorer gene Rf1 and the CMS-D2 cytoplasm.

  1. Operational experience with the GEM detector assembly lines for the CMS forward muon upgrade

    CERN Document Server

    Vai, Ilaria

    2017-01-01

    The CMS Collaboration has been developing large-area Triple-GEM detectors to be installed in the muon endcap regions of the CMS experiment in 2019 to maintain forward muon trigger and tracking performance at the HL-LHC. Ten pre-production detectors were built at CERN to commission the first assembly line and the quality controls. These were installed in the CMS detector in early 2017 and are currently participating in the 2017 LHC run. The collaboration has prepared several additional assembly and quality control lines for distributed mass production of 160 GEM detectors at various sites worldwide. During 2017, these additional production sites have been optimizing construction techniques and quality control procedures and validating them against common specifications by constructing additional pre-production detectors. Using the specific experience from one production site as an example, we discuss how the quality controls make use of independent hardware and trained personnel to ensure fast and reliable pro...

  2. Analysis of the Conceptual Understanding Level of Grade XI Science Students at SMAN 3 Mataram Using One-Tier and Two-Tier Tests on Solubility and Solubility Product

    OpenAIRE

    Nabilah, Nabilah; Andayani, Yayuk; Laksmiwati, Dwi

    2013-01-01

    The objective of this research was to analyze the conceptual understanding level of grade XI science students of SMAN 3 Mataram by using one-tier and two-tier tests on the subject of solubility and solubility product. The one-tier test was administered to grade XI IPA 4 students and the two-tier test to grade XI IPA 5 students. The level of conceptual understanding measured with the one-tier test (57.4%) was higher than with the two-tier test (21.03%). The one-tier test only showed the students' conceptual understanding, whereas ...

  3. Beam test results of the first full-scale prototype of CMS RE 1/2 resistive plate chamber

    International Nuclear Information System (INIS)

    Ying Jun; Ban Yong; Ye Yanlin; Cai Jianxin; Qian Sijin; Wang Quanjin; Liu Hongtao

    2005-01-01

    The authors report the muon beam test results of the first full-scale prototype of the CMS RE 1/2 Resistive Plate Chamber (RPC). The bakelite surface is treated with a special oil-free technology to make it sufficiently smooth. The full-scale RE 1/2 RPC with its honeycomb supporting frame is strong and thin enough to fit the limited space foreseen in the CMS design for the inner forward RPC. The muon beam test was performed at the CERN Gamma Irradiation Facility (GIF). The detection efficiency of this full-scale RPC prototype is >95% even under a very high irradiation background. The time resolution (less than 1.2 ns) and spatial resolution are satisfactory for the muon trigger device in the future CMS experiment. The noise rate is also calculated and discussed.

  4. CMS Central Hadron Calorimeter

    OpenAIRE

    Budd, Howard S.

    2001-01-01

    We present a description of the CMS central hadron calorimeter. We describe the production of the 1996 CMS hadron testbeam module. We show the results of the quality control tests of the testbeam module. We present some results of the 1995 CMS hadron testbeam.

  5. CMS Records Schedule

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Records Schedule provides disposition authorizations approved by the National Archives and Records Administration (NARA) for CMS program-related records...

  6. CMS analysis school model

    International Nuclear Information System (INIS)

    Malik, S; Bloom, K; Shipsey, I; Cavanaugh, R; Klima, B; Chan, Kai-Feng; D'Hondt, J; Narain, M; Palla, F; Rolandi, G; Schörner-Sadenius, T

    2014-01-01

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the earlier training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized by CMSDAS, has proven to be key for new and young physicists to jump-start their work and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS events around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase the engagement of its myriad talents in the development of physics, service, upgrades, the education of those new to CMS and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.

  7. CMS Analysis School Model

    Energy Technology Data Exchange (ETDEWEB)

    Malik, S. [Nebraska U.; Shipsey, I. [Purdue U.; Cavanaugh, R. [Illinois U., Chicago; Bloom, K. [Nebraska U.; Chan, Kai-Feng [Taiwan, Natl. Taiwan U.; D' Hondt, J. [Vrije U., Brussels; Klima, B. [Fermilab; Narain, M. [Brown U.; Palla, F. [INFN, Pisa; Rolandi, G. [CERN; Schörner-Sadenius, T. [DESY

    2014-01-01

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the earlier training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized by CMSDAS, has proven to be key for new and young physicists to jump-start their work and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS events around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase the engagement of its myriad talents in the development of physics, service, upgrades, the education of those new to CMS and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.

  8. 23 CFR 971.214 - Federal lands congestion management system (CMS).

    Science.gov (United States)

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false Federal lands congestion management system (CMS). 971... Federal lands congestion management system (CMS). (a) For purposes of this section, congestion means the...) Develop criteria to determine when a CMS is to be implemented for a specific FH; and (2) Have CMS coverage...

  9. CMS geometry through 2020

    International Nuclear Information System (INIS)

    Osborne, I; Brownson, E; Eulisse, G; Jones, C D; Sexton-Kennedy, E; Lange, D J

    2014-01-01

    CMS faces real challenges with the upgrade of the CMS detector through 2020 and beyond. One of the challenges, from the software point of view, is managing upgrade simulations within the same software release as the 2013 scenario. We present the CMS geometry description software model and its integration with the CMS event setup and core software. The CMS geometry configuration and selection are implemented in Python; the tools collect the Python configuration fragments into a script used in the CMS workflow. This flexible and automated geometry configuration allows choosing either the transient or the persistent version of a scenario, as well as a specific version of that scenario. We describe how the geometries are integrated and validated, and how we define and handle different geometry scenarios in simulation and reconstruction. We discuss how to transparently manage multiple incompatible geometries in the same software release. Several examples are shown, based on the current implementation, ensuring a consistent choice of scenario conditions. The consequences and implications for multiple/different code algorithms are discussed.
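
    The abstract notes that geometry configuration and scenario selection are implemented in Python, with configuration fragments collected into a script used in the workflow. The snippet below is a generic illustration of that pattern, a registry of named scenarios mapped to lists of configuration fragments; it is not the actual CMSSW geometry API, and all scenario and module names are invented.

        # Hypothetical registry mapping a geometry scenario key to the configuration
        # fragments it needs; illustrative only, not the real CMS geometry tooling.
        GEOMETRY_SCENARIOS = {
            "Run1_2013":       ["Geometry.Tracker2013_cff", "Geometry.Muon2013_cff"],
            "Upgrade2017":     ["Geometry.TrackerPhase1_cff", "Geometry.Muon2013_cff"],
            "Upgrade2023_HGC": ["Geometry.TrackerPhase2_cff", "Geometry.HGCal_cff"],
        }

        def build_geometry_script(scenario: str, persistent: bool = False) -> str:
            """Collect the fragments of one scenario into a single load script."""
            try:
                fragments = GEOMETRY_SCENARIOS[scenario]
            except KeyError:
                raise ValueError(f"unknown geometry scenario {scenario!r}") from None
            source = "DB" if persistent else "XML"   # persistent vs transient description
            lines = [f"# geometry scenario: {scenario} (source: {source})"]
            lines += [f"process.load({frag!r})" for frag in fragments]
            return "\n".join(lines)

        print(build_geometry_script("Upgrade2017"))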

  10. CMS Program Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Office of Enterprise Data and Analytics has developed CMS Program Statistics, which includes detailed summary statistics on national health care, Medicare...

  11. CMS Drug Spending

    Data.gov (United States)

    U.S. Department of Health & Human Services — CMS has released several information products that provide spending information for prescription drugs in the Medicare and Medicaid programs. The CMS Drug Spending...

  12. CMS Awards

    CERN Multimedia

    2004-01-01

    Ali Mohammad Rafiee receives the CMS Gold Award from Michel Della Negra of CMS. As part of the fifth annual CMS Awards, Iranian contractor HEPCO, located in Arak, an industrial town 200 km west of Tehran, received their Gold Award in a ceremony held on 14 June 2004 (the other award winners were reported in bulletin 13/2004). The Awards are given each year to a small number of the approximately one thousand contractors working on the CMS project. Gold Awards are given for outstanding technical achievement in work carried out for the detector. HEPCO received the Award for the excellent quality of their work in constructing two 25-tonne support tables, two 75-tonne shields (FCS) and eight supporting brackets to lower the HF into the cavern. Welds and machining achieved tolerances that are very difficult to obtain in structures of that size. Mr. A. M. Rafiee, the General Manager of the company, acknowledged the benefits of this collaboration, and thanked the many staff involved for their efforts and skills.

  13. Hiding the Complexity: Building a Distributed ATLAS Tier-2 with a Single Resource Interface using ARC Middleware

    International Nuclear Information System (INIS)

    Purdie, S; Stewart, G; Skipsey, S; Washbrook, A; Bhimji, W; Filipcic, A; Kenyon, M

    2011-01-01

    Since their inception, Grids for high energy physics have found the management of data to be the most challenging aspect of operations. This problem has generally been tackled by the experiment's data management framework controlling in fine detail the distribution of data around the grid and carefully brokering jobs to sites with co-located data. This approach, however, presents experiments with a difficult and complex system to manage, as well as introducing a rigidity into the framework which is very far from the original conception of the grid. In this paper we describe how the ScotGrid distributed Tier-2, which has sites in Glasgow, Edinburgh and Durham, was presented to ATLAS as a single, unified resource using the ARC middleware stack. In this model the ScotGrid 'data store' is hosted at Glasgow and presented as a single ATLAS storage resource. As jobs are taken from the ATLAS PanDA framework, they are dispatched to the computing cluster with the fastest response time. An ARC compute element at each site then asynchronously stages the data from the data store into a local cache hosted at each site. The job is then launched in the batch system and accesses the data locally. We discuss the merits of this system compared to other operational models, from the point of view of both the resource providers (sites) and the resource consumers (experiments), and consider the issues involved in transitions to this model.

  14. Validation of software releases for CMS

    International Nuclear Information System (INIS)

    Gutsche, Oliver

    2010-01-01

    The CMS software stack currently consists of more than 2 million lines of code developed by over 250 authors, with a new version being released every week. CMS has set up a validation process for quality assurance which enables the developers to compare the performance of a release to previous releases and references. The validation process provides the developers with reconstructed datasets of real data and MC samples. The samples span the whole range of detector effects and important physics signatures to benchmark the performance of the software. They are used to investigate interdependency effects of all CMS software components and to find and fix bugs. The release validation process described here is an integral part of CMS software development and contributes significantly to ensuring stable production and analysis. It represents a sizable contribution to the overall MC production of CMS. Its success emphasizes the importance of a streamlined release validation process for projects with a large code base and a significant number of developers, and it can serve as a model for future projects.

  15. Job life cycle management libraries for CMS workflow management projects

    International Nuclear Information System (INIS)

    Lingen, Frank van; Wilkinson, Rick; Evans, Dave; Foulkes, Stephen; Afaq, Anzar; Vaandering, Eric; Ryu, Seangchan

    2010-01-01

    Scientific analysis and simulation require the processing and generation of millions of data samples. These tasks are often composed of multiple smaller tasks divided over multiple (computing) sites. This paper discusses the Compact Muon Solenoid (CMS) workflow infrastructure, and specifically the Python-based workflow library which is used for so-called task lifecycle management. The CMS workflow infrastructure consists of three layers: high-level specification of the various tasks based on input/output data sets, lifecycle management of task instances derived from the high-level specification, and execution management. The workflow library is the result of a convergence of three CMS sub-projects that respectively deal with scientific analysis, simulation and real-time data aggregation from the experiment. This will reduce duplication and hence development and maintenance costs.
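
    The three layers described here (high-level task specification, lifecycle management of task instances, and execution management) suggest a simple state machine behind each task instance. The sketch below is a generic illustration of such lifecycle bookkeeping, not the interface of the CMS workflow library; the class, state and site names are invented.

        class TaskInstance:
            """Toy lifecycle manager for one task instance derived from a high-level spec."""

            # Allowed transitions between lifecycle states (invented names).
            TRANSITIONS = {
                "new":     {"queued"},
                "queued":  {"running", "failed"},
                "running": {"done", "failed"},
                "failed":  {"queued"},   # a failed instance may be resubmitted
                "done":    set(),
            }

            def __init__(self, spec_id, site):
                self.spec_id, self.site, self.state = spec_id, site, "new"

            def advance(self, new_state):
                if new_state not in self.TRANSITIONS[self.state]:
                    raise ValueError(f"illegal transition {self.state} -> {new_state}")
                self.state = new_state
                return self

        task = TaskInstance(spec_id="mc-production/0042", site="T2_Example_Site")
        for step in ("queued", "running", "done"):
            task.advance(step)
        print(task.spec_id, "finished at", task.site, "in state", task.state)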

  16. Using Brief Experimental Analysis to Intensify Tier 3 Reading Interventions

    Science.gov (United States)

    Coolong-Chaffin, Melissa; Wagner, Dana

    2015-01-01

    As implementation of multi-tiered systems of support becomes common practice across the nation, practitioners continue to need strategies for intensifying interventions and supports for the subset of students who fail to make adequate progress despite strong programs at Tiers 1 and 2. Experts recommend making several changes to the structure and…

  17. CMS fact sheet : to give an overview of the basic facts on the CMS Detector, its aims and collaboration

    CERN Multimedia

    CMS, Outreach

    2010-01-01

    2-sided color print A4 size sheet containing the facts on the CMS Detector, its name, what it is designed to do, questions scientists hope to answer, collaboration members, detector parts and their functions, and other miscellaneous facts on the CMS detector

  18. Xrootd Monitoring for the CMS Experiment

    International Nuclear Information System (INIS)

    Bauerdick, L A T; Bloom, K; Bockelman, B; Bradley, D C; Dasu, S; Sfiligoi, I; Tadel, A; Tadel, M; Wuerthwein, F; Yagil, A

    2012-01-01

    During spring and summer of 2011, CMS deployed Xrootd-based access for all US T1 and T2 sites. This allows for remote access to all experiment data on disk in the US. It is used for user analysis, visualization, running of jobs at computing sites when data is not available at local sites, and as a fail-over mechanism for data access in jobs. Monitoring of this Xrootd infrastructure is implemented on three levels. Basic service and data availability checks are performed by Nagios probes. The second level uses Xrootd's “summary data” stream; this data is aggregated from all sites and fed into a MonALISA service providing visualization and storage. The third level uses Xrootd's “detailed monitoring” stream, which includes detailed information about users, opened files and individual data transfers. A custom application was developed to process this information. It currently provides a real-time view of the system usage and can store data into ROOT files for detailed analysis. Detailed monitoring allows us to determine dataset popularity and to detect abuses of the system, including sub-optimal usage of the Xrootd protocol and the ROOT prefetching mechanism.
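
    The third monitoring level processes per-transfer records and is used, among other things, to derive dataset popularity. As a schematic illustration of that aggregation step (not the custom CMS application itself), the sketch below counts accesses, distinct users and bytes read per dataset from a list of records with an assumed layout; the record fields and dataset names are invented.

        from collections import defaultdict

        def dataset_popularity(transfer_records):
            """Aggregate per-dataset access counts, distinct users and bytes read from
            detailed-monitoring records; each record is assumed to carry the keys
            'dataset', 'user' and 'bytes_read'."""
            stats = defaultdict(lambda: {"accesses": 0, "bytes_read": 0, "users": set()})
            for rec in transfer_records:
                entry = stats[rec["dataset"]]
                entry["accesses"] += 1
                entry["bytes_read"] += rec["bytes_read"]
                entry["users"].add(rec["user"])
            return stats

        # Hypothetical detailed-monitoring records.
        records = [
            {"dataset": "/DoubleMu/Run2011A", "user": "alice", "bytes_read": 3_200_000_000},
            {"dataset": "/DoubleMu/Run2011A", "user": "bob",   "bytes_read": 1_100_000_000},
            {"dataset": "/QCD_Pt_120/Summer11", "user": "alice", "bytes_read": 750_000_000},
        ]
        for ds, s in dataset_popularity(records).items():
            print(f"{ds}: {s['accesses']} accesses, {len(s['users'])} users, "
                  f"{s['bytes_read'] / 1e9:.1f} GB read")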

  19. Xrootd monitoring for the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L. A.T. [Fermilab; Bloom, K. [Nebraska U.; Bockelman, B. [Nebraska U.; Bradley, D. C. [Wisconsin U., Madison; Dasu, S. [Wisconsin U., Madison; Sfiligoi, I. [UC, San Diego; Tadel, A. [UC, San Diego; Tadel, M. [UC, San Diego; Wuerthwein, F. [UC, San Diego; Yagil, A. [UC, San Diego

    2012-01-01

    During spring and summer of 2011, CMS deployed Xrootd-based access for all US T1 and T2 sites. This allows for remote access to all experiment data on disk in the US. It is used for user analysis, visualization, running of jobs at computing sites when data is not available at local sites, and as a fail-over mechanism for data access in jobs. Monitoring of this Xrootd infrastructure is implemented on three levels. Basic service and data availability checks are performed by Nagios probes. The second level uses Xrootd's summary data stream; this data is aggregated from all sites and fed into a MonALISA service providing visualization and storage. The third level uses Xrootd's detailed monitoring stream, which includes detailed information about users, opened files and individual data transfers. A custom application was developed to process this information. It currently provides a real-time view of the system usage and can store data into ROOT files for detailed analysis. Detailed monitoring allows us to determine dataset popularity and to detect abuses of the system, including sub-optimal usage of the Xrootd protocol and the ROOT prefetching mechanism.

  20. 42 CFR 411.386 - CMS's advisory opinions as exclusive.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false CMS's advisory opinions as exclusive. 411.386... Relationships Between Physicians and Entities Furnishing Designated Health Services § 411.386 CMS's advisory... described in § 411.370. CMS has not and does not issue a binding advisory opinion on the subject matter in...

  1. CMS Data Analysis School Model

    CERN Document Server

    Malik, Sudhir; Cavanaugh, R; Bloom, K; Chan, Kai-Feng; D'Hondt, J; Klima, B; Narain, M; Palla, F; Rolandi, G; Schörner-Sadenius, T

    2014-01-01

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born three years ago at the LPC (LHC Physics Center) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the earlier training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized by CMSDAS, has proven to be key for new and young physicists to jump-start their work and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS events around the globe, CMS is trying to engage the collaboration's discovery potential and maximize the physics output. As a bigger goal, CMS is striving to nurture and increase the engagement of the myriad talents of CMS in the development of physics, service, upgrades, the education of those new to CMS and the caree...

  2. Thin Double-gap RPCs for the Phase-2 Upgrade of the CMS Muon System

    CERN Document Server

    Lee, Kyong Sei

    2017-01-01

    Highly sensitive double-gap phenolic Resistive Plate Chambers (RPCs) are studied for the Phase-2 upgrade of the CMS muon system at high pseudorapidity $\eta$. Whereas the present CMS RPCs have a gas gap thickness of 2 mm, we propose to use thinner gas gaps, which will improve the performance of these RPCs. To validate this proposal, we constructed double-gap RPCs with two different gap thicknesses of 1.2 and 1.4 mm using high-pressure laminated plates with a mean resistivity of about 5 $\times$ 10$^{10}$ $\Omega$-cm. This paper presents test results obtained with cosmic muons and $^{137}$Cs gamma rays. The rate capabilities of these thin-gap RPCs measured with the gamma source exceed the maximum rate expected in the new high-$\eta$ endcap RPCs planned for future Phase-2 runs of the LHC.

  3. Higher tier field research in ecological risk assessment: a case study

    Energy Technology Data Exchange (ETDEWEB)

    Faber, J. [Alterra, Wageningen (Netherlands)

    2003-07-01

    A newly developed basic procedure for site-specific ecological risk assessment in The Netherlands was followed in practice for the first time. In line with conventional Triade approaches, the procedure includes multidisciplinary parameters from environmental chemistry, toxicology and ecology to provide multiple weight of evidence. However, land use at the contaminated site and its vicinity is given more importance, and research parameters are selected in accordance with specific objectives for land use in order to test for harmful effects on underlying ecosystem services. Moreover, the approach is characterized by repeated interactions between stakeholders and researching consultants, in particular with respect to the choice of parameters and criteria to assess the results. The approach was followed in an ecological risk assessment to test the assumptions underlying a soil management plan for a rural area in The Netherlands, called 'Krimpenerwaard'. Throughout this region some 5000 polder ditches have been filled with waste materials originating from local households, waterway sludge, industrial wastes, car shredders, and more. Several sites are severely polluted by heavy metals, cyanide, PAH or chlorinated hydrocarbons and require remediation or clean-up. However, the exact distribution of these wastes over the entire region is scarcely known, and the Krimpenerwaard as a whole is treated as one case of serious soil pollution. A soil management plan was constructed by 13 stakeholding parties, aiming for a 'functional clean-up' in view of land use, by means of covering 'suspected' categories of wastes with a 30-cm layer of local-type soil. The ecological risk assessment aims to verify the assumptions in the soil management plan regarding the prevention of possible undesirable effects induced by the various waste materials. A tiered approach is followed, including a screening for bioavailable contaminants, a testing for general effects

  4. Tiered gasoline pricing: A personal carbon trading perspective

    International Nuclear Information System (INIS)

    Li, Yao; Fan, Jin; Zhao, Dingtao; Wu, Yanrui; Li, Jun

    2016-01-01

    This paper proffers a tiered gasoline pricing method from a personal carbon trading perspective. An optimization model of personal carbon trading is proposed, and then, an equilibrium carbon price is derived according to the market clearing condition. Based on the derived equilibrium carbon price, this paper proposes a calculation method of tiered gasoline pricing. Then, sensitivity analyses and consumers' surplus analyses are conducted. It can be shown that a rise in gasoline price or a more generous allowance allocation would incur a decrease in the equilibrium carbon price, making the first tiered price higher, but the second tiered price lower. It is further verified that the proposed tiered pricing method is progressive because it would relieve the pressure of the low-income groups who consume less gasoline while imposing a greater burden on the high-income groups who consume more gasoline. Based on these results, implications, limitations and suggestions for future studies are provided. - Highlights: • Tiered gasoline pricing is calculated from the perspective of PCT. • Consumers would be burdened with different actual gasoline costs. • A specific example is provided to illustrate the calculation of TGP. • The tiered pricing mechanism is a progressive system.

  5. 42 CFR 411.379 - When CMS accepts a request.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false When CMS accepts a request. 411.379 Section 411.379... Physicians and Entities Furnishing Designated Health Services § 411.379 When CMS accepts a request. (a) Upon receiving a request for an advisory opinion, CMS promptly makes an initial determination of whether the...

  6. CMS Comic Book Brochure

    CERN Document Server

    2006-01-01

    To raise students' awareness of what the CMS detector is, how it was constructed and what it hopes to find. Titled "CMS Particle Hunter," this colorful comic book style brochure explains to young budding scientists and science enthusiasts in colorful animation how the CMS detector was made, its main parts, and what scientists hope to find using this complex tool.

  7. 42 CFR 405.1063 - Applicability of laws, regulations and CMS Rulings.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Applicability of laws, regulations and CMS Rulings... Medicare Coverage Policies § 405.1063 Applicability of laws, regulations and CMS Rulings. (a) All laws and... the MAC. (b) CMS Rulings are published under the authority of the Administrator, CMS. Consistent with...

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08. During the February campaign of the Common Computing Readiness Challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier-0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier-0, processing a massive number of very large files with a high writing speed to tape. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for tape-to-buffer staging of files kept exclusively on tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations laid the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  9. Four Tiers

    Science.gov (United States)

    Moodie, Gavin

    2009-01-01

    This paper posits a classification of tertiary education institutions into four tiers: world research universities, selecting universities, recruiting universities, and vocational institutes. The distinguishing characteristic of world research universities is their research strength, the distinguishing characteristic of selecting universities is…

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction. During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real-data reconstruction and re-reconstruction, and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  11. CMS users data management service integration and first experiences with its NoSQL data storage

    CERN Document Server

    Riahi, H; Cinquilli, M; Hernandez, J M; Konstantinov, P; Mascheroni, M; Santocchia, A

    2014-01-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location to where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficient use of CMS computing resources caused by transferring analysis job outputs synchronously, from the job execution node to the remote site, as soon as they are produced. The AsyncStageOut is designed as a thin application relying only on the NoSQL database (CouchDB) as input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all the user file steps, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/ATLAS Analysis Framework. It foresees the management of nearly 200k user files per day from close to 1000 individual users per month with minimal delays, providing real-time monitoring and repor...
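
    As a rough illustration of the CouchDB-as-queue design mentioned above, the sketch below records one pending user-file transfer as a JSON document through CouchDB's standard REST API. The database URL and the document fields are hypothetical and are not the actual AsyncStageOut schema.

      # Illustrative sketch of the CouchDB-as-queue idea: each pending user-file
      # transfer becomes one JSON document written through CouchDB's standard
      # REST API. The database URL and document fields are hypothetical, not the
      # actual AsyncStageOut schema.
      import uuid
      import requests

      COUCH_URL = "http://localhost:5984/asyncstageout_queue"  # assumed endpoint

      def enqueue_transfer(user, source_lfn, destination_se):
          """Insert one transfer-request document and return its id."""
          doc_id = uuid.uuid4().hex
          doc = {
              "user": user,
              "source_lfn": source_lfn,
              "destination_se": destination_se,
              "state": "new",          # e.g. new -> acquired -> done/failed
          }
          requests.put(f"{COUCH_URL}/{doc_id}", json=doc, timeout=10).raise_for_status()
          return doc_id

      if __name__ == "__main__":
          print(enqueue_transfer("jdoe",
                                 "/store/user/jdoe/output_1.root",
                                 "T2_IT_Pisa"))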

  12. Physics with CMS and Electronic Upgrades

    Energy Technology Data Exchange (ETDEWEB)

    Rohlf, James W. [Boston Univ., MA (United States)

    2016-08-01

    The current funding is for continued work on the Compact Muon Solenoid (CMS) at the CERN Large Hadron Collider (LHC) as part of the Energy Frontier experimental program. The current budget year covers the first year of physics running at 13 TeV (Run 2). During this period we have concentrated on commissioning of the μTCA electronics, a new standard for the distribution of CMS trigger and timing control signals and for high-bandwidth data acquisition, as well as participating in Run 2 physics.

  13. INFN Tier-1 Testbed Facility

    International Nuclear Information System (INIS)

    Gregori, Daniele; Cavalli, Alessandro; Dell'Agnello, Luca; Dal Pra, Stefano; Prosperini, Andrea; Ricci, Pierpaolo; Ronchieri, Elisabetta; Sapunenko, Vladimir

    2012-01-01

    INFN-CNAF, located in Bologna, is the Information Technology Center of the National Institute of Nuclear Physics (INFN). In the framework of the Worldwide LHC Computing Grid, INFN-CNAF is one of the eleven worldwide Tier-1 centers that store and reprocess Large Hadron Collider (LHC) data. The Italian Tier-1 provides the storage resources (i.e., disk space for short-term needs and tapes for long-term needs) and computing power that are needed for data processing and analysis by the LHC scientific community. Furthermore, the INFN Tier-1 houses computing resources for other particle physics experiments, like CDF at Fermilab and SuperB at Frascati, as well as for astroparticle and space physics experiments. The computing center is a very complex infrastructure: the hardware layer includes the network, storage and farming areas, while the software layer includes open source and proprietary software. Software updates and the addition of new hardware can unexpectedly degrade the production activity of the center; therefore a testbed facility has been set up in order to reproduce and certify the various layers of the Tier-1. In this article we describe the testbed and the checks performed.

  14. Technology Tiers

    DEFF Research Database (Denmark)

    Karlsson, Christer

    2015-01-01

    A technology tier is a level in a product system: final product, system, subsystem, component, or part. As a concept, it contrasts with traditional “vertical” special technologies (for example, mechanics and electronics) and focuses on “horizontal” feature technologies such as product characteristics...

  15. Evacuation drill at CMS

    CERN Multimedia

    Niels Dupont-Sagorin and Christoph Schaefer

    2012-01-01

    Training personnel, including evacuation guides and shifters, checking procedures, improving collaboration with the CERN Fire Brigade: the first real-life evacuation drill at CMS took place on Friday 3 February from 12 p.m. to 3 p.m. in the two caverns located at Point 5 of the LHC.   CERN personnel during the evacuation drill at CMS. Evacuation drills are required by law and have to be organized periodically in all areas of CERN, both above and below ground. The last drill at CMS, which took place in June 2007, revealed some desiderata, most notably the need for a public address system. With this equipment in place, it is now possible to broadcast audio messages from the CMS control room to the underground areas.   The CMS Technical Coordination Team and the GLIMOS have focused particularly on preparing collaborators for emergency situations by providing training and organizing regular safety drills with the HSE Unit and the CERN Fire Brigade. This Friday, the practical traini...

  16. Report on Tier-0 Scaling Tests

    CERN Multimedia

    M. Branco; L. Goossens; A. Nairz

    To get prepared for handling the enormous data rates and volumes during LHC operation, ATLAS is currently running so-called Tier-0 Scaling Tests, which started at the beginning of November and will last until Christmas. These tests are carried out in the context of LCG (LHC Computing Grid) Service Challenge 3 (SC3), a joint exercise of CERN IT and the LHC experiments to test the computing, network, and data management infrastructure, in particular its architecture, scalability and readiness for LHC data taking. ATLAS has adopted a multi-Tier hierarchical model to organise the workflow, with dedicated tasks to be performed at the individual levels in the Tier hierarchy. The Tier-0 centre located at CERN will be responsible for performing a first-pass reconstruction of the data arriving from the Event Filter farm, thus producing Event Summary Data (ESDs), Analysis Object Data (AODs) and event Tags, for processing calibration and alignment information, for archiving both raw and reconstructed data, and for ...

  17. Identifying tier one key suppliers.

    Science.gov (United States)

    Wicks, Steve

    2013-01-01

    In today's global marketplace, businesses are becoming increasingly reliant on suppliers for the provision of key processes, activities, products and services in support of their strategic business goals. The result is that now, more than ever, the failure of a key supplier has the potential to seriously damage reputation, productivity, compliance and financial performance. Yet despite this, there is no recognised standard or guidance for identifying a tier one key supplier base and, up to now, there has been little or no research on how to do so effectively. This paper outlines the key findings of a BCI-sponsored research project to investigate good practice in identifying tier one key suppliers, and suggests a scalable framework process model and risk matrix tool to help businesses effectively identify their tier one key supplier base.

  18. CMS computing upgrade and evolution

    CERN Document Server

    Hernandez Calama, Jose

    2013-01-01

    The distributed Grid computing infrastructure has been instrumental in the successful exploitation of the LHC data leading to the discovery of the Higgs boson. The computing system will need to face new challenges from 2015 on when LHC restarts with an anticipated higher detector output rate and event complexity, but with only a limited increase in the computing resources. A more efficient use of the available resources will be mandatory. CMS is improving the data storage, distribution and access as well as the processing efficiency. Remote access to the data through the WAN, dynamic data replication and deletion based on the data access patterns, and separation of disk and tape storage are some of the areas being actively developed. Multi-core processing and scheduling is being pursued in order to make a better use of the multi-core nodes available at the sites. In addition, CMS is exploring new computing techniques, such as Cloud Computing, to get access to opportunistic resources or as a means of using wit...

  19. 23 CFR 970.214 - Federal lands congestion management system (CMS).

    Science.gov (United States)

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false Federal lands congestion management system (CMS). 970....214 Federal lands congestion management system (CMS). (a) For purposes of this section, congestion...) Develop criteria to determine when a CMS is to be implemented for a specific transportation system; and (2...

  20. CMS tracker slides into centre stage

    CERN Document Server

    2006-01-01

    As preparations for the magnet test and cosmic challenge get underway, a prototype tracker has been carefully inserted into the centre of CMS. The tracker, in its special platform, is slowly inserted into the centre of CMS. The CMS prototype tracker to be used for the magnet test and cosmic challenge coming up this summer has the same dimensions (2.5 m in diameter and 6 m in length) as the real one and tooling exactly like it. However, the support tube is only about 1% equipped, with 2 m² of silicon detectors installed out of the total 200 m². This is already more than any LEP experiment ever used and indicates the great care needed to be taken by engineers and technicians as these fragile detectors were installed and transported to Point 5. Sixteen thousand silicon detectors with a total of about 10 million strips will make up the full tracker. So far, 140 modules with about 100 000 strips have been implanted into the prototype tracker. These silicon strips will provide precision tracking for cosmic muon...

  1. Uplink Interference Analysis for Two-tier Cellular Networks with Diverse Users under Random Spatial Patterns

    OpenAIRE

    Bao, Wei; Liang, Ben

    2013-01-01

    Multi-tier architecture improves the spatial reuse of radio spectrum in cellular networks, but it introduces complicated heterogeneity in the spatial distribution of transmitters, which brings new challenges in interference analysis. In this work, we present a stochastic geometric model to evaluate the uplink interference in a two-tier network considering multi-type users and base stations. Each type of tier-1 user and tier-2 base station is modeled as an independent homogeneous Poisson point...
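
    As a toy companion to the stochastic-geometry model outlined above, the sketch below draws interfering uplink transmitters as a homogeneous Poisson point process in a disc and accumulates the aggregate interference at a base station placed at the origin under a power-law path-loss model. The densities, transmit powers and path-loss exponent are illustrative only and are not taken from the paper.

      # Toy Monte Carlo in the spirit of the stochastic-geometry model above:
      # interfering uplink transmitters are drawn as a homogeneous Poisson point
      # process in a disc and the aggregate interference at a base station at the
      # origin is accumulated under power-law path loss. Densities, powers and
      # the path-loss exponent are illustrative only.
      import numpy as np

      rng = np.random.default_rng(42)

      def mean_interference(density, radius, tx_power, alpha, trials=5000):
          """Mean aggregate interference at the origin from a PPP of interferers."""
          area = np.pi * radius ** 2
          totals = np.empty(trials)
          for t in range(trials):
              n = rng.poisson(density * area)          # number of interferers
              r = radius * np.sqrt(rng.random(n))      # radii, uniform over the disc
              r = np.maximum(r, 1.0)                   # keep out of the near field
              totals[t] = np.sum(tx_power * r ** (-alpha))
          return totals.mean()

      if __name__ == "__main__":
          # e.g. tier-2 users: denser but transmitting at lower power
          i1 = mean_interference(density=1e-5, radius=2000.0, tx_power=1.0, alpha=3.5)
          i2 = mean_interference(density=1e-4, radius=2000.0, tx_power=0.1, alpha=3.5)
          print(f"mean interference from tier-1 users: {i1:.3e}")
          print(f"mean interference from tier-2 users: {i2:.3e}")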

  2. International Masterclass at CMS

    CERN Multimedia

    Lapka, M

    2012-01-01

    The CMS collaboration welcomed a class of French high school students to the CERN facility in Meyrin, Switzerland on 12 March 2012. Students spent the day meeting with physicists, hearing talks, asking questions, and participating in a hands-on exercise using real data collected by the CMS experiment at the Large Hadron Collider. Talks and other resources are available here: http://ippog-dev.web.cern.ch/resources/2012/ippog-international-masterclass-2012-cms

  3. The grand descent has begun for CMS

    CERN Multimedia

    2006-01-01

    Until recently, the CMS experimental cavern looked relatively empty; its detector was assembled entirely at ground level, to be lowered underground in 15 sections. On 2 November, the first hadronic forward calorimeter led the way with a grand descent. The first section of the CMS detector (centre of photo) arriving from the vertical shaft, viewed from the cavern floor. There is something unusual about the construction of the CMS detector. Instead of being built in the experimental cavern, like all the other detectors in the LHC experiments, it was constructed at ground level. This was to allow for easy access during the assembly of the detector and to minimise the size of the excavated cavern. The slightly nerve-wracking task of lowering it safely into the cavern in separate sections came after the complete detector was successfully tested with a magnetic field at ground level. In the early morning of 2 November, the first section of the CMS detector began its eagerly awaited descent into the underground ca...

  4. 42 CFR 411.382 - CMS's right to rescind advisory opinions.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false CMS's right to rescind advisory opinions. 411.382... Relationships Between Physicians and Entities Furnishing Designated Health Services § 411.382 CMS's right to rescind advisory opinions. Any advice CMS gives in an opinion does not prejudice its right to reconsider...

  5. Top-tier requirements for KNGR

    International Nuclear Information System (INIS)

    Sung-Jae, Ch.; Kwangho, L.; Dong Wook, J.

    1996-01-01

    In 1992, Korea Electric Power Corporation (KEPCO) launched the next generation reactor project to develop the standard design of an advanced pressurized water reactor by 2000. This advanced reactor aims to have sufficient capability to be a safe, environmentally sound and economical energy source for the 2000s in Korea. In conjunction with the project development, phase I of the program was studied, and it is in the first phase of the Korean Next Generation Reactor (KNGR) project that the requirements of this specification, called ''Top-tier'', have been established. These functional requirements are of the first importance for the design, construction and operation of a nuclear power plant. They are divided into safety requirements, severe accident control, design basis requirements, definition of the system characteristics, performance, construction feasibility, economic objectives, site parameters and design processes. The ''Top-tier'' requirements concentrate on the improvement of safety and reliability. Safety is one of the first priorities. In particular, the requirements for the design of the next generation of reactors must include the capacity to control severe accidents, because when an accident occurs the degree of protection is crucial. The KNGR requirements address competitiveness with existing nuclear power plants as well as with coal-fired thermal plants. Moreover, when safety is reinforced, economic competitiveness can be assured. At the present time, a subsequent specification for the KNGR is being prepared, taking into account the basis of domestic technology and operating experience. (O.M.)

  6. CMS Industries awarded gold, crystal

    CERN Multimedia

    2006-01-01

    The CMS collaboration honoured 10 of its top suppliers in the seventh annual awards ceremony. The representatives of the firms that received the CMS Gold and Crystal Awards stand with their awards after the ceremony. The seventh annual CMS Awards ceremony was held on Monday 13 March to recognize the industries that have made substantial contributions to the construction of the collaboration's detector. Nine international firms received Gold Awards, and General Tecnica of Italy received the prestigious Crystal Award. Representatives from the companies attended the ceremony during the plenary session of CMS week. 'The role of CERN, its machines and experiments, beyond particle physics, is to push the development of equipment technologies related to high-energy physics,' said CMS Awards Coordinator Domenico Campi. 'All of these industries must go beyond the technologies that are currently available.' Without the involvement of good companies over the years, the construction of the CMS detector wouldn't be possible...

  7. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    2010-01-01

    The Agendas and Minutes of the Management Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 The Agendas and Minutes of the Collaboration Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174

  8. A gLite FTS based solution for managing user output in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Cinquilli, M. [CERN; Riahi, H. [INFN, Perugia; Spiga, D. [CERN; Grandi, C. [INFN, Bologna; Mancinelli, V. [CERN; Mascheroni, M. [CERN; Pepe, F. [INFN, Bologna; Vaandering, E. [Fermilab

    2012-01-01

    The CMS distributed data analysis workflow assumes that jobs run in a different location from where their results are finally stored. Typically the user output must be transferred across the network from one site to another, possibly on a different continent or over links not necessarily validated for high-bandwidth/high-reliability transfer. This step is named stage-out and in CMS was originally implemented as a synchronous step of the analysis job execution. However, our experience showed the weakness of this approach, both in terms of low total job execution efficiency and of failure rates, wasting precious CPU resources. The nature of analysis data makes it inappropriate to use PhEDEx, the core data placement system for CMS. As part of the new generation of CMS Workload Management tools, the Asynchronous Stage-Out system (AsyncStageOut) has been developed to enable third-party copy of the user output. The AsyncStageOut component manages gLite FTS transfers of data from the temporary store at the site where the job ran to the final location of the data on behalf of that data owner. The tool uses Python daemons, built using the WMCore framework, and CouchDB, to manage the queue of work and FTS transfers. CouchDB also provides the platform for a dedicated operations monitoring system. In this paper, we present the motivations for the asynchronous stage-out system. We give an insight into the design and the implementation of key features, describing how it is coupled with the CMS workload management system. Finally, we show the results and the commissioning experience.
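
    The fragment below pictures the polling side of such a system: a daemon loop fetches queued documents from CouchDB and marks them as acquired before handing them to the transfer layer. Only CouchDB's generic _all_docs endpoint is used; the database URL and document fields (a simple schema with a "state" flag) are assumptions, and the actual gLite FTS submission is left as a placeholder comment rather than a real call.

      # Sketch of the polling side of an asynchronous stage-out daemon: fetch
      # queued documents from CouchDB, flip their state to "acquired" and hand
      # them to the transfer layer. Only CouchDB's generic _all_docs endpoint is
      # used; the database URL and the document fields (state, source_lfn) are a
      # hypothetical schema, and the actual gLite FTS submission is elided.
      import requests

      COUCH_URL = "http://localhost:5984/asyncstageout_queue"  # assumed endpoint

      def acquire_new_transfers(limit=50):
          """Return up to `limit` queued documents, marking them as acquired."""
          rows = requests.get(f"{COUCH_URL}/_all_docs",
                              params={"include_docs": "true"},
                              timeout=10).json()["rows"]
          acquired = []
          for row in rows:
              doc = row.get("doc") or {}
              if doc.get("state") != "new":
                  continue
              doc["state"] = "acquired"
              requests.put(f"{COUCH_URL}/{doc['_id']}",
                           json=doc, timeout=10).raise_for_status()
              acquired.append(doc)     # a real daemon would now submit an FTS job
              if len(acquired) >= limit:
                  break
          return acquired

      if __name__ == "__main__":
          for doc in acquire_new_transfers():
              print("would submit an FTS transfer for", doc["source_lfn"])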

  9. CERN Researchers' Night @ CMS + TOTEM

    CERN Multimedia

    Hoch, Michael

    2011-01-01

    Young researchers' shifter training at CMS: • Introduction talk with discussion • CMS control room shadowing of the shifters • TOTEM control room introduction and discussion • Scientific poster workshop and presentation • Science Art installations ‘Faces of CMS’ & ‘Science Cloud’ • CMS shift diploma presentation

  10. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    The Agendas and Minutes of the Management Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 The Agendas and Minutes of the Collaboration Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174

  11. Characterization of the CBC2 readout ASIC for the CMS strip-tracker high-luminosity upgrade

    International Nuclear Information System (INIS)

    Braga, D; Hall, G; Pesaresi, M; Raymond, M; Jones, L; Murray, P; Prydderch, M

    2014-01-01

    The CMS Binary Chip 2 (CBC2) is a full-scale prototype ASIC developed for the front-end readout of the high-luminosity upgrade of the CMS silicon strip tracker. The 254-channel, 130 nm CMOS ASIC is designed for the binary readout of double-layer modules, and features cluster-width discrimination and coincidence logic for detecting high-p_T track candidates. The chip was delivered in January 2013 and has since been bump-bonded to a dual-chip hybrid and extensively tested. The CBC2 is fully functional and working to specification: we present the results of the electrical characterization of the chip, including gain, noise, threshold scan and power consumption, together with the performance of the stub-finding logic. Finally, we outline the plan for future developments towards the production version.
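
    The cluster-width discrimination and layer coincidence mentioned above can be pictured with the toy logic below: wide clusters (typical of strongly bent, low-p_T tracks) are rejected, and a surviving seed cluster is kept only if the second sensor layer has a compatible cluster within a programmable window. The width cut and window size are arbitrary examples, not the CBC2's actual register settings.

      # Toy version of the two selection steps mentioned above: wide clusters
      # (typical of strongly bent, low-p_T tracks) are rejected, and a surviving
      # seed cluster is kept only if the second layer has a compatible cluster
      # within a programmable window. The width cut and window size are arbitrary
      # examples, not the CBC2's actual register settings.
      def clusters(hits):
          """Group fired strip numbers into (first_strip, width) clusters."""
          out, start, prev = [], None, None
          for s in sorted(hits):
              if start is None:
                  start = prev = s
              elif s == prev + 1:
                  prev = s
              else:
                  out.append((start, prev - start + 1))
                  start = prev = s
          if start is not None:
              out.append((start, prev - start + 1))
          return out

      def stubs(seed_hits, correlation_hits, max_width=3, window=5):
          """Return seed-layer cluster centres confirmed by the correlation layer."""
          seeds = [c for c in clusters(seed_hits) if c[1] <= max_width]
          corr = [c for c in clusters(correlation_hits) if c[1] <= max_width]
          found = []
          for s0, w0 in seeds:
              centre = s0 + (w0 - 1) / 2
              if any(abs(s1 + (w1 - 1) / 2 - centre) <= window for s1, w1 in corr):
                  found.append(centre)
          return found

      if __name__ == "__main__":
          print(stubs({100, 101}, {103, 104}))   # small offset -> stub found
          print(stubs({100, 101}, {120, 121}))   # large offset -> rejected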

  12. RFI to CMS: An Approach to Regulatory Acceptance of Site Remediation Technologies

    Science.gov (United States)

    Rowland, Martin A.

    2001-01-01

    Lockheed Martin made a smooth transition from the RCRA Facility Investigation (RFI) at the National Aeronautics and Space Administration's (NASA) Michoud Assembly Facility (MAF) to its Corrective Measures Study (CMS) phase within the RCRA Corrective Action Process. We located trichloroethylene (TCE) contamination that resulted from the manufacture of the Apollo Program Saturn V rocket and the Space Shuttle External Tank, began the cleanup, and identified appropriate technologies for final remedies. This was accomplished by establishing a close working relationship with the state environmental regulatory agency through each step of the process, and resulted in receiving approvals for each of those steps. The agency has designated Lockheed Martin's management of the TCE contamination at the MAF site as a model for other manufacturing sites in a similar situation. In February 1984, the Louisiana Department of Environmental Quality (LDEQ) issued a compliance order to begin the clean-up of groundwater contaminated with TCE. In April 1984 Lockheed Martin began operating a groundwater recovery well to capture the TCE plume. The well not only removes contaminants, but also sustains an inward groundwater hydraulic gradient so that the potential offsite migration of the TCE plume is greatly diminished. This effort was successful, and for the agency to give orders and for a regulated industry to follow them is standard procedure, but this is a passive approach to solving environmental problems. The goal of the company thereafter was to take a leadership, proactive role and guide the MAF contamination clean-up to its best conclusion in minimum time and at lowest cost to NASA. To accomplish this goal, we have established a positive working relationship with LDEQ, involving them interactively in the implementation of advanced remedial activities at MAF as outlined in the following paragraphs.

  13. Involvement of p38 MAPK- and JNK-modulated expression of Bcl-2 and Bax in Naja nigricollis CMS-9-induced apoptosis of human leukemia K562 cells.

    Science.gov (United States)

    Chen, Ying-Jung; Liu, Wen-Hsin; Kao, Pei-Hsiu; Wang, Jeh-Jeng; Chang, Long-Sen

    2010-06-15

    CMS-9, a phospholipase A(2) (PLA(2)) isolated from Naja nigricollis venom, induced apoptosis of human leukemia K562 cells, characterized by mitochondrial depolarization, modulation of Bcl-2 family members, cytochrome c release and activation of caspases 9 and 3. Moreover, an increase in intracellular Ca2+ concentration and the production of reactive oxygen species (ROS) was noted. Pretreatment with BAPTA-AM (Ca2+ chelator) and N-acetylcysteine (NAC, ROS scavenger) proved that Ca2+ was an upstream event in inducing ROS generation. Upon exposure to CMS-9, activation of p38 MAPK and JNK was observed in K562 cells. BAPTA-AM or NAC abrogated CMS-9-elicited p38 MAPK and JNK activation, and rescued viability of CMS-9-treated K562 cells. SB202190 (p38 MAPK inhibitor) and SP600125 (JNK inhibitor) suppressed CMS-9-induced dissipation of mitochondrial membrane potential, Bcl-2 down-regulation, Bax up-regulation and increased mitochondrial translocation of Bax. Inactivation of PLA(2) activity reduced drastically the cytotoxicity of CMS-9, and a combination of lysophosphatidylcholine and stearic acid mimicked the cytotoxic effects of CMS-9. Taken together, our data suggest that CMS-9-induced apoptosis of K562 cells is catalytic activity-dependent and is mediated through mitochondria-mediated death pathway triggered by Ca2+/ROS-evoked p38 MAPK and JNK activation. 2010 Elsevier Ltd. All rights reserved.

  14. System tests with silicon strip module prototypes for the Phase-2-upgrade of the CMS tracker

    Energy Technology Data Exchange (ETDEWEB)

    Feld, Lutz; Karpinski, Waclaw; Klein, Katja; Preuten, Marius [I. Physikalisches Institut B, RWTH Aachen University (Germany)

    2016-07-01

    To prepare the CMS experiment for the High Luminosity LHC and its instantaneous luminosity of 5 × 10^34 cm^-2 s^-1, the CMS Silicon Tracker will be replaced in Long Shutdown 3 (around 2024). The Silicon Strip Modules for the new Tracker will host two vertically stacked sensors. The combination of hit information from both sensors will allow the estimation of the transverse momentum (p_T) of charged particles in the module front-end. This can be used to identify hits from potentially interesting high-p_T tracks (above 2 GeV) for the first trigger level. The CMS Binary Chip (CBC) provides the analogue readout of the two sensors and a digital section, into which the momentum discrimination is integrated. The modules will host a new DC-DC converter chain, which will allow individual powering of each module. First measurements with early prototypes on the interplay between DC-DC powering and the read-out functions of the module are presented in this talk.
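
    The p_T discrimination principle described above can be quantified with a small-angle estimate: a track of transverse momentum p_T bends in the solenoid field, so its hits in the two stacked sensors are offset by roughly dx ~ gap * radius * 0.3 B / (2 p_T). The sketch below evaluates this relation; the sensor gap, module radius and strip pitch are illustrative numbers, not the final module design.

      # Back-of-the-envelope sketch of the p_T discrimination principle: a track
      # of transverse momentum p_T bends in the solenoid field, so its hits in
      # the two stacked sensors are offset by roughly
      #     dx ~ gap * radius * 0.3 * B / (2 * p_T)
      # (B in tesla, p_T in GeV). The sensor gap, module radius and strip pitch
      # below are illustrative numbers, not the final module design.
      B_FIELD_T = 3.8  # CMS solenoid field

      def hit_offset_mm(pt_gev, radius_m, sensor_gap_mm):
          """Approximate hit offset between the two stacked sensors, in mm."""
          return sensor_gap_mm * radius_m * 0.3 * B_FIELD_T / (2.0 * pt_gev)

      if __name__ == "__main__":
          pitch_mm = 0.090  # assumed strip pitch
          for pt in (1.0, 2.0, 5.0, 10.0):
              dx = hit_offset_mm(pt, radius_m=0.6, sensor_gap_mm=1.8)
              print(f"p_T = {pt:4.1f} GeV -> offset {dx:.3f} mm"
                    f" (~{dx / pitch_mm:.1f} strips)")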

  15. The CMS Data Quality Monitoring software experience and future improvements

    CERN Document Server

    De Guio, Federico

    2013-01-01

    The Data Quality Monitoring (DQM) Software proved to be a central tool in the CMS experiment. Its flexibility allowed its integration in several environments: Online, for real-time detector monitoring; Offline, for the final, fine-grained Data Certification; Release Validation, to constantly validate the functionality and the performance of the reconstruction software; and Monte Carlo productions. The central tool to deliver Data Quality information is a web site for browsing data quality histograms (DQM GUI). In this contribution, the usage of the DQM Software in the different environments and its integration in the CMS Reconstruction Software Framework and in all production workflows are presented.

  16. Next generation ATCA control infrastructure for the CMS Phase-2 upgrades

    CERN Document Server

    Smith, Wesley; Svetek, Aleš; Tikalsky, Jes; Fobes, Robert; Dasu, Sridhara; Smith, Wesley; Vicente, Marcelo

    2017-01-01

    A next generation control infrastructure to be used in Advanced TCA (ATCA) blades at the CMS experiment is being designed and tested. Several ATCA systems are being prepared for the High-Luminosity LHC (HL-LHC) and will be installed at CMS during technical stops. The next generation control infrastructure will provide all the necessary hardware, firmware and software required in these systems, decreasing development time and increasing flexibility. The complete infrastructure includes an Intelligent Platform Management Controller (IPMC), a Module Management Controller (MMC) and an Embedded Linux Mezzanine (ELM) processing card.

  17. The CMS all silicon Tracker simulation

    CERN Document Server

    Biasini, Maurizio

    2009-01-01

    The Compact Muon Solenoid (CMS) tracker is the world's largest silicon detector, with about 201 m$^2$ of silicon strip detectors and 1 m$^2$ of silicon pixel detectors. It contains 66 million pixels and 10 million individual sensing strips. The quality of the physics analysis is highly correlated with the precision of the Tracker detector simulation, which is written on top of GEANT4 and the CMS object-oriented framework. The hit position resolution in the Tracker depends on the ability to correctly model the CMS tracker geometry, the signal digitization and Lorentz drift, the calibration and the inefficiency. In order to ensure high performance in track and vertex reconstruction, an accurate knowledge of the material budget is therefore necessary, since the passive materials involved in the readout, cooling or power systems create unwanted effects during particle detection, such as multiple scattering, electron bremsstrahlung and photon conversion. In this paper, we present the CM...

  18. Local school children curious about CMS

    CERN Multimedia

    Joannah Caborn Wengler

    2012-01-01

    Imagine the scene: about 20-30 schoolchildren aged 8-11 and about 1.25 m tall; a couple of adults, let’s say on average 1.75 m tall, and then one high-energy physics experiment 15 m tall. This is what you could have seen on 2, 6 and 9 February in the CMS cavern, as two local schools participated in the “Be a scientist!” programme.   "I think they've got it..." Two classes from the primary school in the village of Cessy, where CMS is located, took part in the visits on 2 and 9 February, and all 36 pupils from CM2 (Year 6) at the Ecole des Bois in nearby Ornex took part in the visit on 6 February. “They asked so many questions,” says Sandrine Saison Marsollier, CERN’s educational officer for the local community, who accompanied some of the classes to CMS. “Most of them had practical questions about what they saw, for example how big and how heavy the experiment is, and which bit goes where. But some ...

  19. A new dawn for CMS

    CERN Multimedia

    2007-01-01

    Supported by a gigantic crane and a factory-size room full of enthusiasm, the central barrel of CMS made its final journey underground on 28 February. The central section of the CMS detector starts its dramatic 10-hour descent underground. Several hours (and 100 metres) later, the massive barrel rests on the cavern floor. CMS scientists, journalists, photographers and members of the transport crew basked in the final rays of the 'solenoid-set' on 28 February as the central barrel of the CMS detector sank below the horizon and began its ten-hour descent into the cavern 100 metres below. Thirteen metres long and weighing as much as five jumbo jets (1920 tonnes), the barrel is the largest of the 15 chunks of the CMS detector that are being lowered one by one into the cavern. 'This is a challenging feat of engineering, as there are just 20 cm of leeway between the detector and the walls of the shaft,' said Austin Ball, Technical Coordinator of CMS. The section of the detector, which contains the solenoid of the magne...

  20. 45 CFR 150.203 - Circumstances requiring CMS enforcement.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Circumstances requiring CMS enforcement. 150.203... CARE ACCESS CMS ENFORCEMENT IN GROUP AND INDIVIDUAL INSURANCE MARKETS CMS Enforcement Processes for... requiring CMS enforcement. CMS enforces HIPAA requirements to the extent warranted (as determined by CMS) in...

  1. Changing the batch system in a Tier 1 computing center: why and how

    Science.gov (United States)

    Chierici, Andrea; Dal Pra, Stefano

    2014-06-01

    At the Italian Tier1 Center at CNAF we are evaluating the possibility of changing the current production batch system. This activity is motivated mainly because we are looking for a more flexible licensing model as well as to avoid vendor lock-in. We performed a technology-tracking exercise and among many possible solutions we chose to evaluate Grid Engine as an alternative, because its adoption is increasing in the HEPiX community and because it is supported by the EMI middleware that we currently use on our computing farm. Another INFN site evaluated Slurm and we will compare our results in order to understand the pros and cons of the two solutions. We will present the results of our evaluation of Grid Engine, in order to understand if it can fit the requirements of a Tier 1 center, compared to the solution we adopted long ago. We performed a survey and a critical re-evaluation of our farming infrastructure: much of the production software (accounting and monitoring above all) relies on our current solution, and changing it required us to write new wrappers and adapt the infrastructure to the new system. We believe the results of this investigation can be very useful to other Tier-1 and Tier-2 centers in a similar situation, where the effort of switching may appear too hard to stand. We will provide guidelines in order to understand how difficult this operation can be and how long the change may take.
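
    The wrappers mentioned above are essentially thin adapters that hide the concrete batch system behind a stable site interface. The fragment below is a minimal sketch of that idea, assuming only the basic submission commands of Grid Engine (qsub) and Slurm (sbatch); a real wrapper would also translate resource requests, queues/partitions and accounting hooks, and the job script path shown is hypothetical.

      # Minimal sketch of a batch-system wrapper: site tools keep calling one
      # submit() function while the underlying system can be swapped between
      # Grid Engine (qsub) and Slurm (sbatch). Only the most basic submission
      # flags are shown; a real wrapper would also translate resource requests,
      # queues/partitions and accounting hooks. The job script is hypothetical
      # and the corresponding CLI must be installed for this to run.
      import subprocess

      def submit(script_path, job_name, system="gridengine"):
          """Submit a job script and return the raw submission output."""
          if system == "gridengine":
              cmd = ["qsub", "-N", job_name, script_path]
          elif system == "slurm":
              cmd = ["sbatch", f"--job-name={job_name}", script_path]
          else:
              raise ValueError(f"unsupported batch system: {system}")
          result = subprocess.run(cmd, capture_output=True, text=True, check=True)
          return result.stdout.strip()

      if __name__ == "__main__":
          print(submit("/tmp/hello_job.sh", "t1-validation", system="slurm"))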

  2. Higgs searches with CMS

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    The excellent performance of the LHC in the 2011 run is setting the grounds for the final chase of the Higgs boson. The CMS experiment is recording high-quality data that are being thoroughly scrutinized. Several decay channels are investigated to probe the entire possible Higgs mass spectrum, from 110 to 600 GeV/c^2. The study of the first 1.5/fb of collected data already places tight limits and excludes large fractions of the Higgs mass range, leaving however still open the search in the theoretically favoured low-mass region. In this seminar we will report on the diverse CMS analyses that yield such results, describing the experimental challenges that each had to meet.

  3. 42 CFR 410.142 - CMS process for approving national accreditation organizations.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false CMS process for approving national accreditation... Diabetes Self-Management Training and Diabetes Outcome Measurements § 410.142 CMS process for approving national accreditation organizations. (a) General rule. CMS may approve and recognize a nonprofit or not...

  4. CMS Brochure (german version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet.

  5. CMS brochure (English version)

    CERN Document Server

    Marcastel, Fabienne

    2014-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which has started up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet.

  6. CMS brochure (Spanish version)

    CERN Multimedia

    Lefevre, C

    2008-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet.

  7. CMS brochure (English version)

    CERN Document Server

    2017-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which started up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet. CMS est la plus lourde des expériences du LHC, l'accélérateur de particules le plus puissant au monde qui a été mis en service en 2008. Les détecteurs de cette expérience polyvalente sont placés autour d'un puissant aimant supraconducteur.

  8. CMS brochure (French version)

    CERN Document Server

    2017-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which started up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet. CMS est la plus lourde des expériences du LHC, l'accélérateur de particules le plus puissant au monde qui a été mis en service en 2008. Les détecteurs de cette expérience polyvalente sont placés autour d'un puissant aimant supraconducteur.

  9. PENERAPAN ARSITEKTUR MULTI-TIER DENGAN DCOM DALAM SUATU SISTEM INFORMASI

    Directory of Open Access Journals (Sweden)

    Kartika Gunadi

    2001-01-01

    Full Text Available Information system implementation using a two-tier architecture suffers from several critical weaknesses: component reuse, scalability, maintenance, and data security. The multi-tiered client/server architecture provides a good way to solve these problems using DCOM technology. The software was built with Delphi 4 Client/Server Suite and Microsoft SQL Server 7.0 as the database server software. The multi-tiered application is partitioned into three tiers. The first is the client application, which provides presentation services. The second is the application server, which provides application services, and the third is the database server, which provides database services. This multi-tiered application software can be built in two models: a Client/Server Windows model and a Client/Server Web model with ActiveX Form technology. This research found that building a multi-tiered architecture with DCOM technology provides many benefits, such as centralized application logic in the middle tier, thin client applications, distribution of the data-processing load over several machines, increased security through the ability to hide data, and fast maintenance without installing database drivers on every client.

  10. A multi-tiered architecture for content retrieval in mobile peer-to-peer networks.

    Science.gov (United States)

    2012-01-01

    In this paper, we address content retrieval in Mobile Peer-to-Peer (P2P) Networks. We design a multi-tiered architecture for content : retrieval, where at Tier 1, we design a protocol for content similarity governed by a parameter that trades accu...

  11. Tier 3 batch system data locality via managed caches

    Science.gov (United States)

    Fischer, Max; Giffels, Manuel; Jung, Christopher; Kühn, Eileen; Quast, Günter

    2015-05-01

    Modern data processing increasingly relies on data locality for performance and scalability, whereas the common HEP approaches aim for uniform resource pools with minimal locality, recently even across site boundaries. To combine the advantages of both, the High-Performance Data Analysis (HPDA) Tier 3 concept opportunistically establishes data locality via coordinated caches. In accordance with HEP Tier 3 activities, the design incorporates two major assumptions: first, only a fraction of the data is accessed regularly and is thus the deciding factor for overall throughput; second, data access may fall back to non-local, making permanent local data availability an inefficient resource usage strategy. Based on this, the HPDA design generically extends the available storage hierarchies into the batch system. Using the batch system itself for scheduling file locality, an array of independent caches on the worker nodes is dynamically populated with high-profile data. Cache state information is exposed to the batch system both for managing caches and for scheduling jobs. As a result, users directly work with a regular, adequately sized storage system. However, their automated batch processes are presented with local replications of the data whenever possible.
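
    A toy version of the cache-aware scheduling idea is sketched below: worker nodes are ranked by how much of a job's input data already sits in their local cache, and the job is dispatched to the best match, with remote reads accepted for whatever is missing. Node names, file paths and cache contents are invented for illustration.

      # Toy version of cache-aware scheduling: rank worker nodes by how much of a
      # job's input data already sits in their local cache and dispatch the job
      # to the best match, accepting remote reads for the rest. Node names, file
      # paths and cache contents are invented for illustration.
      def best_node(job_inputs, node_caches):
          """Pick the node whose cache covers the largest share of the inputs."""
          def coverage(cached):
              return len(set(job_inputs) & cached) / len(job_inputs) if job_inputs else 0.0
          node, cached = max(node_caches.items(), key=lambda kv: coverage(kv[1]))
          return node, coverage(cached)

      if __name__ == "__main__":
          caches = {
              "wn01": {"/data/run1/a.root", "/data/run1/b.root"},
              "wn02": {"/data/run2/c.root"},
          }
          job = ["/data/run1/a.root", "/data/run1/b.root", "/data/run2/c.root"]
          node, frac = best_node(job, caches)
          print(f"dispatch to {node}: {frac:.0%} of inputs already cached locally")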

  12. Achieving Tier 4 Emissions in Biomass Cookstoves

    Energy Technology Data Exchange (ETDEWEB)

    Marchese, Anthony [Colorado State Univ., Fort Collins, CO (United States); DeFoort, Morgan [Colorado State Univ., Fort Collins, CO (United States); Gao, Xinfeng [Colorado State Univ., Fort Collins, CO (United States); Tryner, Jessica [Colorado State Univ., Fort Collins, CO (United States); Dryer, Frederick L. [Princeton Univ., Princeton, NJ (United States); Haas, Francis [Princeton Univ., Princeton, NJ (United States); Lorenz, Nathan [Envirofit International, Fort Collins, CO (United States)

    2018-03-13

    Previous literature on top-lit updraft (TLUD) gasifier cookstoves suggested that these stoves have the potential to be the lowest emitting biomass cookstove. However, the previous literature also demonstrated a high degree of variability in TLUD emissions and performance, and a lack of general understanding of the TLUD combustion process. The objective of this study was to improve understanding of the combustion process in TLUD cookstoves. In a TLUD, biomass is gasified and the resulting producer gas is burned in a secondary flame located just above the fuel bed. The goal of this project is to enable the design of a more robust TLUD that consistently meets Tier 4 performance targets through a better understanding of the underlying combustion physics. The project featured a combined modeling, experimental and product design/development effort comprised of four different activities: Development of a model of the gasification process in the biomass fuel bed; Development of a CFD model of the secondary combustion zone; Experiments with a modular TLUD test bed to provide information on how stove design, fuel properties, and operating mode influence performance and provide data needed to validate the fuel bed model; Planar laser-induced fluorescence (PLIF) experiments with a two-dimensional optical test bed to provide insight into the flame dynamics in the secondary combustion zone and data to validate the CFD model; Design, development and field testing of a market-ready TLUD prototype. Over 180 tests of 40 different configurations of the modular TLUD test bed were performed to demonstrate how stove design, fuel properties and operating mode influence performance, and the conditions under which Tier 4 emissions are obtainable. Images of OH and acetone PLIF were collected at 10 kHz with the optical test bed. The modeling and experimental results informed the design of a TLUD prototype that met Tier 3 to Tier 4 specifications in emissions and Tier 2 in efficiency. The

  13. Integration of Detectors Into a Large Experiment: Examples From ATLAS and CMS

    CERN Document Server

    Froidevaux, D

    2011-01-01

    Integration of Detectors Into a Large Experiment: Examples From ATLAS and CMS, part of 'Landolt-Börnstein - Group I Elementary Particles, Nuclei and Atoms: Numerical Data and Functional Relationships in Science and Technology, Volume 21B2: Detectors for Particles and Radiation. Part 2: Systems and Applications'. This document is part of Part 2 'Principles and Methods' of Subvolume B 'Detectors for Particles and Radiation' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Chapter '5 Integration of Detectors Into a Large Experiment: Examples From ATLAS and CMS' with the content: 5 Integration of Detectors Into a Large Experiment: Examples From ATLAS and CMS 5.1 Introduction 5.1.1 The context 5.1.2 The main initial physics goals of ATLAS and CMS at the LHC 5.1.3 A snapshot of the current status of the ATLAS and CMS experiments 5.2 Overall detector concept and magnet systems 5.2.1 Overall detector concept 5.2.2 Magnet systems 5.2.2.1 Rad...

  14. Legacy2Drupal: Conversion of an existing relational oceanographic database to a Drupal 7 CMS

    Science.gov (United States)

    Work, T. T.; Maffei, A. R.; Chandler, C. L.; Groman, R. C.

    2011-12-01

    Content Management Systems (CMSs) such as Drupal provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have already designed and implemented customized schemas for their metadata. The NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) has ported an existing relational database containing oceanographic metadata, along with an existing interface coded in ColdFusion middleware, to a Drupal 7 Content Management System. This is an update on an effort described as a proof-of-concept in poster IN21B-1051, presented at AGU 2009. The BCO-DMO project has translated all the existing database tables, input forms, website reports, and other features present in the existing system into Drupal CMS features. The replacement features are made possible by the use of Drupal content types, CCK node-reference fields, a custom theme, and a number of other supporting modules. This presentation describes the process used to migrate content in the original BCO-DMO metadata database to Drupal 7, some problems encountered during migration, and the modules used to migrate the content successfully. Strategic use of Drupal 7 CMS features that enable three separate but complementary interfaces to provide access to oceanographic research metadata will also be covered: 1) a Drupal 7-powered user front-end; 2) RESTful JSON web services (providing a MapServer interface to the metadata and data); and 3) a SPARQL interface to a semantic representation of the repository metadata (this feeding a new faceted search capability currently under development). The existing BCO-DMO ontology, developed in collaboration with Rensselaer Polytechnic Institute's Tetherless World Constellation, makes strategic use of pre-existing ontologies and will be used to drive semantically-enabled faceted search capabilities planned for the site. At this point, the use of semantic
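
    At its core, the migration described above moves rows out of a legacy relational store and turns each one into a piece of CMS content. The sketch below pictures that flow generically, with SQLite standing in for the legacy database and a hypothetical REST endpoint creating the content; the table and field names are invented, and the actual BCO-DMO migration relied on Drupal-specific content types and modules rather than this generic path.

      # Schematic view of the migration path: rows are pulled from a legacy
      # relational store and pushed as JSON to a web endpoint that creates CMS
      # content. SQLite stands in for the legacy database, the endpoint URL is
      # hypothetical, and the table/field names are invented; the real BCO-DMO
      # migration used Drupal-specific content types and modules instead.
      import sqlite3
      import requests

      ENDPOINT = "https://cms.example.org/api/dataset"  # hypothetical REST endpoint

      def migrate(db_path):
          """Push every legacy 'dataset' row to the endpoint; return the count."""
          conn = sqlite3.connect(db_path)
          rows = conn.execute(
              "SELECT id, title, pi_name, abstract FROM dataset").fetchall()
          for legacy_id, title, pi_name, abstract in rows:
              payload = {"legacy_id": legacy_id, "title": title,
                         "pi_name": pi_name, "body": abstract}
              requests.post(ENDPOINT, json=payload, timeout=30).raise_for_status()
          conn.close()
          return len(rows)

      if __name__ == "__main__":
          print("migrated", migrate("legacy_bcodmo.sqlite"), "records")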

  15. Design of multi-tiered database application based on CORBA component

    International Nuclear Information System (INIS)

    Sun Xiaoying; Dai Zhimin

    2003-01-01

    As computer technology develops rapidly, middleware technology has changed the traditional two-tier database system. The multi-tiered database system, consisting of client application programs, application servers and database servers, is now the main approach, and building multi-tiered database systems using CORBA components has become the mainstream technique. In this paper, an example of the DUV-FEL database system is presented, and the realization of a multi-tiered database based on CORBA components is discussed. (authors)

  16. Breaks Are Better: A Tier II Social Behavior Intervention

    Science.gov (United States)

    Boyd, R. Justin; Anderson, Cynthia M.

    2013-01-01

    Multi-tiered systems of social behavioral support in schools provide varying levels of intervention matched to student need. Tier I (primary or universal) systems are for all students and are designed to promote pro-social behavior. Tier III (tertiary or intensive) supports are for students who engage in serious challenging behavior that has not…

  17. Prospective Environmental Risk Assessment for Sediment-Bound Organic Chemicals: A Proposal for Tiered Effect Assessment.

    Science.gov (United States)

    Diepens, Noël J; Koelmans, Albert A; Baveco, Hans; van den Brink, Paul J; van den Heuvel-Greve, Martine J; Brock, Theo C M

    A broadly accepted framework for prospective environmental risk assessment (ERA) of sediment-bound organic chemicals is currently lacking. Such a framework requires clear protection goals, evidence-based concepts that link exposure to effects and a transparent tiered-effect assessment. In this paper, we provide a tiered prospective sediment ERA procedure for organic chemicals in sediment, with a focus on the applicable European regulations and the underlying data requirements. Using the ecosystem services concept, we derived specific protection goals for ecosystem service providing units: microorganisms, benthic algae, sediment-rooted macrophytes, benthic invertebrates and benthic vertebrates. Triggers for sediment toxicity testing are discussed. We recommend a tiered approach (Tier 0 through Tier 3). Tier-0 is a cost-effective screening based on chronic water-exposure toxicity data for pelagic species and equilibrium partitioning. Tier-1 is based on spiked sediment laboratory toxicity tests with standard benthic test species and standardised test methods. If comparable chronic toxicity data for both standard and additional benthic test species are available, the Species Sensitivity Distribution (SSD) approach is a more viable Tier-2 option than the geometric mean approach. This paper includes criteria for accepting results of sediment-spiked single species toxicity tests in prospective ERA, and for the application of the SSD approach. We propose micro/mesocosm experiments with spiked sediment, to study colonisation success by benthic organisms, as a Tier-3 option. Ecological effect models can be used to supplement the experimental tiers. A strategy for unifying information from various tiers by experimental work and exposure- and effect-modelling is provided.
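
    The Species Sensitivity Distribution step proposed for Tier 2 can be pictured numerically: chronic toxicity endpoints for several benthic species are fitted with a distribution, and the hazardous concentration for 5% of species (HC5) is read off as its 5th percentile. The sketch below does this with a log-normal fit; the endpoint values are invented for illustration and are not data from any real assessment.

      # Numerical sketch of the SSD step proposed for Tier 2: fit a log-normal
      # distribution to chronic toxicity endpoints for several benthic species
      # and read off HC5, the concentration expected to affect 5% of species.
      # The endpooint values below are invented for illustration; they are not
      # data from any real assessment.
      import numpy as np
      from scipy import stats

      # hypothetical chronic NOECs for benthic test species (mg/kg dry sediment)
      noecs = np.array([0.8, 1.5, 2.2, 3.9, 5.1, 7.4, 12.0, 18.5])

      log_vals = np.log10(noecs)
      mu, sigma = log_vals.mean(), log_vals.std(ddof=1)

      # HC5 is the 5th percentile of the fitted log-normal distribution
      hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
      print(f"fitted log10 mean = {mu:.2f}, sd = {sigma:.2f}")
      print(f"HC5 = {hc5:.2f} mg/kg dry sediment")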

  18. Auger Physicists visit CMS

    CERN Multimedia

    Hoch, Michael

    2012-01-01

    Visit at CERN P5 CMS in the experimental cavern Alan Watson, Auger Spokesperson Emeritus, University of Leeds; Jim Cronin, Nobel Laureate, Auger Spokesperson Emeritus, University of Chicago; Jim Virdee, CMS Former Spokesperson, Imperial College; Jim Matthews, Auger Co-Spokesperson, Louisiana State University

  19. Study of the timing performance of the SKIROC2-CMS for the CMS HGCAL

    CERN Document Server

    Huiberts, Simon Kristian

    2017-01-01

    The High Luminosity phase of the LHC (starting operation in 2025) will provide unprecedented instantaneous and integrated luminosity, with 25 ns bunch crossing intervals and up to 140 pileup events. In this context, the High Granularity Calorimeter will provide electromagnetic and hadronic energy measurements in the forward direction of the upgraded CMS. The test beam campaign of the first HGCal modules, started in Summer 2016 at CERN with 8 fully equipped layers of the EE section, will continue in Summer 2017 with the aim of testing a full prototype including the electromagnetic and the hadronic parts. The assessment of the calorimeter performance on a beam test bench is a fundamental phase in the development of a new detector, allowing tests of the mechanical structure and electronics chain, characterization of the module performance and measurement of the shower development for electrons and hadrons. The aim of the work was to determine the timing performance and the timing characteristics of the single module tested in May 2...

  20. 1990 Tier Two emergency and hazardous chemical inventory

    International Nuclear Information System (INIS)

    1991-03-01

    This document contains the 1990 Tier Two Emergency and Hazardous Chemical Inventory. Submission of this Tier Two form (when requested) is required by Title 3 of the Superfund Amendments and Reauthorization Act of 1986, Section 312, Public Law 99-499, codified at 42 U.S.C. Section 11022. The purpose of this Tier Two form is to provide State and local officials and the public with specific information on hazardous chemicals present at your facility during the past year.

  1. Status and Trends in Networking at LHC Tier1 Facilities

    Science.gov (United States)

    Bobyshev, A.; DeMar, P.; Grigaliunas, V.; Bigrow, J.; Hoeft, B.; Reymund, A.

    2012-12-01

    The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both the WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, as well as extrapolating where we anticipate networking technology is heading. Our analysis will include examination into the following areas: • Evolution of Tier1 centers to their current state • Evolving data center networking models and how they apply to Tier1 centers • Impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers • Trends in WAN data movement and emergence of software-defined WAN network capabilities • Network virtualization

  2. Status and Trends in Networking at LHC Tier1 Facilities

    International Nuclear Information System (INIS)

    Bobyshev, A; DeMar, P; Grigaliunas, V; Bigrow, J; Hoeft, B; Reymund, A

    2012-01-01

    The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both the WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, as well as extrapolating where we anticipate networking technology is heading. Our analysis will include examination into the following areas: • Evolution of Tier1 centers to their current state • Evolving data center networking models and how they apply to Tier1 centers • Impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers • Trends in WAN data movement and emergence of software-defined WAN network capabilities • Network virtualization

  3. Status and trends in networking at LHC Tier1 facilities

    Energy Technology Data Exchange (ETDEWEB)

    Bobyshev, A. [Fermilab; DeMar, P. [Fermilab; Grigaliunas, V. [Fermilab; Bigrow, J. [Brookhaven; Hoeft, B. [KIT, Karlsruhe; Reymund, A. [KIT, Karlsruhe

    2012-06-22

    The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both the WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, as well as extrapolating where we anticipate networking technology is heading. Our analysis will include examination into the following areas: • Evolution of Tier1 centers to their current state • Evolving data center networking models and how they apply to Tier1 centers • Impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers • Trends in WAN data movement and emergence of software-defined WAN network capabilities • Network virtualization

  4. 42 CFR 460.28 - Notice of CMS determination on waiver requests.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Notice of CMS determination on waiver requests. 460... CMS determination on waiver requests. (a) Time limit for notification of determination. Within 90 days after receipt of a waiver request, CMS takes one of the following actions: (1) Approves the request. (2...

  5. 42 CFR 411.380 - When CMS issues a formal advisory opinion.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false When CMS issues a formal advisory opinion. 411.380... Relationships Between Physicians and Entities Furnishing Designated Health Services § 411.380 When CMS issues a formal advisory opinion. (a) CMS considers an advisory opinion to be issued once it has received payment...

  6. The ATLAS Tier-0 Overview and operational experience

    CERN Document Server

    Elsing, M; Nairz, A; Negri, G

    2010-01-01

    Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for prompt processing of the raw data coming from the online DAQ system, to archive the raw and derived data on tape, to register the data with the relevant catalogues and to distribute them to the associated Tier-1 centers. The Tier-0 is already fully functional. It has been successfully participating in all cosmic and commissioning data taking since May 2007, and was ramped up to its foreseen full size, performance and throughput for the cosmic (and short single-beam) run periods between July and October 2008. Data and work flows for collision data taking were exercised in several "Full Dress Rehearsals" (FDRs) in the course of 2008. The transition from an expert to a shifter-based system was successfully established in July 2008. This article will give an overview of the Tier-0 system, its data and work flows, and operations model. It will review the operational experience gained in cosmic, c...

  7. System developments and measurements for the readout and calibration of CMS pipeline chips for applied research and series tests of CMS strip detectors

    CERN Document Server

    Petertill, Markus

    2001-01-01

    The future 14 TeV proton-proton accelerator LHC at CERN will serve the CMS experiment as a high-rate source of deep inelastic interactions of quarks and gluons. CMS at the LHC will be one of the "discovery machines" for new particles and theories. The central tracker in the superconducting 4 T magnet of CMS has to ensure precise track reconstruction in space-time. Part I introduces the major tasks of the central tracker in order to prepare the main points of the thesis. In CMS one has to cope with particle fluences of about 10^6 cm^-2 s^-1 and L1 trigger rates of 100 kHz. System developments have led to a powerful data acquisition system (DAQ), constructed in VME, for emulation of the hardware algorithms in the front-end driver and for research into the properties of CMS microstrip detectors. The experience and the results point to specific problems for the operation of the CMS tracker. For most of them solutions will be found which can be emulated in the DAQ or simulated with offline data. If possible ...

  8. Photos from the CMS Photo Book

    CERN Multimedia

    Boreham, S

    2008-01-01

    Photos from the CMS Photo Book. Activities at Point 5 in Cessy, France, between 1998 and 2008. Images of assembly and installation of the CMS detector: - Civil Engineering - Assembly in the Surface Building - Lowering of the Heavy Elements - Installing and connecting the CMS detector in the underground experiment caverns. These images illustrate the assembly, installation and commissioning of the CMS detector. They cover the activities at Point 5 in Cessy, France, between 1998 and 2008. CMS is one of the most complex scientific instruments ever built. It has taken about 20 years to go from conceptual design to the completion of construction of the CMS detector for the LHC start-up in September 2008. Accomplishing this has required the talents, efforts and resources of over 2500 scientists and engineers from about 180 institutions in 38 countries. Compiled by: S. Cittolin, F. Marcastel and T.S. Virdee

  9. The CMS Beam Halo Monitor electronics

    International Nuclear Information System (INIS)

    Tosi, N.; Fabbri, F.; Montanari, A.; Torromeo, G.; Dabrowski, A.E.; Orfanelli, S.; Grassi, T.; Hughes, E.; Mans, J.; Rusack, R.; Stifter, K.; Stickland, D.P.

    2016-01-01

    The CMS Beam Halo Monitor has been successfully installed in the CMS cavern during LHC Long Shutdown 1 for measuring the machine-induced background for LHC Run II. The system is based on 40 detector units composed of synthetic quartz Cherenkov radiators coupled to fast photomultiplier tubes (PMTs). The readout electronics chain uses many components developed for the Phase 1 upgrade of the CMS Hadronic Calorimeter electronics, with dedicated firmware and readout adapted to the beam monitoring requirements. The PMT signal is digitized by a charge integrating ASIC (QIE10), providing both the signal rise time, with a resolution of a few nanoseconds, and the charge integrated over one bunch crossing. The backend electronics uses microTCA technology and receives data via a high-speed 5 Gbps asynchronous link. It records histograms with sub-bunch-crossing timing resolution and is read out via IPbus using the newly designed CMS data acquisition for non-event-based data. The data is processed in real time and published to CMS and the LHC, providing online feedback on the beam quality. A dedicated calibration monitoring system has been designed to generate short triggered pulses of light to monitor the efficiency of the system. The electronics has been in operation since the first LHC beams of Run II and has served as the first demonstration of the new QIE10, the Microsemi Igloo2 FPGA and the high-speed 5 Gbps link with LHC data.
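
    A schematic of the per-bunch-crossing accumulation described above is sketched below: each digitised sample carries a bunch-crossing number, an integrated charge and a rise-time value, and occupancy above a charge threshold is histogrammed per bunch. The data layout, threshold and sample values are invented for illustration; the real system implements this in the microTCA back-end firmware, not offline.

        # Schematic per-bunch-crossing accumulation for a beam-background monitor.
        # Each sample: (bunch_crossing, charge_fC, rise_time_ns); values are invented.
        import numpy as np

        N_BX = 3564                 # LHC orbit length in bunch crossings
        CHARGE_THRESHOLD = 50.0     # fC, illustrative threshold above pedestal

        samples = [(1, 120.0, 2.1), (1, 30.0, 5.0), (2, 400.0, 1.4), (3563, 75.0, 2.8)]

        occupancy = np.zeros(N_BX)      # hits above threshold per bunch crossing
        charge_sum = np.zeros(N_BX)     # integrated charge per bunch crossing

        for bx, charge, rise_time in samples:
            charge_sum[bx] += charge
            if charge > CHARGE_THRESHOLD:
                occupancy[bx] += 1

        print("occupied bunches:", np.nonzero(occupancy)[0])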

  10. First results on RB2 muon barrel RPC detector for CMS

    Energy Technology Data Exchange (ETDEWEB)

    Abbrescia, M.; Altieri, S.; Belli, G.; Bruno, G.; Colaleo, A. E-mail: anna.colaleo@cern.ch; Guida, R.; Iaselli, G.; Loddo, F.; Maggi, M.; Marangelli, B.; Natali, S.; Nuzzo, S.; Pugliese, G.; Ranieri, A.; Ratti, S.P.; Riccardi, C.; Romano, F.; Torre, P.; Vanini, S.; Vitulo, P

    2003-08-01

    The first CMS MB2 station, with one RPC and one DT module, has been tested with a muon beam under a high intensity photon flux at the CERN Gamma Irradiation Facility during the Autumn 2001 test. Results on efficiency, rate capability, cluster size and spatial resolution, for the RPC detector, are reported here. Studies with a small percentage of SF{sub 6} in the gas mixture, in order to decrease the noise rate, have also been carried out.

  11. A Novel Amphibian Tier 2 Testing Protocol: A 30-Week Exposure of Xenopus Tropicalis to the Antiandrogen Flutamide

    National Research Council Canada - National Science Library

    Knechtges, Paul L; Sprando, Robert L; Porter, Karen L; Brennan, Linda M; Miller, Mark F; Kumsher, David M; Dennis, William E; Brown, Charles C; Clegg, Eric D

    2007-01-01

    .... For that reason, a tier 2 testing protocol using Xenopus (Silurana) tropicalis and a 30-week, flow-through exposure to the antiandrogen flutamide from stage 46 tadpoles through sexually mature adult frogs were developed and evaluated in this pilot study...

  12. CMS Higgs boson results

    CERN Document Server

    Bluj, Michal Jacek

    2018-01-01

    In this report we review recent Higgs boson results obtained with pp collisions at $\\sqrt{s}=\\,$13 TeV recorded by the CMS detector in 2016, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. The 2016 data allowed the observation of the $H \\to \\tau\\tau$ and $H \\to WW$ decays with high significance. We also present a combined measurement based on a full set of CMS analyses performed with 2016 data. These results are compatible with the standard model predictions, with the precision of several measurements exceeding that of the combination of ATLAS and CMS data collected in 2011 and 2012.

  13. 47 CFR 76.1605 - New product tier.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false New product tier. 76.1605 Section 76.1605 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Notices § 76.1605 New product tier. (a) Within 30 days of the offering of an...

  14. 26 CFR 1.444-4 - Tiered structure.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 6 2010-04-01 2010-04-01 false Tiered structure. 1.444-4 Section 1.444-4 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Accounting Periods § 1.444-4 Tiered structure. (a) Electing small business trusts. For...

  15. Test beam results of the first CMS double-sided strip module prototypes using the CBC2 read-out chip

    Energy Technology Data Exchange (ETDEWEB)

    Harb, Ali, E-mail: ali.harb@desy.de; Mussgiller, Andreas; Hauk, Johannes

    2017-02-11

    The CMS Binary Chip (CBC) is a prototype version of the front-end read-out ASIC to be used in the silicon strip modules of the CMS outer tracking detector during the high luminosity phase of the LHC. The CBC is produced in 130 nm CMOS technology and bump-bonded to the hybrid of a double layer silicon strip module, the so-called 2S-p{sub T} module. It has 254 input channels and is designed to provide on-board trigger information to the first level trigger system of CMS, with the capability of cluster-width discrimination and high-p{sub T} track identification. In November 2013 the first 2S-p{sub T} module prototypes equipped with the CBC chips were put to the test at the DESY-II test beam facility. Data were collected using a beam of positrons with an energy ranging from 2 to 4 GeV. In this paper the test setup and the results are presented.

  16. Russian institute receives CMS Gold Award

    CERN Multimedia

    Patrice Loïez

    2003-01-01

    The Snezhinsk All-Russian Institute of Scientific Research for Technical Physics (VNIITF) of the Russian Federal Nuclear Centre (RFNC) is one of twelve CMS suppliers to receive awards for outstanding performance this year. The CMS Collaboration took the opportunity of the visit to CERN of the Director of VNIITF and his deputy to present the CMS Gold Award, which the institute has received for its exceptional performance in the assembly of steel plates for the CMS forward hadronic calorimeter. This calorimeter consists of two sets of 18 wedge-shaped modules arranged concentrically around the beam-pipe at each end of the CMS detector. Each module consists of steel absorber plates with quartz fibres inserted into them. The institute developed a special welding technique to assemble the absorber plates, enabling a high-quality detector to be produced at relatively low cost.RFNC-VNIITF Director Professor Georgy Rykovanov (right), is seen here receiving the Gold Award from Felicitas Pauss, Vice-Chairman of the CMS ...

  17. An algal model for predicting attainment of tiered biological criteria of Maine's streams and rivers

    Science.gov (United States)

    Danielson, Thomas J.; Loftin, Cyndy; Tsomides, Leonidas; DiFranco, Jeanne L.; Connors, Beth; Courtemanch, David L.; Drummond, Francis; Davies, Susan

    2012-01-01

    State water-quality professionals developing new biological assessment methods often have difficulty relating assessment results to narrative criteria in water-quality standards. An alternative to selecting index thresholds arbitrarily is to include the Biological Condition Gradient (BCG) in the development of the assessment method. The BCG describes tiers of biological community condition to help identify and communicate the position of a water body along a gradient of water quality ranging from natural to degraded. Although originally developed for fish and macroinvertebrate communities of streams and rivers, the BCG is easily adapted to other habitats and taxonomic groups. We developed a discriminant analysis model with stream algal data to predict attainment of tiered aquatic-life uses in Maine's water-quality standards. We modified the BCG framework for Maine stream algae, related the BCG tiers to Maine's tiered aquatic-life uses, and identified appropriate algal metrics for describing BCG tiers. Using a modified Delphi method, 5 aquatic biologists independently evaluated algal community metrics for 230 samples from streams and rivers across the state and assigned a BCG tier (1–6) and Maine water quality class (AA/A, B, C, nonattainment of any class) to each sample. We used minimally disturbed reference sites to approximate natural conditions (Tier 1). Biologist class assignments were unanimous for 53% of samples, and 42% of samples differed by 1 class. The biologists debated and developed consensus class assignments. A linear discriminant model built to replicate a priori class assignments correctly classified 95% of 150 samples in the model training set and 91% of 80 samples in the model validation set. Locally derived metrics based on BCG taxon tolerance groupings (e.g., sensitive, intermediate, tolerant) were more effective than were metrics developed in other regions. Adding the algal discriminant model to Maine's existing macroinvertebrate discriminant
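
    The modelling step described above can be sketched as a linear discriminant model trained on algal community metrics to predict a tiered water-quality class. The metrics, data and train/validation split below are placeholders; the actual model used Maine's BCG-based metrics and the biologists' consensus class assignments.

        # Sketch of a linear discriminant model predicting tiered water-quality
        # classes from algal metrics. Data and metric names are placeholders.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 230
        # Hypothetical metrics: % sensitive taxa, % tolerant taxa, diversity index.
        X = rng.normal(size=(n, 3))
        # Hypothetical consensus classes: AA/A, B, C, NA (non-attainment).
        y = rng.choice(["AA/A", "B", "C", "NA"], size=n)

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.35, random_state=0)

        lda = LinearDiscriminantAnalysis()
        lda.fit(X_train, y_train)
        print("validation accuracy:", lda.score(X_test, y_test))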

  18. 42 CFR 489.53 - Termination by CMS.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Termination by CMS. 489.53 Section 489.53 Public... Reinstatement After Termination § 489.53 Termination by CMS. (a) Basis for termination of agreement with any provider. CMS may terminate the agreement with any provider if CMS finds that any of the following failings...

  19. Jim Virdee, the new spokesperson of CMS

    CERN Multimedia

    2006-01-01

    Jim Virdee and Michel Della Negra. On 21 June Tejinder 'Jim'Virdee was elected by the CMS collaboration as its new spokesperson, his 3-year term of office beginning in January 2007. He will take over from Michel Della Negra, who has been CMS spokesperson since its formalization in 1992. Three distinguished physicists stood as candidates for this election: Dan Green from Fermilab, programme manager of the US-CMS collaboration and coordinator of the CMS Hadron Calorimeter project; Jim Virdee from Imperial College London and CERN, deputy spokesperson of CMS since 1993; Gigi Rolandi from the University of Trieste and CERN, ex-Aleph spokesperson and currently involved in the preparations of the physics analyses to be done with CMS. On the early evening of 21 June, 141 of the 142 members of the CMS collaboration board, some represented by proxies, took part in a secret ballot. After two rounds of voting Jim Virdee was elected as spokesperson with a clear majority. Jim thanked the CMS collaboration 'for putting conf...

  20. 42 CFR 401.625 - Effect of CMS claims collection decisions on appeals.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Effect of CMS claims collection decisions on... Compromise § 401.625 Effect of CMS claims collection decisions on appeals. Any action taken under this..., is not an initial determination for purposes of CMS appeal procedures. ...

  1. Data Scouting in CMS

    CERN Document Server

    Anderson, Dustin James

    2016-01-01

    In 2011, the CMS collaboration introduced Data Scouting as a way to produce physics results with events that cannot be stored on disk, due to resource limits in the data acquisition and offline infrastructure. The viability of this technique was demonstrated in 2012, when 18 fb$^{-1}$ of collision data at $\\sqrt{s}$ = 8 TeV were collected. The technique is now a standard ingredient of CMS and ATLAS data-taking strategy. In this talk, we present the status of data scouting in CMS and the improvements introduced in 2015 and 2016, which promoted data scouting to a full-fledged, flexible discovery tool for the LHC Run II.

  2. Ian Taylor MBE MP Chairman Parliamentary and Scientific Committee, United Kingdom (second from left) with (from left to right) CMS Technical Coordinator A. Ball, CMS Spokesperson Tejinder (Jim) Virdee and Adviser to the Director-General J. Ellis on 2 November 2009.

    CERN Multimedia

    Maximilien Brice; CMS

    2009-01-01

    Ian Taylor MBE MP Chairman Parliamentary and Scientific Committee, United Kingdom (second from left) with (from left to right) CMS Technical Coordinator A. Ball, CMS Spokesperson Tejinder (Jim) Virdee and Adviser to the Director-General J. Ellis on 2 November 2009.

  3. CMS data and workflow management system

    CERN Document Server

    Fanfani, A; Bacchi, W; Codispoti, G; De Filippis, N; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Silvestris, L; Calzolari, F; Sarkar, S; Spiga, D; Cinquili, M; Lacaprara, S; Biasotto, M; Farina, F; Merlo, M; Belforte, S; Kavka, C; Sala, L; Harvey, J; Hufnagel, D; Fanzago, F; Corvo, M; Magini, N; Rehn, J; Toteva, Z; Feichtinger, D; Tuura, L; Eulisse, G; Bockelman, B; Lundstedt, C; Egeland, R; Evans, D; Mason, D; Gutsche, O; Sexton-Kennedy, L; Dagenhart, D W; Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V; Fisk, I; McBride, P; Bauerdick, L; Bakken, J; Rossman, P; Wicklund, E; Wu, Y; Jones, C; Kuznetsov, V; Riley, D; Dolgert, A; van Lingen, F; Narsky, I; Paus, C; Klute, M; Gomez-Ceballos, G; Piedra-Gomez, J; Miller, M; Mohapatra, A; Lazaridis, C; Bradley, D; Elmer, P; Wildish, T; Wuerthwein, F; Letts, J; Bourilkov, D; Kim, B; Smith, P; Hernandez, J M; Caballero, J; Delgado, A; Flix, J; Cabrillo-Bartolome, I; Kasemann, M; Flossdorf, A; Stadie, H; Kreuzer, P; Khomitch, A; Hof, C; Zeidler, C; Kalini, S; Trunov, A; Saout, C; Felzmann, U; Metson, S; Newbold, D; Geddes, N; Brew, C; Jackson, J; Wakefield, S; De Weirdt, S; Adler, V; Maes, J; Van Mulders, P; Villella, I; Hammad, G; Pukhaeva, N; Kurca, T; Semneniouk, I; Guan, W; Lajas, J A; Teodoro, D; Gregores, E; Baquero, M; Shehzad, A; Kadastik, M; Kodolova, O; Chao, Y; Ming Kuo, C; Filippidis, C; Walzel, G; Han, D; Kalinowski, A; Giro de Almeida, N M; Panyam, N

    2008-01-01

    CMS expects to manage many tens of petabytes of data to be distributed over several computing centers around the world. The CMS distributed computing and analysis model is designed to serve, process and archive the large number of events that will be generated when the CMS detector starts taking data. The underlying concepts and the overall architecture of the CMS data and workflow management system will be presented. In addition the experience in using the system for MC production, initial detector commissioning activities and data analysis will be summarized.

  4. Retrospective on the Seniors' Council Tier 1 LDRD portfolio.

    Energy Technology Data Exchange (ETDEWEB)

    Ballard, William Parker

    2012-04-01

    This report describes the Tier 1 LDRD portfolio, administered by the Seniors Council between 2003 and 2011. 73 projects were sponsored over the 9 years of the portfolio at a cost of $10.5 million, which includes $1.9M of a special effort in directed innovation targeted at climate change and cyber security. Two of these Tier 1 efforts were the seeds for the Grand Challenge LDRDs in Quantum Computing and Next Generation Photovoltaic conversion. A few LDRDs were terminated early when it appeared clear that the research was not going to succeed. A great many more were successful and led to full Tier 2 LDRDs or direct customer sponsorship. Over a dozen patents are in various stages of prosecution from this work, and one project is being submitted for an R&D 100 award.

  5. Three-year summary report of biological monitoring at the Southwest Ocean dredged-material disposal site and additional locations off Grays Harbor, Washington, 1990--1992

    Energy Technology Data Exchange (ETDEWEB)

    Antrim, L.D.; Shreffler, D.K.; Pearson, W.H.; Cullinan, V.I. [Battelle Marine Research Lab., Sequim, WA (United States)

    1992-12-01

    The Grays Harbor Navigation Improvement Project was initiated to improve navigation by widening and deepening the federal channel at Grays Harbor. Dredged-material disposal sites were selected after an extensive review process that included inter-agency agreements, biological surveys, other laboratory and field studies, and preparation of environmental impact statements. The Southwest Site was designated to receive materials dredged during annual maintenance dredging as well as the initial construction phase of the project. The Southwest Site was located, and the disposal operations designed, primarily to avoid impacts to Dungeness crab. The Final Environmental Impact Statement Supplement for the project incorporated a Site Monitoring Plan in which a tiered approach to disposal site monitoring was recommended. Under Tier 1 of the Site Monitoring Plan, Dungeness crab densities are monitored to confirm that large aggregations of newly settled Dungeness crab have not moved onto the Southwest Site. Tier 2 entails an increased sampling effort to determine whether a change in disposal operations is needed. Four epibenthic surveys using beam trawls were conducted in 1990, 1991, and 1992 at the Southwest Site and North Reference area, where high crab concentrations were found in the spring of 1985. Survey results during these three years prompted no Tier 2 activities. Epibenthic surveys were also conducted at two nearshore sites where construction of sediment berms has been proposed. This work is summarized in an appendix to this report.

  6. Calibration of the CMS Hadron Calorimeter in Run 2

    CERN Document Server

    Chadeeva, Marina

    2017-01-01

    Various calibration techniques for the CMS Hadron calorimeter in Run 2 and the results of calibration using 2016 collision data are presented. The radiation damage corrections, intercalibration of different channels using the phi-symmetry technique for barrel, endcap and forward calorimeter regions are described, as well as the intercalibration with muons of the outer hadron calorimeter. The achieved intercalibration precision is within 3%. The in situ energy scale calibration is performed in the barrel and endcap regions using isolated charged hadrons and in the forward calorimeter using the Z$\\rightarrow ee$ process. The impact of pileup and the developed technique of correction for pileup is also discussed. The achieved uncertainty of the response to hadrons is 3.4% in the barrel and 2.6% in the endcap region (at $|\\eta| < 2$) and is dominated by the systematic uncertainty due to pileup contributions.

  7. Calibration of the CMS hadron calorimeter in Run 2

    Science.gov (United States)

    Chadeeva, M.; Lychkovskaya, N.

    2018-03-01

    Various calibration techniques for the CMS Hadron calorimeter in Run 2 and the results of calibration using 2016 collision data are presented. The radiation damage corrections, intercalibration of different channels using the phi-symmetry technique for barrel, endcap and forward calorimeter regions are described, as well as the intercalibration with muons of the outer hadron calorimeter. The achieved intercalibration precision is within 3%. The in situ energy scale calibration is performed in the barrel and endcap regions using isolated charged hadrons and in the forward calorimeter using the Z → ee process. The impact of pileup and the developed technique of correction for pileup is also discussed. The achieved uncertainty of the response to hadrons is 3.4% in the barrel and 2.6% in the endcap region (in the pseudorapidity range |η| < 2) and is dominated by the systematic uncertainty due to pileup contributions.
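
    A simplified sketch of the phi-symmetry idea is given below: within a ring of channels at fixed pseudorapidity, the accumulated energy should on average be independent of phi, so the ratio of the ring average to an individual channel's sum can serve as its intercalibration constant. The array shapes and numbers are invented; the actual CMS procedure involves many additional corrections.

        # Simplified phi-symmetry intercalibration: channels in the same eta ring
        # should on average see the same energy, so deviations define corrections.
        # Shapes and values are illustrative only.
        import numpy as np

        n_eta_rings, n_phi = 29, 72
        rng = np.random.default_rng(1)

        # Accumulated energy sums per channel (eta ring x phi sector), with an
        # artificial channel-to-channel miscalibration of about 5%.
        true_response = 1.0 + 0.05 * rng.normal(size=(n_eta_rings, n_phi))
        energy_sum = true_response * rng.uniform(0.9e6, 1.1e6, size=(n_eta_rings, 1))

        ring_mean = energy_sum.mean(axis=1, keepdims=True)
        intercalib = ring_mean / energy_sum   # multiplicative correction per channel

        # Closure check: corrected sums are flat in phi by construction.
        corrected = energy_sum * intercalib
        print("residual spread per ring:", corrected.std(axis=1).max())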

  8. The CMS Tracker upgrade for HL-LHC

    CERN Document Server

    Ahuja, Sudha

    2017-01-01

    The LHC machine is planning an upgrade program which will smoothly bring the luminosity to about 5 $\\times$ $10^{34}$ cm$^{-2}$s$^{-1}$ in 2028, to possibly reach an integrated luminosity of 3000 fb$^{-1}$ by the end of 2037. This High Luminosity LHC scenario, HL-LHC, will require a preparation program of the LHC detectors known as the Phase-2 upgrade. The current CMS Outer Tracker, already running beyond design specifications, and the CMS Phase-1 Pixel Detector will not be able to survive HL-LHC radiation conditions, and CMS will need completely new devices in order to fully exploit the demanding operating conditions and the delivered luminosity. The new Outer Tracker should also have trigger capabilities. To achieve such goals, R&D activities are ongoing to explore options both for the Outer Tracker and for the pixel Inner Tracker. Solutions are being developed that would allow including tracking information at Level-1. The design choices for the Tracker upgrades are discussed along with some highlights...

  9. LHCC COMPREHENSIVE REVIEW OF CMS (JULY 07)

    CERN Multimedia

    Extract from the Draft Report 1. EXECUTIVE SUMMARY The CMS Collaboration has made significant progress towards producing a detector ready for LHC operation in 2008. The past year saw all sub-detector groups successfully produce high-quality components and modules, and integrate them into the final objects to be installed into the CMS magnet. Installation and commissioning of final components in the CMS UXC55 cavern are well under way. In particular, the heavy lowering of detector elements into the CMS experiment cavern is a major success. The new CMS master schedule V36 incorporates the revised LHC machine schedule and includes an optimized detector sequencing. In spite of various delays, it remains possible that CMS will have an initial detector ready to exploit the initial LHC run in spring 2008. Installation of the Electromagnetic Calorimeter End-Cap (EE) and Pre-shower (ES) detectors is scheduled to be completed no sooner than July 2008 and CMS now plans to install the complete Pixel Detector for ...

  10. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    International Nuclear Information System (INIS)

    Bonacorsi, D; Neri, M; Boccali, T; Giordano, D; Girone, M; Magini, N; Kuznetsov, V; Wildish, T

    2015-01-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the Worldwide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity” of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for

  11. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    Science.gov (United States)

    Bonacorsi, D.; Boccali, T.; Giordano, D.; Girone, M.; Neri, M.; Magini, N.; Kuznetsov, V.; Wildish, T.

    2015-12-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the Worldwide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity” of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for
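
    A toy sketch of the kind of popularity aggregation discussed above: count accesses per dataset from an access log, record the last access date, and flag datasets not read for a given period as candidates for replica reduction. The log entries, dataset names and 90-day threshold are illustrative; CMS derives such quantities from its popularity monitoring data.

        # Toy popularity aggregation: accesses per dataset and "cold" dataset flagging.
        # Log entries, dataset names and the 90-day threshold are illustrative.
        from collections import defaultdict
        from datetime import date, timedelta

        access_log = [                       # (dataset, access date)
            ("/ZMM/Run2012A/AOD", date(2013, 5, 2)),
            ("/ZMM/Run2012A/AOD", date(2013, 5, 9)),
            ("/TTJets/Summer12/AODSIM", date(2013, 1, 15)),
        ]

        today = date(2013, 6, 1)
        n_accesses = defaultdict(int)
        last_access = {}

        for dataset, day in access_log:
            n_accesses[dataset] += 1
            last_access[dataset] = max(last_access.get(dataset, day), day)

        cold = [d for d, last in last_access.items() if today - last > timedelta(days=90)]
        print("accesses:", dict(n_accesses))
        print("replica-reduction candidates:", cold)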

  12. ATLAS, CMS and new challenges for public communication

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Lucas [Fermilab; Barney, David [CERN; Goldfarb, Steven [Michigan U.

    2011-01-01

    On 30 March 2010 the first high-energy collisions brought the LHC experiments into the era of research and discovery. Millions of viewers worldwide tuned in to the webcasts and followed the news via Web 2.0 tools, such as blogs, Twitter, and Facebook, with 205,000 unique visitors to CERN's Web site. Media coverage at the experiments and in institutes all over the world yielded more than 2,200 news items including 800 TV broadcasts. We describe the new multimedia communications challenges, due to the massive public interest in the LHC programme, and the corresponding responses of the ATLAS and CMS experiments, in the areas of Web 2.0 tools, multimedia, webcasting, videoconferencing, and collaborative tools. We discuss the strategic convergence of the two experiments' communications services, information systems and public database of outreach material.

  13. ATLAS, CMS and New Challenges for Public Communication

    Science.gov (United States)

    Taylor, Lucas; Barney, David; Goldfarb, Steven

    2011-12-01

    On 30 March 2010 the first high-energy collisions brought the LHC experiments into the era of research and discovery. Millions of viewers worldwide tuned in to the webcasts and followed the news via Web 2.0 tools, such as blogs, Twitter, and Facebook, with 205,000 unique visitors to CERN's Web site. Media coverage at the experiments and in institutes all over the world yielded more than 2,200 news items including 800 TV broadcasts. We describe the new multimedia communications challenges, due to the massive public interest in the LHC programme, and the corresponding responses of the ATLAS and CMS experiments, in the areas of Web 2.0 tools, multimedia, webcasting, videoconferencing, and collaborative tools. We discuss the strategic convergence of the two experiments' communications services, information systems and public database of outreach material.

  14. ATLAS, CMS and New Challenges for Public Communication

    International Nuclear Information System (INIS)

    Taylor, Lucas; Barney, David; Goldfarb, Steven

    2011-01-01

    On 30 March 2010 the first high-energy collisions brought the LHC experiments into the era of research and discovery. Millions of viewers worldwide tuned in to the webcasts and followed the news via Web 2.0 tools, such as blogs, Twitter, and Facebook, with 205,000 unique visitors to CERN's Web site. Media coverage at the experiments and in institutes all over the world yielded more than 2,200 news items including 800 TV broadcasts. We describe the new multimedia communications challenges, due to the massive public interest in the LHC programme, and the corresponding responses of the ATLAS and CMS experiments, in the areas of Web 2.0 tools, multimedia, webcasting, videoconferencing, and collaborative tools. We discuss the strategic convergence of the two experiments' communications services, information systems and public database of outreach material.

  15. ATLAS, CMS and New Challenges for Public Communication

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Lucas [Fermilab, PO Box 500, Batavia, IL 60510-5011 (United States); Barney, David [CERN, CH-1211, Geneva 23 (Switzerland); Goldfarb, Steven [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)

    2011-12-23

    On 30 March 2010 the first high-energy collisions brought the LHC experiments into the era of research and discovery. Millions of viewers worldwide tuned in to the webcasts and followed the news via Web 2.0 tools, such as blogs, Twitter, and Facebook, with 205,000 unique visitors to CERN's Web site. Media coverage at the experiments and in institutes all over the world yielded more than 2,200 news items including 800 TV broadcasts. We describe the new multimedia communications challenges, due to the massive public interest in the LHC programme, and the corresponding responses of the ATLAS and CMS experiments, in the areas of Web 2.0 tools, multimedia, webcasting, videoconferencing, and collaborative tools. We discuss the strategic convergence of the two experiments' communications services, information systems and public database of outreach material.

  16. The CMS Magnetic Field Map Performance

    CERN Document Server

    Klyukhin, V.I.; Andreev, V.; Ball, A.; Cure, B.; Herve, A.; Gaddi, A.; Gerwig, H.; Karimaki, V.; Loveless, R.; Mulders, M.; Popescu, S.; Sarycheva, L.I.; Virdee, T.

    2010-04-05

    The Compact Muon Solenoid (CMS) is a general-purpose detector designed to run at the highest luminosity at the CERN Large Hadron Collider (LHC). Its distinctive features include a 4 T superconducting solenoid with a 6 m diameter by 12.5 m long free bore, enclosed inside a 10000-ton return yoke made of construction steel. Accurate characterization of the magnetic field everywhere in the CMS detector is required. During two major tests of the CMS magnet the magnetic flux density was measured inside the coil in a cylinder of 3.448 m diameter and 7 m length with a specially designed field-mapping pneumatic machine as well as in 140 discrete regions of the CMS yoke with NMR probes, 3-D Hall sensors and flux-loops. A TOSCA 3-D model of the CMS magnet has been developed to describe the magnetic field everywhere outside the tracking volume measured with the field-mapping machine. A volume based representation of the magnetic field is used to provide the CMS simulation and reconstruction software with the magnetic field ...

  17. Guido Tonelli elected next CMS spokesperson

    CERN Multimedia

    2009-01-01

    Guido Tonelli has been elected as the next CMS spokesperson. He will take over from Jim Virdee on January 1, 2010, and will head the collaboration through the first crucial year of data-taking. Guido Tonelli, CMS spokesperson-elect, into the CMS cavern. "It will be very tough and there will be enormous pressure," explains Guido Tonelli, CMS spokesperson-elect. "It will be the first time that CMS will run for a whole year so it is important to go through the checklist to be able to take good quality data." Tonelli, who is currently CMS Deputy spokesperson, will take over from Jim Virdee on January 1, 2010 – only a few months into CMS’s first full year of data-taking. "The collisions will probably be different to our expectations. So it’s going to take the effort of the entire collaboration worldwide to be ready for this new phase." Born in Italy, Tonelli originally studied at the University of Pisa, where he is now a Professo...

  18. Distributed computing grid experiences in CMS

    CERN Document Server

    Andreeva, Julia; Barrass, T; Bonacorsi, D; Bunn, Julian; Capiluppi, P; Corvo, M; Darmenov, N; De Filippis, N; Donno, F; Donvito, G; Eulisse, G; Fanfani, A; Fanzago, F; Filine, A; Grandi, C; Hernández, J M; Innocente, V; Jan, A; Lacaprara, S; Legrand, I; Metson, S; Newbold, D; Newman, H; Pierro, A; Silvestris, L; Steenberg, C; Stockinger, H; Taylor, Lucas; Thomas, M; Tuura, L; Van Lingen, F; Wildish, Tony

    2005-01-01

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at 25 Hz input rate; to distribute the data to several regional centers; and enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure ...

  19. Impact of multi-tiered pharmacy benefits on attitudes of plan members with chronic disease states.

    Science.gov (United States)

    Nair, Kavita V; Ganther, Julie M; Valuck, Robert J; McCollum, Marianne M; Lewis, Sonya J

    2002-01-01

    To evaluate the effects of 2- and 3-tiered pharmacy benefit plans on member attitudes regarding their pharmacy benefits. We performed a mail survey and cross-sectional comparison of the outcome variables in a large managed care population in the western United States. Participants were persons with chronic disease states who were in 2- or 3-tier copay drug plans. A random sample of 10,662 was selected from a total of 25,008 members who had received 2 or more prescriptions for a drug commonly used to treat one of 5 conditions: hypertension, diabetes, dyslipidemia, gastroesophageal reflux disease (GERD), or arthritis. Statistical analysis included bivariate comparisons and regression analysis of the factors affecting member attitudes, including satisfaction, loyalty, health plan choices, and willingness to pay a higher out-of-pocket cost for medications. A response rate of 35.8% was obtained from continuously enrolled plan members. Respondents were older, sicker, and consumed more prescriptions than nonrespondents. There were significant differences in age and health plan characteristics between 2- and 3-tier plan members: respondents aged 65 or older represented 11.7% of 2-tier plan members and 54.7% of 3-tier plan members, and 10.0% of 2-tier plan members were in Medicare+Choice plans versus 61.4% in Medicare+Choice plans for 3-tier plan members (Pbrand-name medications, in general, they were not willing to pay more than 10 dollars (in addition to their copayment amount) for these medications. Older respondents and sicker individuals (those with higher scores on the Chronic Disease Indicator) appeared to have more positive attitudes toward their pharmacy benefit plans in general. Higher reported incomes by respondents were also associated with greater satisfaction with prescription drug coverage and increased loyalty toward the pharmacy benefit plan. Conversely, the more individuals spent for either their health care or prescription medications, the less satisfied

  20. 42 CFR 417.801 - Agreements between CMS and health care prepayment plans.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Agreements between CMS and health care prepayment... CMS and health care prepayment plans. (a) General requirement. (1) In order to participate and receive... written agreement with CMS. (2) An existing group practice prepayment plan (GPPP) that continues as an...

  1. CMS Status

    International Nuclear Information System (INIS)

    Dobrzynski, L.

    2007-01-01

    The status of the construction and installation of the CMS detector is reviewed. The 4 T magnet has been cold since the end of February 2006. Its commissioning up to the nominal field started in July 2006, allowing a Cosmic Challenge in which elements of the final detector are involved. All big mechanical pieces equipped with muon chambers have been assembled in the surface hall SX5. Since mid-July the detector has been closed with the commissioned HCAL, two ECAL supermodules and representative elements of the silicon tracker. The trigger system as well as the DAQ are being tested. With the physics TDR achieved, CMS is now ready for the promising signal hunting. (author)

  2. Faces of CMS: Photomosaic (September 2013, low-resolution)

    CERN Multimedia

    Antonelli, Jamie

    2013-01-01

    The "Faces of CMS" photomosaic project aims to show the human element of the CMS Experiment. Most of the images for public outreach show the experimental equipment of CMS or physics results and collision displays. With a collaboration of around 3,000 people scattered around the globe, it's difficult to present the members of CMS in any one image. We asked any interested CMS members to sign up for the project, and allow us to use their photographs. The resulting photo mosaic contains the faces of 1,271 CMS members.

  3. New Management for CMS

    CERN Document Server

    CERN Bulletin

    2010-01-01

    As of January 2010, Guido Tonelli becomes the new CMS Spokesperson with a two-year term of office. A Professor of General Physics at the University of Pisa, Italy, and a CERN Staff Member since January 2010, Tonelli had already been appointed as Deputy Spokesperson under the previous management. He has taken over from Jim Virdee, who was CMS Spokesperson from January 2007 to December 2009. Guido Tonelli, new CMS spokesperson At the same time as Tonelli becomes Spokesperson, two new Deputies, Albert De Roeck and Joe Incandela, as well as a whole new set of Coordinators, are also starting their terms of office. ”With the first data-taking run we have shown that CMS is an excellent experiment. The next challenge will be to transform CMS into a discovery machine with a view to making it synonymous with scientific excellence. This will be very tough but, again, the winning element will be the focus and coherent effort of the whole collaboration. On my side I'll do my best but I will need...

  4. Calorimeter Simulation with Hadrons in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Piperov, Stefan; /Sofiya, Inst. Nucl. Res. /Fermilab

    2008-11-01

    CMS is using Geant4 to simulate the detector setup for the forthcoming data from the LHC. Validation of physics processes inside Geant4 is a major concern in view of getting a proper description of jets and missing energy for signal and background events. This is done by carrying out extensive studies with test beams using prototypes or real detector modules of the CMS calorimeter. These data are matched with Geant4 predictions using the same framework that is used for the entire CMS detector. Tuning of the Geant4 models is carried out, and the steps to be used in reproducing detector signals are defined, in view of measurements of energy response, energy resolution, and transverse and longitudinal shower profiles for a variety of hadron beams over a broad energy range between 2 and 300 GeV/c. The tuned Monte Carlo predictions match many of these measurements within systematic uncertainties.
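
    The response and resolution measurements mentioned above can be sketched as follows: for each beam momentum, the response is the mean reconstructed energy divided by the beam energy, and the resolution is the relative width, which can then be compared between test-beam data and Geant4 predictions. The reconstructed energies below are Gaussian toys, not test-beam or simulation data.

        # Schematic energy response and resolution extraction per beam momentum.
        # Reconstructed energies are invented Gaussian toys, not test-beam data.
        import numpy as np

        rng = np.random.default_rng(2)
        for e_beam in (5.0, 20.0, 100.0):        # GeV/c, illustrative points
            # Toy stochastic resolution ~ 100%/sqrt(E) on top of a 95% response.
            e_rec = rng.normal(0.95 * e_beam, np.sqrt(e_beam), size=10000)
            response = e_rec.mean() / e_beam
            resolution = e_rec.std() / e_rec.mean()
            print(f"E = {e_beam:5.1f} GeV: response = {response:.3f}, "
                  f"sigma/E = {resolution:.3f}")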

  5. CMS Results of Grid-related activities using the early deployed LCG Implementations

    CERN Document Server

    Coviello, Tommaso; De Filippis, Nicola; Donvito, Giacinto; Maggi, Giorgio; Pierro, A; Bonacorsi, Daniele; Capiluppi, Paolo; Fanfani, Alessandra; Grandi, Claudio; Maroney, Owen; Nebrensky, H; Donno, Flavia; Jank, Werner; Sciabà, Andrea; Sinanis, Nick; Colling, David; Tallini, Hugh; MacEvoy, Barry C; Wang, Shaowen; Kaiser, Joseph; Osman, Asif; Charlot, Claude; Semenjouk, I; Biasotto, Massimo; Fantinel, Sergio; Corvo, Marco; Fanzago, Federica; Mazzucato, Mirco; Verlato, Marco; Go, Apollo; Khan Chia Ming; Andreozzi, S; Cavalli, A; Ciaschini, V; Ghiselli, A; Italiano, A; Spataro, F; Vistoli, C; Tortone, G

    2004-01-01

    The CMS Experiment is defining its Computing Model and is experimenting with and testing the new distributed features offered by many Grid Projects. This report describes the use by CMS of the early-deployed systems of LCG (LCG-0 and LCG-1). Most of the features used and discussed here came from the EU-implemented middleware, even if some of the tested capabilities were in common with the US-developed middleware. This report describes the simulation of about 2 million CMS detector events, which were generated as part of the official CMS Data Challenge 04 (Pre-Challenge-Production). The simulations were done on a CMS-dedicated testbed (CMS-LCG-0), where an ad-hoc modified version of the LCG-0 middleware was deployed and where the CMS Experiment had complete control, and on the official early LCG delivered system (with the LCG-1 version). Modifications to the CMS simulation tools for event production were studied and achieved, together with necessary adaptations of the middleware services. Bilateral feedback (betwee...

  6. The ATLAS Tier-0: Overview and operational experience

    International Nuclear Information System (INIS)

    Elsing, Markus; Goossens, Luc; Nairz, Armin; Negri, Guido

    2010-01-01

    Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for prompt processing of the raw data coming from the online DAQ system, to archive the raw and derived data on tape, to register the data with the relevant catalogues and to distribute them to the associated Tier-1 centers. The Tier-0 is already fully functional. It has been successfully participating in all cosmic and commissioning data taking since May 2007, and was ramped up to its foreseen full size, performance and throughput for the cosmic (and short single-beam) run periods between July and October 2008. Data and work flows for collision data taking were exercised in several 'Full Dress Rehearsals' (FDRs) in the course of 2008. The transition from an expert to a shifter-based system was successfully established in July 2008. This article will give an overview of the Tier-0 system, its data and work flows, and operations model. It will review the operational experience gained in cosmic, commissioning, and FDR exercises during the past year. And it will give an outlook on planned developments and the evolution of the system towards first collision data taking expected now in late Autumn 2009.

  7. The ATLAS Tier-0: Overview and operational experience

    Science.gov (United States)

    Elsing, Markus; Goossens, Luc; Nairz, Armin; Negri, Guido

    2010-04-01

    Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for prompt processing of the raw data coming from the online DAQ system, to archive the raw and derived data on tape, to register the data with the relevant catalogues and to distribute them to the associated Tier-1 centers. The Tier-0 is already fully functional. It has been successfully participating in all cosmic and commissioning data taking since May 2007, and was ramped up to its foreseen full size, performance and throughput for the cosmic (and short single-beam) run periods between July and October 2008. Data and work flows for collision data taking were exercised in several "Full Dress Rehearsals" (FDRs) in the course of 2008. The transition from an expert to a shifter-based system was successfully established in July 2008. This article will give an overview of the Tier-0 system, its data and work flows, and operations model. It will review the operational experience gained in cosmic, commissioning, and FDR exercises during the past year. And it will give an outlook on planned developments and the evolution of the system towards first collision data taking expected now in late Autumn 2009.

  8. The Impact of Payment System Design on Tiering Incentives

    OpenAIRE

    Robert Arculus; Jennifer Hancock; Greg Moran

    2012-01-01

    Tiering occurs when an institution does not participate directly in the central payment system but instead settles its payments through an agent. A high level of tiering can be a significant issue for payment system regulators because of the increased credit and concentration risk. This paper explores the impact of payment system design on institutions' incentives to tier using simulation analysis. Some evidence is found to support the hypothesis that the liquidity-saving mechanisms in Austra...

  9. Last crystals for the CMS chandelier

    CERN Multimedia

    2008-01-01

    In March, the last crystals for CMS’s electromagnetic calorimeter arrived from Russia and China. Like dedicated jewellers crafting an immense chandelier, the CMS ECAL collaborators are working extremely hard to install all the crystals before the start-up of the LHC. One of the last CMS end-cap crystals, complete with identification bar code. Lead tungstate crystals mounted onto one section of the CMS ECAL end caps. Nearly 10 years after the first production crystal arrived at CERN in September 1998, the very last shipment has arrived. These final crystals will be used to complete the end-caps of the electromagnetic calorimeter (ECAL) at CMS. All in all, there are more than 75,000 crystals in the ECAL. The huge quantity of CMS lead tungstate crystals used in the ECAL corresponds to the highest volume ever produced for a single experiment. The excellent quality of the crystals, both in ter...

  10. Exclusive Production at CMS

    CERN Document Server

    Walczak, Marek

    2016-01-01

    I briefly introduce so-called central exclusive production. I mainly focus on example analyses that have been performed in the CMS experiment at CERN. I conclude with ideas and perspectives for future work during Run 2 of the LHC, paying special attention to ultraperipheral collisions.

  11. Multiple tier fuel cycle studies for waste transmutation

    International Nuclear Information System (INIS)

    Hill, R.N.; Taiwo, T.A.; Stillman, J.A.; Graziano, D.J.; Bennett, D.R.; Trellue, H.; Todosow, M.; Halsey, W.G.; Baxter, A.

    2002-01-01

    As part of the U.S. Department of Energy Advanced Accelerator Applications Program, a systems study was conducted to evaluate the transmutation performance of advanced fuel cycle strategies. Three primary fuel cycle strategies were evaluated: dual-tier systems with plutonium separation, dual-tier systems without plutonium separation, and single-tier systems without plutonium separation. For each case, the system mass flow and TRU consumption were evaluated in detail. Furthermore, the loss of materials in fuel processing was tracked, including the generation of new waste streams. Based on these results, the system performance was evaluated with respect to several key transmutation parameters, including TRU inventory reduction, radiotoxicity, and support ratio. The importance of clean fuel processing (∼0.1% losses) and the inclusion of a final-tier fast-spectrum system are demonstrated. With these two features, all scenarios capably reduce the TRU and plutonium waste content, significantly reducing the radiotoxicity; however, a significant infrastructure (at least 1/10 of the total nuclear capacity) is required for the dedicated transmutation system.
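
    The quoted processing-loss figure is easiest to appreciate with a back-of-the-envelope calculation; the pass counts below are assumed purely for illustration and are not taken from the study.

        # Cumulative fraction of material diverted to waste streams after repeated
        # reprocessing passes, for the ~0.1% per-pass loss rate quoted above.
        loss_per_pass = 1e-3
        for n_passes in (1, 10, 30):
            fraction_lost = 1.0 - (1.0 - loss_per_pass) ** n_passes
            print(f"{n_passes:2d} passes -> {fraction_lost:.3%} of the material lost to waste")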

  12. CMS ready for winding up

    CERN Multimedia

    2003-01-01

    At the end of October, the last lengths of conductor for the CMS superconducting solenoid were produced. This marks the successful completion of another large sub-project of the CMS Magnet, following the completion of the Yoke last year (see Bulletin 43/2002).

  13. New computer system for the Japan Tier-2 center

    CERN Multimedia

    Hiroyuki Matsunaga

    2007-01-01

    The ICEPP (International Center for Elementary Particle Physics) of the University of Tokyo has been operating an LCG Tier-2 center dedicated to the ATLAS experiment, and is about to switch over to the newly installed production system. The system will be of great help to the exciting physics analyses of the coming years. The new computer system includes brand-new blade servers, RAID disk arrays, a tape library system and Ethernet switches. The blade server is the DELL PowerEdge 1955, which contains two Intel dual-core Xeon (WoodCrest) CPUs running at 3 GHz; a total of 650 servers will be used as compute nodes. Each of the RAID arrays is configured as RAID-6 with 16 Serial ATA HDDs. The equipment as well as the cooling system is placed in a new large computer room, and both are hooked up to UPS (uninterruptible power supply) units for stable operation. As a whole, the system has been built with a redundant configuration in a cost-effective way. The next major upgrade will take place in thre...
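
    Worked arithmetic from the figures quoted above; note that the per-disk capacity is not given in the record, so the value used below is an assumption for illustration only.

        # Total compute cores and per-array RAID-6 capacity from the quoted figures.
        servers = 650
        cpus_per_server = 2            # two dual-core Xeon (WoodCrest) CPUs per blade
        cores_per_cpu = 2
        print("compute cores:", servers * cpus_per_server * cores_per_cpu)   # 2600

        disks_per_array = 16           # RAID-6 with 16 Serial ATA HDDs
        assumed_disk_tb = 0.5          # assumed 500 GB per disk (not stated in the record)
        print("per array:", disks_per_array * assumed_disk_tb, "TB raw,",
              (disks_per_array - 2) * assumed_disk_tb, "TB usable")  # RAID-6 uses two disks' worth of parity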

  14. CMS Comic Book

    CERN Document Server

    Gill, Karl Aaron

    2006-01-01

    Titled "CMS Particle Hunter," this colorful comic book style brochure explains to young budding scientists and science enthusiasts in colorful animation how the CMS detector was made, its main parts, and what scientists hope to find using this complex tool. Book invites young students to get involved in particle physics themselves to join the adventure. Written by Dave Barney and Aline Guevera. Layout and drawings by Eric Paiharey and Frederic Vignaux. Available in English, French, German, Italian, Spanish and Portuguese. Year Produced: 2006. Update: September 2013.

  15. Comparison of intrapulmonary and systemic pharmacokinetics of colistin methanesulfonate (CMS) and colistin after aerosol delivery and intravenous administration of CMS in critically ill patients.

    Science.gov (United States)

    Boisson, Matthieu; Jacobs, Matthieu; Grégoire, Nicolas; Gobin, Patrice; Marchand, Sandrine; Couet, William; Mimoz, Olivier

    2014-12-01

    Colistin is an old antibiotic that has recently seen a considerable renewal of interest for the treatment of pulmonary infections due to multidrug-resistant Gram-negative bacteria. Nebulization seems to be a promising route of administration, but colistin is administered as an inactive prodrug, colistin methanesulfonate (CMS), and differences in the intrapulmonary concentrations of the active moiety as a function of the route of administration in critically ill patients have not been precisely documented. In this study, CMS and colistin concentrations were measured on two separate occasions within the plasma and epithelial lining fluid (ELF) of critically ill patients (n = 12) who had received 2 million international units (MIU) of CMS by aerosol delivery and then by intravenous administration. The pharmacokinetic analysis was conducted using a population approach and completed by pharmacokinetic-pharmacodynamic (PK-PD) modeling and simulations. The ELF colistin concentrations varied considerably (9.53 to 1,137 mg/liter) and were much higher than those in plasma (0.15 to 0.73 mg/liter) after aerosol delivery, but not after intravenous administration of CMS. Following CMS aerosol delivery, typically 9% of the CMS dose reached the ELF, and only 1.4% was presystemically converted into colistin. The PK-PD analysis concluded that antimicrobial efficacy was much higher after CMS aerosol delivery than after intravenous administration. These new data seem to support the use of aerosol delivery of CMS for the treatment of pulmonary infections in critical care patients. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  16. CMS Centres Worldwide - a New Collaborative Infrastructure

    International Nuclear Information System (INIS)

    Taylor, Lucas

    2011-01-01

    The CMS Experiment at the LHC has established a network of more than fifty inter-connected 'CMS Centres' at CERN and in institutes in the Americas, Asia, Australasia, and Europe. These facilities are used by people doing CMS detector and computing grid operations, remote shifts, data quality monitoring and analysis, as well as education and outreach. We present the computing, software, and collaborative tools and videoconferencing systems. These include permanently running 'telepresence' video links (hardware-based H.323, EVO and Vidyo), Webcasts, and generic Web tools such as CMS-TV for broadcasting live monitoring and outreach information. Being Web-based and experiment-independent, these systems could easily be extended to other organizations. We describe the experiences of using CMS Centres Worldwide in the CMS data-taking operations as well as for major media events with several hundred TV channels, radio stations, and many more press journalists simultaneously around the world.

  17. 75 FR 7426 - Tier 2 Light-Duty Vehicle and Light-Duty Truck Emission Standards and Gasoline Sulfur Control...

    Science.gov (United States)

    2010-02-19

    ... 2060-AI23; 2060-AQ12. Tier 2 Light-Duty Vehicle and Light-Duty Truck Emission Standards and Gasoline Sulfur Control... The rulemaking also required oil refiners to limit the sulfur content of the gasoline they produce. Sulfur in gasoline has a detrimental impact on catalyst performance, and the sulfur requirements have...

  18. CBC2: A CMS microstrip readout ASIC with logic for track-trigger modules at HL-LHC

    Energy Technology Data Exchange (ETDEWEB)

    Hall, G., E-mail: g.hall@imperial.ac.uk [Blackett Laboratory, Imperial College, London SW7 2AZ (United Kingdom); Pesaresi, M.; Raymond, M. [Blackett Laboratory, Imperial College, London SW7 2AZ (United Kingdom); Braga, D.; Jones, L.; Murray, P.; Prydderch, M. [Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 OQX (United Kingdom); Abbaneo, D.; Blanchot, G.; Honma, A.; Kovacs, M.; Vasey, F. [CERN, CH-1211, Geneva (Switzerland)

    2014-11-21

    The CBC2 is the latest version of the CMS Binary Chip ASIC for readout of the upgraded CMS Tracker at the High Luminosity LHC. It is designed in 130 nm CMOS with 254 input channels and will be bump-bonded to a substrate to which sensors will be wire-bonded. The CBC2 is designed to instrument double layer modules, consisting of two overlaid silicon microstrip sensors with aligned microstrips, in the outer tracker. It incorporates logic to identify L1 trigger primitives in the form of “stubs”: high transverse-momentum track candidates which are identified within the low momentum background by selecting correlated hits between two closely separated microstrip sensors. The first prototype modules have been assembled. The performance of the chip in recent laboratory tests is briefly reported and the status of module construction described.
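
    The stub logic described above keeps a hit pair only if the hit in the second (correlation) sensor lies within a programmable window around the seed hit, which is what rejects low transverse-momentum tracks. The sketch below is a simplified illustration of that correlation, not the CBC2 firmware; the channel numbers and window width are arbitrary.

        # Simplified stub finding: promote a seed-sensor hit to a stub only if a hit on
        # the correlation sensor lies within a small channel window (illustrative values).
        def find_stubs(seed_hits, correlation_hits, window=3):
            stubs = []
            correlation_set = set(correlation_hits)
            for seed in seed_hits:
                # High-pT tracks bend little between the two closely spaced sensors,
                # so the matching hit stays close to the seed channel.
                matches = [c for c in range(seed - window, seed + window + 1) if c in correlation_set]
                if matches:
                    stubs.append((seed, matches[0]))
            return stubs

        print(find_stubs(seed_hits=[10, 120, 200], correlation_hits=[11, 150, 198]))
        # [(10, 11), (200, 198)]; the 120/150 pair is too far apart (low pT) and is rejected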

  19. The CMS Outer Tracker for HL-LHC

    CERN Document Server

    Dierlamm, Alexander Hermann

    2018-01-01

    The LHC is planning an upgrade program, which will bring the luminosity to about $5-7\\times10^{34}$~cm$^{-2}$s$^{-1}$ in 2026, with a goal of an integrated luminosity of 3000 fb$^{-1}$ by the end of 2037. This High Luminosity LHC scenario, HL-LHC, will require a preparation program of the LHC detectors known as Phase-2 Upgrade. The current CMS Tracker is already running beyond design specifications and will not be able to cope with the HL-LHC radiation conditions. CMS will need a completely new Tracker in order to fully exploit the highly demanding operating conditions and the delivered luminosity. The new Outer Tracker system is designed to provide robust tracking as well as Level-1 trigger capabilities using closely spaced modules composed of silicon macro-pixel and/or strip sensors. Research and development activities are ongoing to explore options and develop module components and designs for the HL-LHC environment. The design choices for the CMS Outer Tracker Upgrade are discussed along with some highlig...

  20. Readiness of the ATLAS Spanish Federated Tier-2 for the Physics Analysis of the early collision events at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, E; Amoros, G; Fassi, F; Fernandez, A; Gonzalez, S; Kaci, M; Lamas, A; Salt, J; Sanchez, J [Instituto de Fisica Corpuscular (IFIC) (centro mixto CSIC - University Valencia), E-46071 Valencia (Spain); Nadal, J; Borrego, C; Campos, M; Pacheco, A [Institut de Fisica d' Altes Energies (IFAE) Facultat de Ciencies UAB, E-08193 Bellaterra, Barcelona (Spain); Pardo, J; Del Cano, L; Peso, J Del; Fernandez, P; March, L; Munoz, L [Universidad Autonoma de Madrid (UAM) Dpto. de Fisica Teorica, 28049 Madrid (Spain); Espinal, X, E-mail: elena.oliver@ific.uv.e [Port d' Informacio CientIfica (PIC) Campus UAB Edifici D E-08193 Bellaterra, Barcelona (Spain)

    2010-04-01

    In this contribution an evaluation of the readiness parameters of the Spanish ATLAS Federated Tier-2 is presented, in view of the ATLAS data taking expected to start by the end of 2009. Special attention is paid to Physics Analysis from different points of view: Data Management, Simulated Event Production and Distributed Analysis Tests. Several use cases of Distributed Analysis on GRID infrastructures and of local interactive analysis in non-Grid farms are provided, in order to evaluate the interoperability between both environments and to compare their performance. The prototypes for local computing infrastructures for data analysis are described. Moreover, information about local analysis facilities, called Tier-3s, is given.

  1. The CMS Beam Halo Monitor Electronics

    CERN Document Server

    AUTHOR|(CDS)2080684; Fabbri, F.; Grassi, T.; Hughes, E.; Mans, J.; Montanari, A.; Orfanelli, S.; Rusack, R.; Torromeo, G.; Stickland, D.P.; Stifter, K.

    2016-01-01

    The CMS Beam Halo Monitor was successfully installed in the CMS cavern during LHC Long Shutdown 1 to measure the machine-induced background for LHC Run II. The system is based on 40 detector units composed of synthetic quartz Cherenkov radiators coupled to fast photomultiplier tubes. The readout electronics chain uses many components developed for the Phase 1 upgrade of the CMS Hadronic Calorimeter electronics, with dedicated firmware and readout adapted to the beam monitoring requirements. The PMT signal is digitized by a charge-integrating ASIC (QIE10), providing both the signal rise time, with a few ns resolution, and the charge integrated over one bunch crossing. The back-end electronics uses microTCA technology and receives data via a high-speed 5 Gbps asynchronous link. It records histograms with sub-bunch-crossing timing resolution and is read out via IPbus using the newly designed CMS data acquisition for non-event-based data. The data is processed in real time and published to CMS and the LHC, providi...
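
    A toy sketch of the per-bunch-crossing histogramming described above: each digitized hit carries a bunch-crossing number and a fine (sub-bunch-crossing) time bin, and the back end accumulates occupancy keyed by both. The threshold, bin counts and hit list are invented for illustration and are not BHM parameters.

        # Toy accumulation of occupancy histograms keyed by (bunch crossing, sub-BX time bin).
        from collections import Counter

        N_BX = 3564          # bunch-crossing slots per LHC orbit
        SUB_BINS = 4         # assumed number of sub-BX timing bins
        histogram = Counter()

        def fill(bx, sub_bin, charge, threshold=10.0):
            # Count hits above an assumed charge threshold.
            if charge > threshold and 0 <= bx < N_BX and 0 <= sub_bin < SUB_BINS:
                histogram[(bx, sub_bin)] += 1

        for bx, sub_bin, charge in [(1, 0, 25.0), (1, 0, 8.0), (2000, 3, 42.0)]:
            fill(bx, sub_bin, charge)
        print(histogram)     # Counter({(1, 0): 1, (2000, 3): 1})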

  2. 75 FR 57958 - Solicitation of Written Comments on Draft Tier 2 Strategies/Modules for Inclusion in the “HHS...

    Science.gov (United States)

    2010-09-23

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES. Solicitation of Written Comments on Draft Tier 2 Strategies/Modules for Inclusion in the "HHS Action Plan to Prevent Healthcare-Associated Infections". AGENCY: Department of Health and Human Services, Office of the Assistant Secretary for Health, Office of...

  3. The construction of the phase 1 upgrade of the CMS pixel detector

    CERN Document Server

    Weber, Hannsjorg Artur

    2017-01-01

    The innermost layers of the original CMS tracker were built out of pixel detectors arranged in three barrel layers and two forward disks in each endcap. The original CMS detector was designed for the nominal instantaneous LHC luminosity of $1\times10^{34}\,\text{cm}^{-2}\text{s}^{-1}$. Under the conditions expected in the coming years, with the instantaneous luminosity increasing by a factor of two, the CMS pixel detector would have suffered a dynamic inefficiency caused by data losses due to buffer overflows. For this reason the CMS collaboration installed a replacement pixel detector during the recent extended end-of-year shutdown. The phase-1 upgrade of the CMS pixel detector will operate at high efficiency at an instantaneous luminosity of $2\times10^{34}\,\text{cm}^{-2}\text{s}^{-1}$ with increased detector acceptance and additional redundancy for the tracking, while at the same time reducing the material budget. These goals are achieved using a new read-out chip and modified powering and rea...
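
    The "dynamic inefficiency" mentioned above can be illustrated with a deliberately simplified toy model: hits queue in a fixed-depth readout buffer and are drained at a constant rate, so doubling the hit rate overflows the buffer and data are lost. The buffer depth, rates and durations below are invented for the illustration and are not parameters of the pixel read-out chip.

        # Toy model of buffer-overflow data loss ("dynamic inefficiency"); all numbers are invented.
        def loss_fraction(hit_rate_per_bx, buffer_depth=32, readout_per_bx=1.0, n_bx=100_000):
            occupancy = lost = produced = 0.0
            for _ in range(n_bx):
                produced += hit_rate_per_bx
                occupancy += hit_rate_per_bx                       # hits enter the buffer
                if occupancy > buffer_depth:                       # buffer full: excess hits are dropped
                    lost += occupancy - buffer_depth
                    occupancy = buffer_depth
                occupancy = max(0.0, occupancy - readout_per_bx)   # readout drains the buffer
            return lost / produced

        for rate in (0.8, 1.6):   # doubling the luminosity roughly doubles the hit rate
            print(f"hit rate {rate}/BX -> loss fraction {loss_fraction(rate):.1%}")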

  4. Changes of ticagrelor formulary tiers in the USA: targeting private insurance providers away from government-funded plans.

    Science.gov (United States)

    Serebruany, Victor L; Dinicolantonio, James J

    2013-01-01

    Ticagrelor (Brilinta®) is a new oral reversible antiplatelet agent approved by the FDA in July 2011 based on the results of the PLATO (Platelet Inhibition and Patient Outcomes) trial. However, despite very favorable and broad indications, the current clinical utilization of ticagrelor is woefully small. We aimed to compare ticagrelor formulary tiers for major private (n = 8) and government-funded (n = 4) insurance providers for 2012-2013. Over the last year, ticagrelor placement improved, becoming a preferred drug (from Tier 3 in 2012 to Tier 2 in 2013) for Medco, moving from Tier 4 (with a prior approval requirement) to Tier 3 (no prior approval) for the United Health Care Private Plan and achieving Tier 3 status for Apex in 2013. In contrast, ticagrelor placement did not improve for New York Medicaid, retaining Tier 3 status. In addition, many Medicare Part D formularies have significantly worse coverage than most private plans. For example, Humana Medicare Part D has Tier 3 status requiring step therapy and quantity limits, SilverScript (CVS Caremark) Part D is Tier 3 and the American Association of Retired Persons (United Health Care) Medicare Part D is Tier 4 requiring prior approval. Ticagrelor formulary placement is significantly better for most private providers than for government-funded plans, which may possibly be due to the selective targeting of private insurance providers and the simultaneous avoidance of government-funded plans. © 2013 S. Karger AG, Basel.

  5. Hadron correlations in CMS

    CERN Document Server

    Maguire, Charles Felix

    2012-01-01

    The measurements of the anisotropic flow of single particles and particle pairs have provided some of the most compelling evidence for the creation of a strongly interacting quark-gluon plasma (sQGP) in relativistic heavy ion collisions, first at RHIC, and more recently at the LHC. Using PbPb collision data taken in the 2010 and 2011 heavy ion runs at the LHC, the CMS experiment has investigated a broad scope of these flow phenomena. The $v_2$ elliptic flow coefficient has been extracted with four different methods to cross-check contributions from initial state fluctuations and non-flow correlations. The measurements of the $v_2$ elliptic anisotropy have been extended to a transverse momentum of 60 GeV/c, which will enable the placement of new quantitative constraints on parton energy loss models as a function of path length in the sQGP medium. Additionally, for the first time at the LHC, the CMS experiment has extracted precise elliptic anisotropy coefficients for the neutral $\\pi$ meson ($\\pi^0$) in the c...
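
    The elliptic anisotropy referred to above is conventionally defined through the azimuthal particle distribution dN/dφ ∝ 1 + 2 v2 cos[2(φ − Ψ)]. The toy check below samples particles with a known v2 and recovers it as the average of cos[2(φ − Ψ)], assuming the reaction plane Ψ is known exactly; the resolution, non-flow and fluctuation corrections used in the real cross-checked methods are deliberately left out.

        # Toy check of dN/dphi ∝ 1 + 2*v2*cos(2*(phi - psi)): sample with a known v2
        # (accept-reject) and recover it as <cos(2*(phi - psi))>. Psi is assumed known.
        import math, random

        def sample_phi(v2, psi, rng):
            while True:
                phi = rng.uniform(0.0, 2.0 * math.pi)
                if rng.uniform(0.0, 1.0 + 2.0 * v2) < 1.0 + 2.0 * v2 * math.cos(2.0 * (phi - psi)):
                    return phi

        rng = random.Random(42)
        true_v2, psi = 0.08, 0.3
        phis = [sample_phi(true_v2, psi, rng) for _ in range(200_000)]
        measured_v2 = sum(math.cos(2.0 * (p - psi)) for p in phis) / len(phis)
        print(f"true v2 = {true_v2}, recovered v2 = {measured_v2:.3f}")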

  6. 42 CFR 422.510 - Termination of contract by CMS.

    Science.gov (United States)

    2010-10-01

    42 CFR Part 422 (Medicare Advantage Organizations), § 422.510 Termination of contract by CMS: (a) Termination by CMS. CMS may at any time terminate a contract if CMS determines that the MA organization meets any of the following: (1...

  7. The Atlantic Coast of Maryland, Sediment Budget Update: Tier 2, Assateague Island and Ocean City Inlet

    Science.gov (United States)

    2016-06-01

    Tier 2, Assateague Island and Ocean City Inlet, by Ernest R. Smith, Joseph C. Reed, and Ian L. Delwiche. PURPOSE: This Coastal and Hydraulics ... of the Atlantic Ocean shoreline within the U.S. Army Corps of Engineers (USACE) Baltimore District's Area of Responsibility, which for coastal ... 111 – Rivers and Harbors Act), the navigational structures at the Ocean City Inlet, and a number of Federally authorized channels (Figure 1).

  8. 42 CFR 423.509 - Termination of contract by CMS.

    Science.gov (United States)

    2010-10-01

    42 CFR Part 423 (Contracts with Part D plan sponsors), § 423.509 Termination of contract by CMS: (a) Termination by CMS. CMS may at any time terminate a contract if CMS determines that the Part D plan sponsor meets any of the...

  9. Explaining CMS lepton excesses with supersymmetry

    CERN Multimedia

    CERN. Geneva; Prof. Allanach, Benjamin

    2014-01-01

    1) Kostas Theofilatos will give an introduction to the CMS results. 2) Ben Allanach: Several CMS analyses involving di-leptons have recently reported small 2.4-2.8 sigma local excesses: nothing to get too excited about, but worth keeping an eye on nonetheless. In particular, a search in the $lljj\,p_T^\text{miss}$ channel, a search for $W_R$ in the $lljj$ channel, and a di-leptoquark search in the $lljj$ and $ljj\,p_T^\text{miss}$ channels have all yielded small excesses. We interpret the first excess in the MSSM, showing that the interpretation is viable in terms of other constraints, despite having squark masses of only around 1 TeV. We can explain the last three excesses with a single R-parity violating coupling that predicts a non-zero contribution to the neutrinoless double beta decay rate.

  10. 42 CFR 422.210 - Assurances to CMS.

    Science.gov (United States)

    2010-10-01

    42 CFR Part 422 (Medicare Advantage Program, Relationships With Providers), § 422.210 Assurances to CMS: (a) Assurances to CMS. Each organization will provide assurance satisfactory to the Secretary that the...

  11. Evolution of online algorithms in ATLAS and CMS in Run2

    CERN Document Server

    Tomei, Thiago R. F. P.

    2017-01-01

    The Large Hadron Collider has entered a new era in Run 2, with a centre-of-mass energy of 13 TeV and an instantaneous luminosity reaching $\mathcal{L}_\textrm{inst} = 1.4\times10^{34}$ cm$^{-2}$ s$^{-1}$ for pp collisions. In order to cope with these harsher conditions, the ATLAS and CMS collaborations have improved their online selection infrastructure to keep a high efficiency for important physics processes -- like W, Z and Higgs bosons in their leptonic and diphoton modes -- whilst keeping the size of the data stream compatible with the available bandwidth and disk resources. In this note, we describe some of the trigger improvements implemented for Run 2, including algorithms for the selection of electrons, photons, muons and hadronic final states.
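
    An order-of-magnitude illustration of why the online selection has to be tight at this luminosity: the rate of a process is the instantaneous luminosity times its cross-section. The W boson cross-section used below is an assumed, rounded value for illustration and is not taken from the note.

        # Rate = instantaneous luminosity x cross-section (order-of-magnitude illustration).
        L_inst = 1.4e34              # cm^-2 s^-1, as quoted in the abstract
        sigma_W_to_lnu_barn = 20e-9  # assumed ~20 nb per lepton flavour (illustrative value)
        barn_to_cm2 = 1e-24
        rate_hz = L_inst * sigma_W_to_lnu_barn * barn_to_cm2
        print(f"W -> l nu production rate: ~{rate_hz:.0f} Hz, before any trigger selection")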

  12. Neutral Pion Rejection at L2 using the CMS Endcap Preshower

    CERN Document Server

    Kyriakis, Aristotelis; Loukas, Demetrios; Mousa, Jehad; Seez, Christopher

    1999-01-01

    Applying a general Artificial Neural Network approach, we have examined the possibility of neutral pion rejection at the Level-2 trigger stage (L2), principally using information from the CMS Endcap Preshower. We have studied both pion/photon and pion/electron discrimination. For L2 the hope was to achieve some useful pion/electron discrimination at high electron efficiency. For a single electron/photon efficiency of 95%, the results show that no useful rejection of neutral pions against electrons/photons can be obtained using this algorithm alone, due to the presence of tracker material. If the efficiency is lowered, or information from the tracker is available, the rejection can increase dramatically. This will be the case for off-line analyses.
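
    A minimal sketch of the general technique (a small neural-network classifier separating two particle hypotheses from a few discriminating variables). The two "shower shape" features and the data below are synthetic stand-ins, not the CMS Preshower observables used in the note, and scikit-learn's MLPClassifier merely stands in for the network described there.

        # Synthetic two-class example of neural-network discrimination; features are invented.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        n = 2000
        photons = rng.normal(loc=[1.0, 0.2], scale=[0.3, 0.1], size=(n, 2))  # narrow, single-cluster-like
        pions = rng.normal(loc=[1.6, 0.5], scale=[0.4, 0.2], size=(n, 2))    # wider, two-photon-like

        X = np.vstack([photons, pions])
        y = np.array([0] * n + [1] * n)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=1)
        clf.fit(X_train, y_train)
        print(f"toy pion/photon separation accuracy: {clf.score(X_test, y_test):.2f}")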

  13. Which Tier? Effects of Linear Assessment and Student Characteristics on GCSE Entry Decisions

    Science.gov (United States)

    Vitello, Sylvia; Crawford, Cara

    2018-01-01

    In England, students obtain General Certificate of Secondary Education (GCSE) qualifications, typically at age 16. Certain GCSEs are tiered; students take either higher-level (higher tier) or lower-level (foundation tier) exams, which may have different educational, career and psychological consequences. In particular, foundation tier entry, if…

  14. Monitoring and optimization of ATLAS Tier 2 center GoeGrid

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219638; Quadt, Arnulf; Yahyapour, Ramin

    The demand on computational and storage resources is growing along with the amount of information that needs to be processed and preserved. In order to ease the provisioning of digital services to the growing number of consumers, more and more distributed computing systems and platforms are being developed and employed. The building blocks of this distributed computing infrastructure are individual computing centres, such as the Worldwide LHC Computing Grid Tier-2 centre GoeGrid. The main motivation of this thesis was the optimization of GoeGrid performance through efficient monitoring. The goal was achieved by analysing the GoeGrid monitoring information. The data analysis approach was based on the adaptive-network-based fuzzy inference system (ANFIS) and machine learning algorithms such as the linear Support Vector Machine (SVM). The main object of the research was the digital service, since the availability, reliability and serviceability of the computing platform can be measured according to the const...
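
    A minimal sketch of the SVM part of the approach described above: classify site-monitoring snapshots as healthy or degraded from a couple of service metrics. The metrics, value ranges and labels are invented for the illustration; the thesis's actual ANFIS/SVM pipeline is not reproduced here.

        # Toy linear SVM on invented monitoring metrics (job failure rate, CPU load).
        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(7)
        n = 500
        failure_rate = np.concatenate([rng.uniform(0.0, 0.1, n),    # healthy snapshots
                                       rng.uniform(0.2, 0.6, n)])   # degraded snapshots
        cpu_load = np.concatenate([rng.uniform(0.2, 0.8, n),
                                   rng.uniform(0.7, 1.0, n)])
        X = np.column_stack([failure_rate, cpu_load])
        y = np.concatenate([np.ones(n), np.zeros(n)])                # 1 = healthy, 0 = degraded

        clf = LinearSVC(dual=False).fit(X, y)
        print("training accuracy:", clf.score(X, y))
        print("prediction for failure_rate=0.05, load=0.5:", clf.predict([[0.05, 0.5]]))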

  15. Soft QCD at CMS and ATLAS

    CERN Document Server

    Starovoitov, Pavel; The ATLAS collaboration

    2018-01-01

    A short overview of recent soft QCD results from the ATLAS and CMS collaborations is presented. The inelastic cross-section measurement by CMS at 13 TeV is summarised. The contribution of diffractive processes to the very forward photon spectra studied by ATLAS and LHCf is discussed. The ATLAS measurement of the exclusive two-photon production of muon pairs is presented and compared to previous ATLAS and CMS results.

  16. 23 CFR 500.109 - CMS.

    Science.gov (United States)

    2010-04-01

    23 CFR Part 500 (Federal Highway Administration, Department of Transportation; Transportation Infrastructure Management; Management and Monitoring Systems), § 500.109 CMS: (a) For purposes of this part, congestion means the level at...

  17. CMS - The Compact Muon Solenoid

    CERN Multimedia

    Bergauer, T; Waltenberger, W; Kratschmer, I; Treberer-treberspurg, W; Escalante del valle, A; Andreeva, I; Innocente, V; Camporesi, T; Malgeri, L; Marchioro, A; Moneta, L; Weingarten, W; Beni, N T; Cimmino, A; Rovere, M; Jafari, A; Lange, C G; Vartak, A P; Gilbert, A J; Pantaleo, F; Reis, T; Cucciati, G; Alipour tehrani, N; Stakia, A; Fallavollita, F; Pizzichemi, M; Rauco, G; Zhang, S; Hu, T; Yazgan, E; Zhang, H; Thomas-wilsker, J; Reithler, H K V; Philipps, B; Merschmeyer, M K; Heidemann, C A; Mukherjee, S; Geenen, H; Kuessel, Y; Weingarten, S; Gallo, E; Schwanenberger, C; Walsh bastos rangel, R; Beernaert, K S; De wit, A M; Elwood, A C; Connor, P; Lelek, A A; Wichmann, K H; Myronenko, V; Kovalchuk, N; Bein, S L; Dreyer, T; Scharf, C; Quast, G; Dierlamm, A H; Barth, C; Mol, X; Kudella, S; Schafer, D; Schimassek, R R; Matorras, F; Calderon tazon, A; Garcia ferrero, J; Bercher, M J; Sirois, Y; Callier, S; Depasse, P; Laktineh, I B; Grenier, G; Boudoul, G; Heath, G P; Hartley, D A; Quinton, S; Tomalin, I R; Harder, K; Francis, V B; Thea, A; Zhang, Z; Loukas, D; Hernath, S T; Naskar, K; Colaleo, A; Maggi, G P; Maggi, M; Loddo, F; Calabria, C; Campanini, R; Cuffiani, M; D'antone, I; Grandi, C; Navarria, F; Guiducci, L; Battilana, C; Tosi, N; Gulmini, M; Meola, S; Longo, E; Meridiani, P; Marzocchi, B; Schizzi, A; Cho, S; Ha, S; Kim, D H; Kim, G N; Md halid, M F B; Yusli, M N B; Dominik, W M; Bunkowski, K; Olszewski, M; Byszuk, A P; Rasteiro da silva, J C; Varela, J; Leong, Q; Sulimov, V; Vorobyev, A; Denisov, A; Murzin, V; Egorov, A; Lukyanenko, S; Postoev, V; Pashenkov, A; Solovey, A; Rubakov, V; Troitsky, S; Kirpichnikov, D; Lychkovskaya, N; Safronov, G; Fedotov, A; Toms, M; Barniakov, M; Olimov, K; Fazilov, M; Umaraliev, A; Dumanoglu, I; Bakirci, N M; Dozen, C; Demiroglu, Z S; Isik, C; Zeyrek, M; Yalvac, M; Ozkorucuklu, S; Chang, Y; Dolgopolov, A; Gottschalk, E E; Maeshima, K; Heavey, A E; Kramer, T; Kwan, S W L; Taylor, L; Tkaczyk, S M; Mokhov, N; Marraffino, J M; Mrenna, S; Yarba, V; Banerjee, B; Elvira, V D; Gray, L A; Holzman, B; Dagenhart, W; Canepa, A; Ryu, S C; Strobbe, N C; Adelman-mc carthy, J K; Contescu, A C; Andre, J O; Wu, J; Dittmer, S J; Bucinskaite, I; Zhang, J; Karchin, P E; Thapa, P; Zaleski, S G; Gran, J L; Wang, S; Zilizi, G; Raics, P P; Bhardwaj, A; Naimuddin, M; Smiljkovic, N; Stojanovic, M; Brandao malbouisson, H; De oliveira martins, C P; Tonelli manganote, E J; Medina jaime, M; Thiel, M; Laurila, S H; Graehling, P; Tonon, N; Blekman, F; Postiau, N J S; Leroux, P J; Van remortel, N; Janssen, X J; Di croce, D; Aleksandrov, A; Shopova, M F; Dogra, S M; Shinoda, A A; Arce, P; Daniel, M; Navarrete marin, J J; Redondo fernandez, I; Guirao elias, A; Cela ruiz, J M; Lottin, J; Gras, P; Kircher, F; Levesy, B; Payn, A; Guilloux, F; Negro, G; Leloup, C; Pasztor, G; Panwar, L; Bhatnagar, V; Bruzzi, M; Sciortino, S; Starodubtsev, O; Azzi, P; Conti, E; Lacaprara, S; Margoni, M; Rossin, R; Tosi, M; Fano', L; Lucaroni, A; Biino, C; Dattola, D; Rotondo, F; Ballestrero, A; Obertino, M M; Kiani, M B; Paterno, A; Magana villalba, R; Ramirez garcia, M; Reyes almanza, R; Gorski, M; Wrochna, G; Bluj, M J; Zarubin, A; Nozdrin, M; Ladygin, V; Malakhov, A; Golunov, A; Skrypnik, A; Sotnikov, A; Evdokimov, N; Tiurin, V; Lokhtin, I; Ershov, A; Platonova, M; Tyurin, N; Slabospitskii, S; Talov, V; Belikov, N; Ryazanov, A; Chao, Y; Tsai, J; Foord, A; Wood, D R; Orimoto, T J; Luckey, P D; Jaditz, S H; Stephans, G S; Darlea, G L; Di matteo, L; Maier, B; Trovato, M; Bhattacharya, S; Roberts, J B; 
Padley, P B; Tu, Z; Rorie, J T; Clarida, W J; Tiras, E; Khristenko, V; Cerizza, G; Pieri, M; Krutelyov, V; Saiz santos, M D; Klein, D S; Derdzinski, M; Murray, M J; Gray, J A; Minafra, N; Castle, J R; Bowen, J L S; Buterbaugh, K; Morrow, S I; Bunn, J; Newman, H; Spiropulu, M; Balcas, J; Lawhorn, J M; Thomas, S D; Panwalkar, S M; Kyriacou, S; Xie, Z; Ojalvo, I R; Salfeld-nebgen, J; Laird, E M; Wimpenny, S J; Yates, B R; Perry, T M; Schiber, C C; Diaz, D C; Uniyal, R; Mesic, B; Kolosova, M; Snow, G R; Lundstedt, C; Johnston, D; Zvada, M; Weitzel, D J; Damgov, J V; Cowden, C S; Giammanco, A; David, P N Y; Zobec, J; Cabrera jamoulle, J B; Daubie, E; Nash, J A; Evans, L; Hall, G; Nikitenko, A; Ryan, M J; Huffman, M A J; Styliaris, E; Evangelou, I; Sharan, M K; Roy, A; Rout, P K; Kalbhor, P N; Bagliesi, G; Braccini, P L; Ligabue, F; Boccali, T; Rizzi, A; Minuti, M; Oh, S; Kim, J; Sen, S; Boz evinay, M; Xiao, M; Hung, W T; Jensen, F O; Mulholland, T D; Kumar, A; Jones, M; Roozbahani, B H; Neu, C C; Thacker, H B; Wolfe, E M; Jabeen, S; Gilmore, J; Winer, B L; Rush, C J; Luo, W; Alimena, J M; Ko, W; Lander, R; Broadley, W H; Shi, M; Furic, I K; Low, J F; Bortignon, P; Alexander, J P; Zientek, M E; Conway, J V; Padilla fuentes, Y L; Florent, A H; Bravo, C B; Crotty, I M; Wenman, D L; Sarangi, T R; Ghabrous larrea, C; Gomber, B; Smith, N C; Long, K D; Roberts, J M; Hildreth, M D; Jessop, C P; Karmgard, D J; Loukas, N; Ferbel, T; Zielinski, M A; Cooper, S I; Jung, A; Van driessche, W G M; Fagot, A; Vermassen, B; Valchkova-georgieva, F K; Dimitrov, D S; Roumenin, T S; Podrasky, V; Re, V; Zucca, S; De canio, F; Romaniuk, R; Teodorescu, L; Krofcheck, D; Anderson, N G; Bell, S T; Salazar ibarguen, H A; Kudinov, V; Onishchenko, S; Naujikas, R; Lyubynskiy, V; Sobolev, O; Khan, M S; Adeel-ur-rehman, A; Hassan, Q U; Ali, I; Kreuzer, P K; Robson, A J; Gadrat, S G; Ivanov, A; Mendis, D; Da silva di calafiori, D R; Zeinali, M; Behnamian, H; Moroni, L; Malvezzi, S; Park, I; Pastika, N J; Oropeza barrera, C; Elkhateeb, E A A; Elmetenawee, W; Mohammed, Y; Tayel, E S A; Mcclatchey, R H; Kovacs, Z; Munir, K; Odeh, M; Magradze, E; Oikashvili, B; Shingade, P; Shukla, R A; Banerjee, S; Kumar, S; Jashal, B K; Grzanka, L; Adam, W; Ero, J; Fabjan, C; Jeitler, M; Rad, N K; Auffray hillemanns, E; Charkiewicz, A; Fartoukh, S; Garcia de enterria adan, D; Girone, M; Glege, F; Loos, R; Mannelli, M; Meijers, F; Sciaba, A; Meschi, E; Ricci, D; Petrucciani, G; Daguin, J; Vazquez velez, C; Karavakis, E; Nourbakhsh, S; Rabady, D S; Ceresa, D; Karacheban, O; Beguin, M; Kilminster, B J; Ke, Z; Meng, X; Zhang, Y; Tao, J; Romeo, F; Spiezia, A; Cheng, L; Zhukov, V; Feld, L W; Autermann, C T; Fischer, R; Erdweg, S; Kress, T H; Dziwok, C; Hansen, K; Schoerner-sadenius, T M; Marfin, I; Keaveney, J M; Diez pardos, C; Muhl, C W; Asawatangtrakuldee, C; Defranchis, M M; Asmuss, J P; Poehlsen, J A; Stober, F M H; Vormwald, B R; Kripas, V; Gonzalez vazquez, D; Kurz, S T; Niemeyer, C; Rieger, J O; Borovkov, A; Shvetsov, I; Sieber, G; Caspart, R; Iqbal, M A; Sander, O; Metzler, M B; Ardila perez, L E; Ruiz jimeno, A; Fernandez garcia, M; Scodellaro, L; Gonzalez sanchez, J F; Curras rivera, E; Semeniouk, I; Ochando, C; Bedjidian, M; Giraud, N A; Mathez, H; Zoccarato, Y D; Ianigro, J; Galbit, G C; Flacher, H U; Shepherd-themistocleous, C H; French, M J; Hill, J A; Jones, L L; Markou, A; Bencze, G L; Mishra, D K; Netrakanti, P K; Jha, V; Chudasama, R; Katta, S; Venditti, R; Cristella, L; Braibant-giacomelli, S; Dallavalle, G; Fabbri, F; Codispoti, G; 
Borgonovi, L; Caponero, M A; Berti, L; Fienga, F; Dafinei, I; Organtini, G; Del re, D; Pettinacci, V; Park, S K; Lee, K S; Kang, M; Kim, B; Park, H K; Kong, D J; Lee, S; Pak, S I; Zolkapli, Z B; Konecki, M A; Walczak, M B; Bargassa, P; Viegas guerreiro leonardo, N T; Levchenko, P; Orishchin, E; Suvorov, V; Uvarov, L; Gruzinskii, N; Pristavka, A; Kozlov, V; Radovskaia, A; Solovey, A; Kolosov, V; Vlassov, E; Parygin, P; Tumasyan, A; Topakli, H; Boran, F; Akin, I V; Oz, C; Gulmez, E; Atakisi, I O; Bakken, J A; Govi, G M; Lewis, J D; Shaw, T M; Bailleux, D; Stoynev, S E; Sexton-kennedy, E M; Huang, C; Lincoln, D W; Roser, R; Ito, A; Adams, M R; Apanasevich, L; Varelas, N; Sandoval gonzalez, I D; Hangal, D A; Yoo, J H; Ovcharova, A K; Bradmiller-feld, J W; Amin, N J; Miller, M P; Patterson, A S; Sharma, R K; Santoro, A; Lassila-perini, K M; Tuominiemi, J; Voutilainen, M A; Wu, X; Gross, L O; Le bihan, A; Fuks, B; Kieffer, E; Pansanel, J; Jansova, M; D'hondt, J; Abuzeid hassan, S A; Bilin, B; Beghin, D; Soultanov, G; Vankov, I D; Konstantinov, P B; Marra da silva, J; De souza santos, A; Arruda ramalho, L; Renker, D; Erdmann, W; Molinero vela, A; Fernandez bedoya, C; Bachiller perea, I; Chipaux, R; Faure, J D; Hamel de monchenault, G; Mandjavidze, I; Rander, J; Ferri, F; Leroy, C L; Machet, M; Nagy, M I; Felcini, M; Kaur, S; Saizu, M A; Civinini, C; Latino, G; Checchia, P; Ronchese, P; Vanini, S; Fantinel, S; Cecchi, C; Leonardi, R; Arneodo, M; Ruspa, M; Pacher, L; Rabadan trejo, R I; Mondragon herrera, C A; Golutvin, I; Zhiltsov, V; Melnichenko, I; Mjavia, D; Cheremukhin, A; Zubarev, E; Kalagin, V; Alexakhin, V; Mitsyn, V; Shulha, S; Vishnevskiy, A; Gavrilenko, M; Boos, E E; Obraztsov, S; Dubinin, M; Demiyanov, A; Dudko, L; Azhgirey, I; Chikilev, O; Turchanovich, L; Rurua, L; Hou, G W; Wang, M; Chang, P; Kumar, A; Liau, J; Lazic, D; Lawson, P D; Zou, D; Wisecarver, A L; Sumorok, K C; Klute, M; Lee, Y; Iiyama, Y; Velicanu, D A; Mc ginn, C; Abercrombie, D R; Tatar, K; Hahn, K A; Nussbaum, T W; Southwick, D C; Cittolin, S; Martin, T; Welke, C V; Wilson, G W; Baringer, P S; Sanders, S J; Mcbrayer, W J; Engh, D J; Sheldon, P D; Gurrola, A; Velkovska, J A; Melo, A M; Padeken, K O; Johnson, C N; Ni, H; Montalvo, R J; Heindl, M D; Ferguson, T; Vogel, H; Mudholkar, T K; Elmer, P; Tully, C; Luo, J; Hanson, G; Jandir, P S; Askew, A W; Kadija, K; Dimovasili, E; Attikis, A; Vasilas, I; Chen, G; Bockelman, B P; Kamalieddin, R; Barrefors, B P; Farleigh, B S; Akchurin, N; Demin, P; Pavlov, B A; Petkov, P S; Goranova, R; Tomsa, J; Lyons, L; Buchmuller, O; Magnan, A; Laner ogilvy, C; Di maria, R; Dutta, S; Thakur, S; Bettarini, S; Bosi, F; Giassi, A; Massa, M; Calzolari, F; Androsov, K; Lee, H; Komurcu, Y; Kim, D W; Wagner, S R; Perloff, A S; Rappoccio, S R; Harrington, C I; Baden, A R; Ricci-tam, F; Kamon, T; Rathjens, D; Pernie, L; Larsen, D; Ji, W; Pellett, D E; Smith, J; Acosta, D E; Field, R D; Yelton, J M; Kotov, K; Wang, S; Smolenski, K W; Mc coll, N W; Dasu, S R; Lanaro, A; Cook, J R; Gorski, T A; Buchanan, J J; Jain, S; Musienko, Y; Taroni, S; Meng, H; Siddireddy, P K; Xie, W; Rott, C; Benedetti, D; Everett, A A; Schulte, J; Mahakud, B; Ryckbosch, D D E; Crucy, S; Cornelis, T G M; Betev, B; Dimov, H; Raykov, P A; Uzunova, D G; Mihovski, K T; Mechinsky, V; Makarenko, V; Yermak, D; Yevarouskaya, U; Salvini, P; Manghisoni, M; Fontaine, J; Agram, J; Palinkas, J; Reid, I D; Bell, A J; Clyne, M N; Zavodchikov, S; Veelken, C; Kannike, K; Dewanjee, R K; Skarupelov, V; Piibeleht, M; Ehataht, K; Chang, S; 
Kuchinski, P; Bukauskas, L; Zhmurin, P; Kamal, A; Mubarak, M; Asghar, M I; Ahmad, N; Muhammad, S; Mansoor-ul-islam, S; Saddique, A; Waqas, M; Irshad, A; Veckalns, V; Toda, S; Choi, Y K; Yu, I; Hwang, C; Yumiceva, F X; Djambazov, L; Meinhard, M T; Becker, R J U; Grimm, O; Wallny, R S; Tavolaro, V R; Eller, P D; Meister, D; Paktinat mehdiabadi, S; Chenarani, S; Dini, P; Leporini, R; Dinardo, M; Brianza, L; Hakkarainen, U T; Parashar, N; Malik, S; Ramirez vargas, J E; Dharmaratna, W; Noh, S; Uang, A J; Kim, J H; Lee, J S H; Jeon, D; You, Z; Assran, Y; Elgammal, S; Ellithi kamel, A Y; Nayak, A K; Dash, D; Koca, N; Kothekar, K K; Karnam, R; Patil, M R; Torims, T; Hoch, M; Schieck, J R; Valentan, M; Spitzbart, D; Lucio alves, F L; Blanchot, G; Gill, K A; Orsini, L; Petrilli, A; Sharma, A; Tsirou, A; Deile, M; Hudson, D A; Gutleber, J; Folch, R; Tropea, P; Cerminara, G; Vichoudis, P; Pardo, T; Sabba, H; Selvaggi, M; Verzetti, M; Ngadiuba, J; Kornmayer, A; Niedziela, J; Aarrestad, T K; He, K; Li, B; Huang, Q; Pierschel, G; Esch, T; Louis, D; Quast, T; Nowack, A S; Beissel, F; Borras, K A; Mankel, R; Pitzl, D D; Kemp, Y; Meyer, A B; Krucker, D B; Mittag, G; Burgmeier, A; Lenz, T; Arndt, T M; Pflitsch, S K; Danilov, V; Dominguez damiani, D; Cardini, A; Kogler, R; Troendle, D C; Aggleton, R C; Lange, J; Reimers, A C; De boer, W; Weber, M M; Theel, A; Mozer, M U; Wayand, S; Harrendorf, M A; Harbaum, T R; El morabit, K; Marco, J; Rodrigo, T; Vila alvarez, I; Lopez garcia, A; Rembser, J; Mathieu, A; Kurca, T; Mirabito, L; Verdier, P; Combaret, C; Newbold, D M; Smith, V; Brooke, J J; Metson, S; Coughlan, J A; Torbet, M J; Belyaev, A; Kyriakis, A; Horvath, D; Veszpremi, V; Topkar, A; Selvaggi-maggi, G; Nuzzo, S V; Romano, F; Marangelli, B; Spinoso, V; Lezki, S; Castro, A; Rovelli, T; Brigliadori, L; Bianco, S; Fabbricatore, P; Farinon, S; Musenich, R; Ferro, F; Gozzelino, A; Buontempo, S; Casolaro, P; Paramatti, R; Vignati, M; Belforte, S; Hong, B; Roh, Y J; Choi, S Y; Son, D; Yang, Y C; Butanov, K; Kotobi, A; Krolikowski, J; Pozniak, K T; Misiura, M; Seixas, J C; Jain, A K; Nemallapudi, M V; Shchipunov, L; Lebedev, V; Skorobogatov, V; Klimenko, K; Terkulov, A; Kirakosyan, M; Azarkin, M; Krasnikov, N; Stepanova, L; Gavrilov, V; Spiridonov, A; Semenov, S; Krokhotin, A; Rusinov, V; Chistov, R; Zhemchugov, E; Nishonov, M; Hmayakyan, G; Khachatryan, V; Ozdemir, K; Ozturk, S; Tali, B; Kangal, E E; Turkcapar, S; Zorbakir, I S; Aliyev, T; Demir, D A; Liu, W; Apollinari, G; Osborne, I; Genser, K; Lammel, S; Whitmore, J; Mommsen, R; Apyan, A; Badgett jr, W F; Atac, M; Joshi, U P; Vidal, R A; Giacchetti, L A; Merkel, P; Johnson, M E; Soha, A L; Tran, N V; Rapsevicius, V; Hirschauer, J F; Voirin, E; Altunay cheung, M; Liu, T T; Mosquera morales, J F; Gerber, C E; Chen, X; Clarke, C J; Stuart, D D; Franco sevilla, M; Marsh, B J; Shivpuri, R K; Adzic, P; De almeida pacheco, M A; Matos figueiredo, D; De queiroz franco, A B; Melo de almeida, M; Bernardo valadao, R; Linden, T; Tuovinen, E V; Jarvinen, T T; Siikonen, H J L; Ripp-baudot, I L; Richer, M; Vander velde, C; Randle-conde, A S; Dong, J; Van haevermaet, H J H; Dimitrov, L; De paula bianchini, C; Muller cascadan, A; Kotlinski, B; Alcaraz maestre, J; Josa mutuberria, M I; Gonzalez lopez, O; Marin munoz, J; Puerta pelayo, J; Rodriguez vazquez, J J; Denegri, D; Jarry, P; Rosowsky, A; Tsipolitis, G; Grunewald, M; Singh, J; Chawla, R; Gupta, R; Giordano, F; Parrini, G; Russo, L; Dosselli, U; Mazzucato, M; Verlato, M; Wulzer, A; Traldi, S; Bortolato, D; Biasini, M; 
Bilei, G M; Movileanu, M; Santocchia, A; Mariani, V; Mariotti, C; Monaco, V; Accomando, E; Pinna angioni, G L; Boimska, B; Yuldashev, B; Kamenev, A; Belotelov, I; Filozova, I; Bunin, P; Golovanov, G; Gribushin, A; Kaminskiy, A; Volkov, P; Vorotnikov, G; Bityukov, S; Kryshkin, V; Petrov, V; Volkov, A; Troshin, S; Levin, A; Sumaneev, O V; Kalinin, A; Kulagin, N; Mandrik, P; Lin, C; Kovalskyi, D; Demiragli, Z; Hsu, D G; Michlin, B A; Fountain, M; Debbins, P A; Durgut, S; Tadel, M; White, A; Molina-perez, J A; Dost, J M; Boren, S S; Klein, A; Bhatti, A; Mesropian, C; Wilkinson, R; Xie, S; Marlow, D R; Jindal, P; Palmer, C A; Narain, M; Berry, E A; Usai, E; Korotkov, A L; Strossman, W; Kennedy, E; Burt, K F; Saha, A; Starodumov, A; Mavromanolakis, G; Nicolaou, C; Mao, Y; Claes, D R; Sill, A F; Lamichhane, K; Antunovic, Z; Piotrzkowski, K; Bondu, O; Dimitrov, A A; Albajar, C; Torga teixeira, R F; Iles, G M; Borg, J; Cripps, N A; Uchida, K; Fayer, S W; Wright, J C; Kokkas, P; Manthos, N; Bhattacharya, S; Nandan, S; Bellazzini, R; Carboni, A; Arezzini, S; Yang, U K; Roskes, J; Corcodilos, L A; Nauenberg, U; Johnson, D; Kharchilava, A; Mc lean, C A; Cox, B B; Hirosky, R J; Cummings, G E; Skuja, A; Bard, R L; Mueller, R D; Puigh, D M; Chertok, M B; Calderon de la barca sanchez, M; Gunion, J F; Vogt, R; Conway, R T; Gearhart, J W; Band, R E; Kukral, O; Korytov, A; Fu, Y; Madorsky, A; Brinkerhoff, A W; Rinkevicius, A; Mcdermott, K P; Tao, Z; Bellis, M; Gronberg, J B; Hauser, J; Bachtis, M; Kubic, J; Nash, W A; Greenler, L S; Caillol, C S; Woods, N; De jesus pardal vicente, M; Trembath-reichert, S; Singovski, A; Wolf, M; Smith, G N; Bucci, R E; Reinsvold, A C; Rupprecht, N C; Taus, R A; Buccilli, A T; Kroeger, R S; Reidy, J J; Barnes, V E; Kress, M K; Thieman, J R; Mccartin, J W; Gul, M; Khvastunov, I; Georgiev, I G; Biselli, A; Berzano, U; Vai, I; Braghieri, A; Cardoso lopes, R; Cuevas maestro, J F; Palencia cortezon, J E; Reucroft, S; Bheesette, S; Butler, A; Ivanov, A; Mizelkov, M; Kashpydai, O; Kim, J; Janulis, M; Zemleris, V; Ali, A; Ahmed, U S; Awan, M I; Lee, J; Dissertori, G; Pauss, F; Musella, P; Gomez espinosa, T A; Pigazzini, S; Vesterbacka olsson, M L; Klijnsma, T; Khakzad, M; Arfaei, H; Bonesini, M; Ciriolo, V; Gomez moreno, B; Linares garcia, L E; Bae, S; Ko, B; Hatakeyama, K; Mahmoud mohammed, M A; Aly, A; Ahmad, A; Bahinipati, S; Kim, T J; Goh, J; Fang, W; Kemularia, O; Melkadze, A; Sharma, S; Rane, A P; Ayala amaya, E R; Akle, B; Palomo pinto, F R; Madlener, T; Spanring, M; Pol, M E; Alda junior, W L; Rodrigues simoes moreira, P; Kloukinas, K; Onnela, A T O; Passardi, G; Perez, E F; Postema, W J; Petagna, P; Gaddi, A; Vieira de castro ferreira da silva, P M; Gastal, M; Dabrowski, A E; Mersi, S; Bianco, M; Alandes pradillo, M; Chen, Y; Kieseler, J; Bawej, T A; Roedne, L T; Hugo, G; Baschiera, M; Loiseau, T L; Donato, S; Wang, Y; Liu, Z; Yue, X; Teng, C; Wang, Z; Liao, H; Zhang, X; Chen, Y; Ahmad, M; Zhao, H; Qi, F; Li, B; Raupach, F; Tonutti, M P; Radziej, M; Fluegge, G; Haj ahmad, W; Kunsken, A; Roy, D M; Ziemons, T; Behrens, U; Henschel, H M; Kleinwort, C H; Dammann, D J; Van onsem, G P; Contreras campana, C J; Penno, M; Haranko, M; Singh, A; Turkot, O; Scheurer, V; Schleper, P; Schwandt, J; Schwarz, D; Hartmann, F; Muller, T; Mallows, S; Funke, D; Baselga bacardit, M; Mitra, S; Martinez rivero, C; Moya martin, D; Hidalgo villena, S; Chazin quero, B; Mine, P M G; Poilleux, P R; Salerno, R A; Martin perez, C; Amendola, C; Caponetto, L; Pugnere, D Y; Giraud, Y A N; Sordini, V; Grimes, M 
A; Burns, D J P; Harper, S J; Hajdu, C; Vami, T A; Dutta, D; Pant, L M; Kumar, V; Sarin, P; Di florio, A; Giacomelli, P; Montanari, A; Siroli, G P; Robutti, E; Maron, G; Fabozzi, F; Galati, G; Rovelli, C I; Della ricca, G; Vazzoler, F; Oh, Y D; Park, W H; Kwon, K H; Choi, J; Kalinowski, A; Santos amaral, L C; Di francesco, A; Velichko, G; Smirnov, I; Kozlov, V; Vavilov, S; Kirianov, A; Dremin, I; Rusakov, S; Nechitaylo, V; Kovzelev, A; Toropin, A; Anisimov, A; Barniakov, A; Gasanov, E; Eskut, E; Polatoz, A; Karaman, T; Zorbilmez, C; Bat, A; Tok, U G; Dag, H; Kaya, O; Tekten, S; Lin, T; Abdoulline, S; Bauerdick, L; Denisov, D; Gingu, C; Green, D; Nahn, S C; Prokofiev, O E; Strait, J B; Los, S; Bowden, M; Tanenbaum, W M; Guo, Y; Dykstra, D W; Mason, D A; Chlebana, F; Cooper, W E; Anderson, J M K; Weber, H A; Christian, D C; Alyari, M F; Diaz cruz, J A; Wang, M; Berry, D R; Siehl, K F; Poudyal, N; Kyre, S A; Mullin, S D; George, C; Szabo, Z; Malhotra, S; Milosevic, J; Prado da silva, W L; Martins mundim filho, L; Sanchez rosas, L J; Karimaki, V J; Toor, S Z; Karadzhinova, A G; Maazouzi, C; Van hove, P J; Hosselet, J; Goorens, R; Brun, H L; Kalsi, A K; Wang, Q; Vannerom, D; Antchev, G; Iaydjiev, P S; Mitev, G M; Amadio, G; Langenegger, U; Kaestli, H C; Meier, B; Fernandez ramos, J P; Besancon, M; Fabbro, B; Ganjour, S; Locci, E; Gevin, O; Suranyi, O; Bansal, S; Kumar, R; Sharma, S; Tuve, C N; Tricomi, A; Meschini, M; Paoletti, S; Sguazzoni, G; Gori, V; Carlin, R; Dal corso, F; Simonetto, F; Torassa, E; Zumerle, G; Borsato, E; Gonella, F; Dorigo, A; Larsen, H; Peroni, C; Trapani, P P; Buarque franzosi, D; Tamponi, U; Mejia guisao, J A; Zepeda fernandez, C H; Szleper, M; Zalewski, P D; Rybka, D K; Gorbunov, I; Perelygin, V; Kozlov, G; Semenov, R; Khvedelidze, A; Kodolova, O; Klyukhin, V; Snigirev, A; Kryukov, A; Ukhanov, M; Sobol, A; Bayshev, I; Akimenko, S; Lei, Y; Chang, Y; Kao, K; Lin, S; Yu, P; Li, Y; Fantasia, C; Gastler, D E; Paus, C; Wyslouch, B; Knuteson, B O; Azzolini, V; Goncharov, M; Brandt, S; Chen, Z; Liu, J; Chen, Z; Freed, S M; Zhang, A; Nachtman, J M; Penzo, A; Akgun, U; Yi, K; Rahmat, R; Gandrajula, R P; Dilsiz, K; Letts, J; Sharma, V A; Holzner, A G; Wuerthwein, F K; Padhi, S; Suarez silva, I M; Tapia takaki, D J; Stringer, R W; Kropivnitskaya, A; Majumder, D; Al-bataineh, A A; Gabella, W E; Johns, W E; Mora, J G; Shi, Z; Ciesielski, R A; Bornheim, A; Bartz, E H; Doroshenko, J; Halkiadakis, E; Salur, S; Robles, J A; Gray, R C; Saka, H; Osherson, M A; Hughes, E J; Paulini, M G; Russ, J S; Jang, D W; Piroue, P; Olsen, J D; Sands, W; Saluja, S; Cutts, D; Hadley, M H; Hakala, J C; Clare, R; Luthra, A P; Paneva, M I; Seto, R K; Mac intire, D A; Tentindo, S; Wahl, H; Chokheli, D; Micanovic, S; Razis, P; Mousa, J; Pantelides, S; Qian, S; Li, W; Stieger, B B; Lee, S W; Michotte de welle, D; De favereau de jeneret, J; Bakhshiansohi, H; Krintiras, G; Caputo, C; Sabev, C; Batinkov, A I; Zenz, S C; Pesaresi, M F; Summers, S P; Saoulidou, N; Koraka, C K; Ghosh, S; Sikdar, A K; Castaldi, R; Dell'orso, R; Palmonari, F; Rolandi, L; Moggi, A; Fedi, G; Coscetti, S; Seo, S H; Cankocak, K; Cumalat, J P; Smith, J G; Iashvili, I; Gallo, S M; Parker, A M; Ledovskoy, A; Hung, P Q; Vaman, D; Goodell, J D; Gomez, J A; Celik, A; Luo, S; Hill, C S; Francis, B P; Tripathi, S M; Squires, M K; Thomson, J A; Brainerd, C; Tuli, S; Bourilkov, D; Mitselmakher, G; Patterson, J R; Kuznetsov, V Y; Tan, S M; Strohman, C R; Rebassoo, F O; Valouev, V; Zelepukin, S; Lusin, S; Vuosalo, C O U; Ruggles, T H; Rusack, R; 
Woodard, A E; Meng, F; Dev, N; Vishnevskiy, D; Cremaldi, L M; Oliveros tautiva, S J; Jones, T M; Wang, F; Zaganidis, N; Tytgat, M G; Fedorov, A; Korjik, M; Panov, V; Montagna, P; Vitulo, P; Traversi, G; Gonzalez caballero, I; Eysermans, J; Logatchev, O; Orlov, A; Tikhomirov, A; Kulikova, T; Strumia, A; Nam, S K; Soric, I; Padimanskas, M; Siddiqi, H M; Qazi, S F; Ahmad, M; Makouski, M; Chakaberia, I; Mitchell, T B; Baarmand, M; Hits, D; Theofilatos, K; Mohr, N; Jimenez estupinan, R; Micheli, F; Pata, J; Corrodi, S; Mohammadi najafabadi, M; Menasce, D L; Pedrini, D; Malberti, M; Linn, S L; Mesa, D; Tuuva, T; Carrillo montoya, C A; Roque romero, G A; Suwonjandee, N; Kim, H; Khalil ibrahim, S S; Mahrous mohamed kassem, A M; Trojman, L; Sarkar, U; Bhattacharya, S; Babaev, A; Okhotnikov, V; Nakad, Z S; Fruhwirth, R; Majerotto, W; Mikulec, I; Rohringer, H; Strauss, J; Krammer, N; Hartl, C; Pree, E; Rebello teles, P; Ball, A; Bialas, W; Brachet, S B; Gerwig, H; Lourenco, C; Mulders, M P; Vasey, F; Wilhelmsson, M; Dobson, M; Botta, C; Dunser, M F; Pol, A A; Suthakar, U; Takahashi, Y; De cosa, A; Hreus, T; Chen, G; Chen, H; Jiang, C; Yu, T; Klein, K; Schulz, J; Preuten, M; Millet, P N; Keller, H C; Pistone, C; Eckerlin, G; Jung, J; Mnich, J; Jansen, H; Wissing, C; Savitskyi, M; Eichhorn, T V; Harb, A; Botta, V; Martens, I; Knolle, J; Eren, E; Reichelt, O; Schutze, P J; Saibel, A; Schettler, H H; Schumann, S; Kutzner, V G; Husemann, U; Giffels, M; Akbiyik, M; Friese, R M; Baur, S S; Faltermann, N; Kuhn, E; Gottmann, A I D; Muller, D; Balzer, M N; Maier, S; Schnepf, M J; Wassmer, M; Renner, C W; Tcherniakhovski, D; Piedra gomez, J; Vilar cortabitarte, R; Trevisani, N; Boudry, V; Charlot, C P; Tran, T H; Thiant, F; Lethuillier, M M; Perries, S O; Popov, A; Morrissey, Q; Brummitt, A J; Bell, S J; Assiouras, P; Sikler, F; De palma, M; Fiore, L; Pompili, A; Marzocca, C; Errico, F; Soldani, E; Cavallo, F R; Rossi, A M; Torromeo, G; Masetti, G; Virgilio, S; Thyssen, F D M; Iorio, A O M; Montecchi, M; Santanastasio, F; Bulfon, C; Zanetti, A M; Casarsa, M; Han, D; Song, J; Ibrahim, Z A B; Faccioli, P; Gallinaro, M; Beirao da cruz e silva, C; Kuznetsova, E; Levchuk, L; Andreev, V; Toropin, A; Dermenev, A; Karpikov, I; Epshteyn, V; Uliyanov, A; Polikarpov, S; Markin, O; Cagil, A; Karapinar, G; Isildak, B; Yu, S; Banicz, K B; Cheung, H W K; Butler, J N; Quigg, D E; Hufnagel, D; Rakness, G L; Spalding, W J; Bhat, P; Kreis, B J; Jensen, H B; Chetluru, V; Albert, M; Hu, Z; Mishra, K; Vernieri, C; Larson, K E; Zejdl, P; Matulik, M; Cremonesi, M; Doualot, N; Ye, Z; Wu, Z; Geffert, P B; Dutta, V; Heller, R E; Dorsett, A L; Choudhary, B C; Arora, S; Ranjeet, R; Melo da costa, E; Torres da silva de araujo, F; Da silveira, G G; Alves coelho, E; Belchior batista das chagas, E; Buss, N H; Luukka, P R; Tuominen, E M; Havukainen, J J; Tigerstedt, U B S; Goerlach, U; Patois, Y; Collard, C; Mathieu, C; Lowette, S R J; Python, Q P; Moortgat, S; Vanlaer, P; De lentdecker, G W P; Rugovac, S; Tavernier, F F; Beaumont, W; Van de klundert, M; Vankov, P H; Verguilov, V Z; Hadjiiska, R M; De moraes gregores, E; Iope, R L; Ruiz vargas, J C; Barcala riveira, M J; Hernandez calama, J M; Oller, J C; Flix molina, J; Navarro tobar, A; Sastre alvaro, J; Redondo ferrero, D D; Titov, M; Bausson, P; Major, P; Bala, S; Dhingra, N; Kumari, P; Costa, S; Pelli, S; Meneguzzo, A T; Passaseo, M; Pegoraro, M; Montecassiano, F; Dorigo, T; Silvestrin, L; Del duca, V; Demaria, N; Ferrero, M I; Mussa, R; Cartiglia, N; Mazza, G; Maina, E; Dellacasa, G; 
Covarelli, R; Cotto, G; Sola, V; Monteil, E; Shchelina, K; Castilla-valdez, H; De la cruz burelo, E; Kazana, M; Gorbunov, N; Kosarev, I; Smirnov, V; Korenkov, V; Savina, M; Lanev, A; Semenyushkin, I; Kashunin, I; Krouglov, N; Markina, A; Bunichev, V; Zotov, N; Miagkov, I; Nazarova, E; Uzunyan, A; Riutin, R; Tsverava, N; Paganis, E; Chen, K; Lu, R; Psallidas, A; Gorodetzky, P P; Hazen, E S; Avetisyan, A; Richardson, C A; Busza, W; Roland, C E; Cali, I A; Marini, A C; Wang, T; Schmitt, M H; Geurts, F; Ecklund, K M; Repond, J O; Schmidt, I; George, N; Ingram, F D; Wetzel, J W; Ogul, H; Spanier, S M; Mrak tadel, A; Zevi della porta, G J; Maguire, C F; Janjam, R K; Chevtchenko, S; Zhu, R; Voicu, B R; Mao, J; Stone, R L; Schnetzer, S R; Nash, K C; Kunnawalkam elayavalli, R; Laflotte, I; Weinberg, M G; Mc cracken, M E; Kalogeropoulos, A; Raval, A H; Cooperstein, S B; Landsberg, G; Kwok, K H M; Ellison, J A; Gary, J W; Si, W; Hagopian, V; Hagopian, S L; Bertoldi, M; Brigljevic, V; Ptochos, F; Ather, M W; Konstantinou, S; Yang, D; Li, Q; Attebury, G; Siado castaneda, J E; Lemaitre, V; Caebergs, T P M; Litov, L B; Fernandez de troconiz, J; Colling, D J; Davies, G J; Raymond, D M; Virdee, T S; Bainbridge, R J; Lewis, P; Rose, A W; Bauer, D U; Sotiropoulos, S; Papadopoulos, I; Triantis, F; Aslanoglou, X; Majumdar, N; Devadula, S; Ciocci, M A; Messineo, A; Palla, F; Grippo, M T; Yu, G B; Willemse, T; Lamsa, J; Blumenfeld, B J; Maksimovic, P; Gritsan, A; Cocoros, A A; Arnold, P; Tonwar, S C; Eno, S C; Mignerey, A L C; Nabili, S; Dalchenko, M; Maghrbi, Y; Huang, T; Sheharyar, A; Durkin, L S; Wang, Z; Tos, K M; Kim, B J; Guo, Y; Ma, P; Rosenzweig, D J; Reeder, D D; Smith, W; Surkov, A; Mohapatra, A K; Maurisset, A; Mans, J M; Kubota, Y; Frahm, E J; Chatterjee, R M; Ruchti, R; Mc cauley, T P; Ivie, P A; Betchart, B A; Hindrichs, O H; Sultana, M; Henderson, C; Sanders, D; Summers, D; Perera, L; Miller, D H; Miyamoto, J; Peng, C; Zahariev, R Z; Peynekov, M M; Ratti, L; Ressegotti, M; Czellar, S; Molnar, J; Khan, A; Morton, A; Vischia, P; Erice cid, C F; Carpinteyro bernardino, S; Chmelev, D; Smetannikov, V; Hektor, A; Kadastik, M; Godinovic, N; Simelevicius, D; Alvi, O I; Hoorani, H U R; Shahzad, H; Shah, M A; Shoaib, M; Rao, M A S; Sidwell, R; Roettger, T J; Corkill, S; Lustermann, W; Roeser, U H; Backhaus, M; Perrin, G L; Naseri, M; Rapuano, F; Redaelli, N; Carbone, L; Spiga, F; Brivio, F; Monti, F; Markowitz, P E; Rodriguez, J L; Morelos pineda, A; Norberg, S R; Ryu, M S; Jeng, Y G; Esteban lallana, M C; Trabelsi, A; Dittmann, J R; Elsayed, E; Khan, Z A; Soomro, K; Janikashvili, M; Kapoor, A; Rastogi, A; Remnev, G; Hrubec, J; Wulz, C; Fichtinger, S K; Abbaneo, D; Janot, P; Racz, A; Roche, J; Ryjov, V; Sphicas, P; Treille, D; Wertelaers, P; Cure, B R; Fulcher, J R; Moortgat, F W; Bocci, A; Giordano, D; Hegeman, J G; Hegner, B; Gallrapp, C; Cepeda hermida, M L; Riahi, H; Chapon, E; Orfanelli, S; Guilbaud, M R J; Seidel, M; Merlin, J A; Heidegger, C; Schneider, M A; Robmann, P W; Salerno, D N; Galloni, C; Neutelings, I W; Shi, J; Li, J; Zhao, J; Pandoulas, D; Rauch, M P; Schael, S; Hoepfner, K; Weber, M K; Teyssier, D F; Thuer, S; Rieger, M; Albert, A; Muller, T; Sert, H; Lohmann, W F; Ntomari, E; Grohsjean, A J; Wen, Y; Ron alvarez, E; Hampe, J; Bin anuar, A A; Blobel, V; Mattig, S; Haller, J; Sonneveld, J M; Malara, A; Rabbertz, K H; Freund, B; Schell, D B; Savoiu, D; Geerebaert, Y; Becheva, E L; Nguyen, M A; Stahl leiton, A G; Magniette, F B; Fay, J; Gascon-shotkin, S M; Ille, B; Viret, S; Finco, L; 
Brown, R; Cockerill, D; Williams, T S; Markou, C; Anagnostou, G; Mohanty, A K; Creanza, D M; De robertis, G; Verwilligen, P O J; Perrotta, A; Fanfani, A; Ciocca, C; Ravera, F; Toniolo, N; Badoer, S; Paolucci, P; Khan, W A; Voevodina, E; De iorio, A; Cavallari, F; Bellini, F; Cossutti, F; La licata, C; Da rold, A; Lee, K; Go, Y; Park, J; Kim, M S; Wan abdullah, W; Toldaiev, O; Golovtcov, V; Oreshkin, V; Sosnov, D; Soroka, D; Gninenko, S; Pivovarov, G; Erofeeva, M; Pozdnyakov, I; Danilov, M; Tarkovskii, E; Chadeeva, M; Philippov, D; Bychkova, O; Kardapoltsev, L; Onengut, G; Cerci, S; Vergili, M; Dolek, F; Sever, R; Gamsizkan, H; Ocalan, K; Dogan, H; Kaya, M; Kuo, C; Chang, Y; Albrow, M G; Banerjee, S; Berryhill, J W; Chevenier, G; Freeman, J E; Green, C H; O'dell, V R; Wenzel, H; Lukhanin, G; Di luca, S; Spiegel, L G; Deptuch, G W; Ratnikova, N; Paterno, M F; Burkett, K A; Jones, C D; Klima, B; Fagan, D; Hasegawa, S; Thompson, R; Gecse, Z; Liu, M; Pedro, K J; Jindariani, S; Zimmerman, T; Skirvin, T M; Hofman, D J; Evdokimov, O; Jung, K E; Trauger, H C; Gouskos, L; Karancsi, J; Kumar, A; Garg, R B; Keshri, S; Nogima, H; Sznajder, A; Vilela pereira, A; Eerola, P A; Pekkanen, J T K; Guldmyr, J H; Gele, D; Charles, L; Bonnin, C; Bourgatte, G; De clercq, J T; Favart, L; Grebenyuk, A; Yang, Y; Allard, Y; Genchev, V I; Galli mercadante, P; Tomei fernandez, T R; Ahuja, S; Ingram, Q; Rohe, T V; Colino, N; Ferrando, A; Garcia-abia, P; Calvo alamillo, E; Goy lopez, S; Delgado peris, A; Alvarez fernandez, A; Couderc, F; Moudden, Y; Potenza, R; D'alessandro, R; Landi, G; Viliani, L; Bisello, D; Gasparini, F; Michelotto, M; Benettoni, M; Bellato, M A; Fanzago, F; De castro manzano, P; Mantovani, G; Menichelli, M; Passeri, D; Placidi, P; Manoni, E; Storchi, L; Cirio, R; Romero, A; Staiano, A; Pastrone, N; Solano, A M; Argiro, S; Bellan, R; Duran osuna, M C; Ershov, Y; Zamyatin, N; Palchik, V; Afanasyev, S; Nikonov, E; Miller, M; Baranov, A; Ivanov, V; Petrushanko, S; Perfilov, M; Eyyubova, G; Baskakov, A; Kachanov, V; Korablev, A; Bordanovskiy, A; Kepuladze, Z; Hsiung, Y B; Wu, S; Rankin, D S; Jacob, C J; Alverson, G; Hortiangtham, A; Roland, G M; Gomez ceballos retuerto, G; Innocenti, G M; Allen, B L; Baty, A A; Narayanan, S M; Hu, M; Bi, R; Sung, K K H; Gunter, T K; Bueghly, J D; Yepes stork, P P; Mestvirishvili, A; Miller, M J; Norbeck, J E; Snyder, C M; Branson, J G; Sfiligoi, I; Rogan, C S; Edwards-bruner, C R; Young, R W; Verweij, M; Goulianos, K; Galvez, P D; Zhu, K; Lapadatescu, V; Dutta, I; Somalwar, S V; Park, M; Kaplan, S M; Feld, D B; Vorobiev, I; Lange, D; Zuranski, A M; Mei, K; Knight iii, R R; Spencer, E; Hogan, J M; Syarif, R; Olmedo negrete, M A; Ghiasi shirazi, S; Erodotou, E; Ban, Y; Xue, Z; Kravchenko, I; Keller, J D; Knowlton, D P; Wigmans, M E J; Volobouev, I; Peltola, T H T; Kovac, M; Bruno, G L; Gregoire, G; Delaere, C; Bodlak, M; Della negra, M J; James, T O; Shtipliyski, A M; Tziaferi, E; Karageorgos, V W; Karasavvas, D; Fountas, K; Mukhopadhyay, S; Basti, A; Raffaelli, F; Spandre, G; Mazzoni, E; Manca, E; Mandorli, G; Yoo, H D; Aerts, A; Eminizer, N C; Amram, O; Stenson, K M; Ford, W T; Green, M L; Kellogg, R; Jeng, G; Kunkle, J M; Baron, O; Feng, Y; Wong, K; Toufique, Y; Sehgal, V; Breedon, R E; Cox, P T; Mulhearn, M J; Gerhard, R M; Taylor, D N; Konigsberg, J; Sperka, D M; Lo, K H; Carnes, A M; Quach, D M; Li, T; Andreev, V; Herve, L A M; Klabbers, P R; Svetek, A; Hussain, U; Evans, A C; Lannon, K P; Fedorov, S; Bodek, A; Demina, R; Khukhunaishvili, A; West, C A; Perez, C U; 
Godang, R; Meier, M; Neumeister, N; Gruchala, M M; Zagurski, K B; Prosolovich, V; Kuhn, J; Ratti, S P; Riccardi, C M; Vacchi, C; Szekely, G; Hobson, P R; Fernandez menendez, J; Rodriguez bouza, V; Butler, P; Pedraza morales, M I; Barakat, N; Sakharov, V; Lavrenov, P; Ahmed, I; Kim, T Y; Pac, M Y; Sculac, T; Gajdosik, T; Tamosiunas, K; Juodagalvis, A; Dudenas, V; Barannik, S; Bashir, A; Khan, F; Saeed, F; Khan, M T; Maravin, Y; Mohammadi, A; Noonan, D C; Saunders, M D; Dittmar, M; Donega, M; Perrozzi, L; Nageli, C; Dorfer, C; Zhu, D H; Spirig, Y A; Ruini, D; Alishahiha, M; Ardalan, F; Saramad, S; Mansouri, R; Eskandari tadavani, E; Ragazzi, S; Tabarelli de fatis, T; Govoni, P; Ghezzi, A; Stringhini, G; Sevilla moreno, A C; Smith, C J; Abdelalim, A A; Hassan, A F A; Swain, S K; Sahoo, D K; Carrera jarrin, E F; Chauhan, S; Munoz chavero, F; Ambrogi, F; Hensel, C; Alves, G A; Baechler, J; Christiansen, J; De roeck, A; Gayde, J; Hansen, M; Kienzle, W; Reynaud, S; Schwick, C; Troska, J; Zeuner, W D; Osborne, J A; Moll, M; Franzoni, G; Tinoco mendes, A D; Milenovic, P; Garai, Z; Bendavid, J L; Dupont, N A; Gulhan, D C; Daponte, V; Martinez turtos, R; Giuffredi, R; Rapacz, K J; Otiougova, P; Zhu, G; Leggat, D A; Kiesel, M K; Lipinski, M; Wallraff, W; Meyer, A; Pook, T; Pooth, O; Behnke, O; Eckstein, D; Fischer, D J; Garay garcia, J; Vagnerini, A; Klanner, R; Stadie, H; Perieanu, A; Benecke, A; Abbas, S M; Schroeder, M; Lobelle pardo, P; Chwalek, T; Heidecker, C; Floh, K M; Gomez, G; Cabrillo bartolome, I J; Orviz fernandez, P; Duarte campderros, J; Busson, P; Dobrzynski, L; Fontaine, G R R; Granier de cassagnac, R; Paganini, P R J; Arleo, F P; Balagura, V; Martin blanco, J; Ortona, G; Kucher, I; Contardo, D C; Lumb, N; Baulieu, G; Lagarde, F; Shchablo, K; Heath, H F; Kreczko, L; Clement, E J; Paramesvaran, S; Bologna, S; Bell, K W; Petyt, D A; Moretti, S; Durkin, T J; Daskalakis, G; Kataria, S K; Iaselli, G; Pugliese, G; My, S; Sharma, A; Abbiendi, G; Taneja, S; Benussi, L; Fabbri, F; Calvelli, V; Frizziero, E; Barone, L M; De notaristefani, F; D'imperio, G; Gobbo, B; Yusupov, H; Liew, C S; Zabolotny, W M; Sobolev, S; Gavrikov, Y; Kozlov, I; Golubev, N; Andreev, Y; Tlisov, D; Zaytsev, V; Stepennov, A; Popova, E; Kolchanova, A; Shtol, D; Sirunyan, A; Gokbulut, G; Kara, O; Damarseckin, S; Guler, A M; Ozpineci, A; Hayreter, A; Li, S; Gruenendahl, S; Yarba, J; Para, A; Ristori, L F; Rubinov, P M; Reichanadter, M A; Churin, I; Beretvas, A; Muzaffar, S M; Lykken, J D; Gutsche, O; Baldin, B; Uplegger, L A; Lei, C M; Wu, W; Derylo, G E; Ruschman, M K; Lipton, R J; Whitbeck, A J; Schmitt, R; Contreras pasuy, L C; Olsen, J T; Cavanaugh, R J; Betts, R R; Wang, H; Sturdy, J T; Gutierrez jr, A; Campagnari, C F; White, D T; Brewer, F D; Qu, H; Ranjan, K; Lalwani, K; Md, H; Shah, A H; Fonseca de souza, S; De jesus damiao, D; Revoredo, E A; Chinellato, J A; Amadei marques da costa, C; Lampen, P T; Wendland, L A; Brom, J; Andrea, J; Tavernier, S; Van doninck, W K; Van mulders, P K A; Clerbaux, B; Rougny, R; Rashevski, G D; Rodozov, M N; Padula, S; Bernardes, C A; Dias maciel, C; Deiters, K; Feichtinger, D; Wiederkehr, S A; Cerrada, M; Fouz iglesias, M; Senghi soares, M; Pasquetto, E; Ferry, S C; Georgette, Z; Malcles, J; Csanad, M; Lal, M K; Walia, G; Kaur, A; Ciulli, V; Lenzi, P; Zanetti, M; Costa, M; Dughera, G; Bartosik, N; Ramirez sanchez, G; Frueboes, T M; Karjavine, V; Skachkov, N; Litvinenko, A; Petrosyan, A; Teryaev, O; Trofimov, V; Makankin, A; Golunov, A; Savrin, V; Korotkikh, V; Vardanyan, I; Lukina, O; 
Belyaev, A; Korneeva, N; Petukhov, V; Skvortsov, V; Konstantinov, D; Efremov, V; Smirnov, N; Shiu, J; Chen, P; Rohlf, J; Sulak, L R; St john, J M; Morse, D M; Krajczar, K F; Mironov, C M; Niu, X; Wang, J; Charaf, O; Matveev, M; Eppley, G W; Mccliment, E R; Ozok, F; Bilki, B; Zieser, A J; Olivito, D J; Wood, J G; Hashemi, B T; Bean, A L; Wang, Q; Tuo, S; Xu, Q; Roberts, J W; Anderson, D J; Lath, A; Jacques, P; Sun, M; Andrews, M B; Svyatkovskiy, A; Hardenbrook, J R; Heintz, U; Lee, J; Wang, L; Prosper, H B; Adams, J R; Liu, S; Wang, D; Swanson, D; Thiltges, J F; Undleeb, S; Finger, M; Beuselinck, R; Rand, D T; Tapper, A D; Malik, S A; Lane, R C; Panagiotou, A; Diamantopoulou, M; Vourliotis, E; Mallios, S; Mondal, K; Bhattacharya, R; Bhowmik, D; Libby, J F; Azzurri, P; Foa, L; Tenchini, R; Verdini, P G; Ciampa, A; Radburn-smith, B C; Park, J; Swartz, M L; Sarica, U; Borcherding, F O; Barria, P; Goadhouse, S D; Xia, F; Joyce, M L; Belloni, A; Bouhali, O; Toback, D; Osipenkov, I L; Almes, G T; Walker, J W; Bylsma, B G; Lefeld, A J; Conway, J S; Flores, C S; Avery, P R; Terentyev, N; Barashko, V; Ryd, A P E; Tucker, J M; Heltsley, B K; Wittich, P; Riley, D S; Skinnari, L A; Chu, J Y; Ignatenko, M; Lindgren, M A; Saltzberg, D P; Peck, A N; Herve, A A M; Savin, A; Herndon, M F; Mason, W P; Martirosyan, S; Grahl, J; Hansen, P D; Saradhy, R; Mueller, C N; Planer, M D; Suh, I S; Hurtado anampa, K P; De barbaro, P J; Garcia-bellido alvarez de miranda, A A; Korjenevski, S K; Moolekamp, F E; Fallon, C T; Acosta castillo, J G; Gutay, L; Barker, A W; Gough, E; Poyraz, D; Verbeke, W L M; Beniozef, I S; Krasteva, R L; Winn, D R; Fenyvesi, A C; Makovec, A; Munro, C G; Sanchez cruz, S; Bernardino rodrigues, N A; Lokhovitskiy, A; Uribe estrada, C; Rebane, L; Racioppi, A; Kim, H; Kim, T; Puljak, I; Boyaryntsev, A; Saeed, M; Tanwir, S; Butt, U; Hussain, A; Nawaz, A; Khurshid, T; Imran, M; Sultan, A; Naeem, M; Kaadze, K; Modak, A; Taylor, R D; Kim, D; Grab, C; Nessi-tedaldi, F; Fischer, J; Manzoni, R A; Zagozdzinska-bochenek, A A; Berger, P; Reichmann, M P; Hashemi, M; Rezaei hosseinabadi, F; Paganoni, M; Farina, F M; Joshi, Y R; Avila bernal, C A; Cabrera mora, A L; Segura delgado, M A; Gonzalez hernandez, C F; Asavapibhop, B; U-ruekolan, S; Kim, G; Choi, M; Aly, S; El sawy, M; Castaneda hernandez, A M; Pinna, D; Shamdasani, J; Tavkhelidze, D; Hegde, V; Aziz, T; Sur, N; Sutar, B J; Karmakar, S; Ghete, V M; Dragicevic, M G; Brandstetter, J; Marques moraes, A; Molina insfran, J A; Aspell, P; Baillon, P; Barney, D; Honma, A; Pape, L; Sakulin, H; Macpherson, A L; Bangert, N; Guida, R; Steggemann, J; Voutsinas, G G; Da silva gomes, D; Ben mimoun bel hadj, F; Bonnaud, J Y R; Canelli, F M; Bai, J; Qiu, J; Bian, J; Cheng, Y; Kukulies, C; Teroerde, M; Erdmann, M; Hebbeker, T; Zantis, F; Scheuch, F; Erdogan, Y; Campbell, A J; Kasemann, M; Lange, W; Raspiareza, A; Melzer-pellmann, I; Aldaya martin, M; Lewendel, B; Schmidt, R S; Lipka, E; Missiroli, M; Grados luyando, J M; Shevchenko, R; Babounikau, I; Steinbrueck, G; Vanhoefer, A; Ebrahimi, A; Pena rodriguez, K J; Niedziela, M A; Eich, M M; Froehlich, A; Simonis, H J; Katkov, I; Wozniewski, S; Marco de lucas, R J; Lopez virto, A M; Jaramillo echeverria, R W; Hennion, P; Zghiche, A; Chiron, A; Romanteau, T; Beaudette, F; Lobanov, A; Grasseau, G J; Pierre-emile, T B; El mamouni, H; Gouzevitch, M; Goldstein, J; Cussans, D G; Seif el nasr, S A; Titterton, A S; Ford, P J W; Olaiya, E O; Salisbury, J G; Paspalaki, G; Asenov, P; Hidas, P; Kiss, T N; Zalan, P; Shukla, P; 
Abbrescia, M; De filippis, N; Donvito, G; Radogna, R; Miniello, G; Gelmi, A; Capiluppi, P; Marcellini, S; Odorici, F; Bonacorsi, D; Genta, C; Ferri, G; Saviano, G; Ferrini, M; Minutoli, S; Tosi, S; Lista, L; Passeggio, G; Breglio, G; Merola, M; Diemoz, M; Rahatlou, S; Baccaro, S; Bartoloni, A; Talamo, I G; Cipriani, M; Kim, J Y; Oh, G; Lim, J H; Lee, J; Mohamad idris, F B; Gani, A B; Cwiok, M; Doroba, K; Martins galinhas, B E; Kim, V; Krivshich, A; Vorobyev, A; Ivanov, Y; Tarakanov, V; Lobodenko, A; Obikhod, T; Isayev, O; Kurov, O; Leonidov, A; Lvova, N; Kirsanov, M; Suvorova, O; Karneyeu, A; Demidov, S; Konoplyannikov, A; Popov, V; Pakhlov, P; Vinogradov, S; Klemin, S; Blinov, V; Skovpen, I; Chatrchyan, S; Grigorian, N; Kayis topaksu, A; Sunar cerci, D; Hos, I; Guler, Y; Kiminsu, U; Serin, M; Deniz, M; Turan, I; Eryol, F; Pozdnyakov, A; Liu, Z; Doan, T H; Hanlon, J E; Mcbride, P L; Pal, I; Garren, L; Oleynik, G; Harris, R M; Bolla, G; Kowalkowski, J B; Evans, D E; Vaandering, E W; Patrick, J F; Rechenmacher, R; Prosser, A G; Messer, T A; Tiradani, A R; Rivera, R A; Jayatilaka, B A; Duarte, J M; Todri, A; Harr, R F; Richman, J D; Bhandari, R; Dordevic, M; Cirkovic, P; Mora herrera, C; Rosa lopes zachi, A; De paula carvalho, W; Kinnunen, R L A; Lehti, S T; Maeenpaeae, T H; Bloch, D; Chabert, E C; Rudolf, N G; Devroede, O; Skovpen, K; Lontkovskyi, D; De wolf, E A; Van mechelen, P; Van spilbeeck, A B E; Georgiev, L S; Novaes, S F; Costa, M A; Costa leal, B; Horisberger, R P; De la cruz, B; Willmott, C; Perez-calero yzquierdo, A M; Dejardin, M M; Mehta, A; Barbagli, G; Focardi, E; Bacchetta, N; Gasparini, U; Pantano, D; Sgaravatto, M; Ventura, S; Zotto, P; Candelori, A; Pozzobon, N; Boletti, A; Servoli, L; Postolache, V; Rossi, A; Ciangottini, D; Alunni solestizi, L; Maselli, S; Migliore, E; Amapane, N C; Lopez fernandez, R; Sanchez hernandez, A; Heredia de la cruz, I; Matveev, V; Kracikova, T; Shmatov, S; Vasilev, S; Kurenkov, A; Oleynik, D; Verkheev, A; Voytishin, N; Proskuryakov, A; Bogdanova, G; Petrova, E; Bagaturia, I; Tsamalaidze, Z; Zhao, Z; Arcaro, D J; Barberis, E; Wamorkar, T; Wang, B; Ralph, D K; Velasco, M M; Odell, N J; Sevova, S; Li, W; Merlo, J; Onel, Y; Mermerkaya, H; Moeller, A R; Haytmyradov, M; Dong, R; Bugg, W M; Ragghianti, G C; Delannoy sotomayor, A G; Thapa, K; Yagil, A; Gerosa, R A; Masciovecchio, M; Schmitz, E J; Kapustinsky, J S; Greene, S V; Zhang, L; Vlimant, J V; Mughal, A; Cury siqueira, S; Gershtein, Y; Arora, S R R; Lin, W X; Stickland, D P; Mc donald, K T; Pivarski, J M C; Lucchini, M T; Higginbotham, S L; Rosenfield, M; Long, O R; Johnson, K F; Adams, T; Susa, T; Rykaczewski, H; Ioannou, A; Ge, Y; Levin, A M; Li, J; Li, L; Bloom, K A; Monroy montanez, J A; Kunori, S; Wang, Z; Favart, D; Maltoni, F; Vidal marono, M; Delcourt, M; Markov, S I; Seez, C; Richards, A J; Ferguson, W; Chatziangelou, M; Karathanasis, G; Kontaxakis, P; Jones, J A; Strologas, J; Katsoulis, P; Dutt, S; Roy chowdhury, S; Bhardwaj, R; Purohit, A; Singh, B; Behera, P K; Sharma, A; Spagnolo, P; Tonelli, G E; Giannini, L; Poulios, S; Groote, J F; Untuc, B; Oztirpan, F O; Koseoglu, I; Luiggi lopez, E E; Hadley, N J; Shin, Y H; Safonov, A; Eusebi, R; Rose, A K; Overton, D A; Erbacher, R D; Funk, G N; Pilot, J R; Regnery, B J; Klimenko, S; Matchev, K; Gleyzer, S; Wang, J; Cadamuro, L; Sun, W M; Soffi, L; Lantz, S R; Wright, D; Cline, D; Cousins jr, R D; Erhan, S; Yang, X; Schnaible, C J; Dasgupta, A; Loveless, R; Bradley, D C; Monzat, D; Dodd, L M; Tikalsky, J L; Kapusta, J; Gilbert, W J; Lesko, 
Z J; Marinelli, N; Wayne, M R; Heering, A H; Galanti, M; Duh, Y; Roy, A; Arabgol, M; Hacker, T J; Salva, S; Petrov, V; Barychevski, V; Drobychev, G; Lobko, A; Gabusi, M; Fabris, L; Conte, E R E; Kasprowicz, G H; Kyberd, P; Cole, J E; Lopez, J M; Salazar gonzalez, C A; Benzon, A M; Pelagio, L; Walsh, M F; Postnov, A; Lelas, D; Vaitkus, J V; Jurciukonis, D; Sulmanas, B; Ahmad, A; Ahmed, W; Jalil, S H; Kahl, W E; Taylor, D R; Choi, Y I; Jeong, Y; Roy, T; Schoenenberger, M A; Khateri, P; Etesami, S M; Fiorini, E; Pullia, A; Magni, S; Gennai, S; Fiorendi, S; Zuolo, D; Sanabria arenas, J C; Florez bustos, C A; Holguin coral, A; Mendez, H; Srimanobhas, N; Jaikar, A H; Arteche gonzalez, F J; Call, K R; Vazquez valencia, E F; Calderon monroy, M A; Abdelmaguid, A; Mal, P K; Yuan, L; Lomidze, I; Prangishvili, I; Adamov, G; Dube, S S; Dugad, S; Mohanty, G B; Bhat, M A; Bheesette, S; Malawski, M L; Abou kors, D J

    CMS is a general-purpose proton-proton detector designed to run at the highest luminosity at the LHC. It is also well adapted for studies at the initially lower luminosities. The CMS Collaboration consists of over 1800 scientists and engineers from 151 institutes in 31 countries. The main design goals of CMS are: (1) a highly performant muon system, (2) the best possible electromagnetic calorimeter, (3) high-quality central tracking, (4) hermetic calorimetry, and (5) a detector costing less than 475 MCHF. All detector sub-systems have started construction. Engineering Design Reviews of parts of these sub-systems have been successfully carried out; these are held prior to granting authorization for purchase. The schedule for the LHC machine and the experiments has been revised, and CMS will be ready for first collisions, now expected in April 2006. Magnet: the detector (see Figure) will be built around a long (13 m) and large-bore (φ = 5.9 m) high...

  18. Status of the CMS Phase 1 Pixel Upgrade

    CERN Document Server

    Mattig, Stefan

    2014-01-01

    The silicon pixel detector is the innermost component of the CMS tracking system, providing high precision space point measurements of charged particle trajectories. Before 2018 the instantaneous luminosity of the LHC is expected to reach 2 × 10³⁴ cm⁻²s⁻¹, which will significantly increase the number of interactions per bunch crossing. The current pixel detector of CMS was not designed to work efficiently in such a high occupancy environment and will be degraded by substantial data loss introduced by buffer filling in the analog Read-Out Chip (ROC) and effects of radiation damage in the sensors, built up over the operational period. To maintain a high tracking efficiency, CMS has planned to replace the current pixel system during "Phase 1" (2016/17) by a new lightweight detector, equipped with an additional 4th layer in the barrel, and one additional forward/backward disk. A new digital ROC has been designed, with increased buffers to minimize data loss, and a digital read-out protoc...

  19. The CMS conductor

    CERN Document Server

    Horváth, I L; Marti, H P; Neuenschwander, J; Smith, R P; Fabbricatore, P; Musenich, R; Calvo, A; Campi, D; Curé, B; Desirelli, Alberto; Favre, G; Riboni, P L; Sgobba, Stefano; Tardy, T; Sequeira-Lopes-Tavares, S

    2000-01-01

    The Compact Muon Solenoid (CMS) is one of the experiments, which are being designed in the framework of the Large Hadron Collider (LHC) project at CERN, the design field of the CMS magnet is 4 T, the magnetic length is 13 m and the aperture is 6 m. This high magnetic field is achieved by means of a 4 layer, 5 modules superconducting coil. The coil is wound from an Al-stabilized Rutherford type conductor. The nominal current of the magnet is 20 kA at 4.5 K. In the CMS coil the structural function is ensured, unlike in other existing Al-stabilized thin solenoids, both by the Al-alloy reinforced conductor and the external former. In this paper the retained manufacturing process of the 50-km long reinforced conductor is described. In general the Rutherford type cable is surrounded by high purity aluminium in a continuous co-extrusion process to produce the Insert. Thereafter the reinforcement is joined by Electron Beam Welding to the pure Al of the insert, before being machined to the final dimensions. During the...

  20. Experience in using commercial clouds in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L. [Fermilab]; Bockelman, B. [Nebraska U.]; Dykstra, D. [Fermilab]; Fuess, S. [Fermilab]; Garzoglio, G. [Fermilab]; Girone, M. [CERN]; Gutsche, O. [Fermilab]; Holzman, B. [Fermilab]; Hugnagel, D. [Fermilab]; Kim, H. [Fermilab]; Kennedy, R. [Fermilab]; Mason, D. [Fermilab]; Spentzouris, P. [Fermilab]; Timm, S. [Fermilab]; Tiradani, A. [Fermilab]; Vaandering, E. [Fermilab]

    2017-10-03

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing, and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. We will also discuss the economic issues and compare cost and operational efficiency with our dedicated resources. At the end we will consider the changes to the working model of HEP computing in a domain with the availability of large-scale resources scheduled at peak times.
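
    A back-of-the-envelope way to frame the cost comparison mentioned above is an effective cost per core-hour for owned versus rented capacity. The sketch below (Python) is illustrative only: the purchase price, lifetime, utilisation, operating cost and cloud price are invented placeholders, not figures from the HEPCloud study.

      # cost_sketch.py -- illustrative only; every number below is a made-up placeholder,
      # not a figure from the CMS/HEPCloud study.
      def dedicated_cost_per_core_hour(capex_per_core, lifetime_years, utilisation, opex_per_core_year):
          """Amortised cost of an owned core over the hours it is actually busy."""
          busy_hours = lifetime_years * 365 * 24 * utilisation
          total_cost = capex_per_core + opex_per_core_year * lifetime_years
          return total_cost / busy_hours

      def cloud_cost_per_core_hour(instance_price_per_hour, cores_per_instance, cpu_efficiency):
          """Effective cost of a rented core, corrected for the CPU efficiency of the workload."""
          return instance_price_per_hour / (cores_per_instance * cpu_efficiency)

      owned = dedicated_cost_per_core_hour(150.0, 4, 0.85, 20.0)
      rented = cloud_cost_per_core_hour(0.30, 8, 0.90)
      print(f"dedicated ~{owned:.3f} USD/core-hour, cloud ~{rented:.3f} USD/core-hour")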

  1. Multiple Tier Fuel Cycle Studies for Waste Transmutation

    International Nuclear Information System (INIS)

    Hill, R.N.; Taiwo, T.A.; Stillman, J.A.; Graziano, D.J.; Bennett, D.R.; Trellue, H.; Todosow, M.; Halsey, W.G.; Baxter, A.

    2002-01-01

    As part of the U.S. Department of Energy Advanced Accelerator Applications Program, a systems study was conducted to evaluate the transmutation performance of advanced fuel cycle strategies. Three primary fuel cycle strategies were evaluated: dual-tier systems with plutonium separation, dual-tier systems without plutonium separation, and single-tier systems without plutonium separation. For each case, the system mass flow and TRU consumption were evaluated in detail. Furthermore, the loss of materials in fuel processing was tracked including the generation of new waste streams. Based on these results, the system performance was evaluated with respect to several key transmutation parameters including TRU inventory reduction, radiotoxicity, and support ratio. The importance of clean fuel processing (∼0.1% losses) and inclusion of a final tier fast spectrum system are demonstrated. With these two features, all scenarios capably reduce the TRU and plutonium waste content, significantly reducing the radiotoxicity; however, a significant infrastructure (at least 1/10 the total nuclear capacity) is required for the dedicated transmutation system. (authors)
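
    The statement that the dedicated transmutation tier needs at least 1/10 of the total nuclear capacity can be restated as a support ratio under the usual definition (capacity of conventional reactors served per unit of transmuter capacity); that definition is an assumption here, not spelled out in the abstract. A minimal sketch of the arithmetic:

      # support_ratio.py -- illustrative arithmetic; assumes support ratio is defined as
      # (conventional reactor capacity served) / (transmutation-tier capacity).
      def support_ratio(transmuter_fraction_of_total):
          if not 0.0 < transmuter_fraction_of_total < 1.0:
              raise ValueError("fraction must lie strictly between 0 and 1")
          return (1.0 - transmuter_fraction_of_total) / transmuter_fraction_of_total

      # "at least 1/10 the total nuclear capacity" for the dedicated system
      print(support_ratio(0.10))  # -> 9.0, i.e. at most ~9 conventional units per transmuter unit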

  2. The CMS Higgs Boson Goose Game

    CERN Document Server

    Cavallo, Francesca Romana

    2015-01-01

    Building and operating the CMS Detector is a complicated endeavour! Now, more than 20 years after the detector was conceived, the CMS Bologna group proposes to follow the steps of this challenging project by playing The Higgs Boson Goose Game, illustrating CMS activities and goals. The concept of the game is inspired by the traditional Game of the Goose. The underlying idea is that the progress of building and operating a detector at the LHC is similar to the progress of the pawns on the game board: it is fast at times, bringing rewards and satisfaction, while sometimes unexpected problems cause delays or even a step back, requiring CMS scientists to use all of their skill and creativity to devise new solutions.

  3. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    International Nuclear Information System (INIS)

    Evans, D; Fisk, I; Holzman, B; Pordes, R; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly 'on-demand', as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a University, and conclude that it is most cost-effective to purchase dedicated resources for the 'base-line' needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.
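
    As a rough illustration of renting worker nodes from EC2, the sketch below uses boto3 to request a handful of on-demand instances that would join a batch pool at boot; the AMI ID, instance type, region and bootstrap script are hypothetical placeholders, and this is not the mechanism CMS used in production.

      # launch_workers.py -- minimal sketch; AMI, instance type, region and the
      # bootstrap script path are hypothetical, not CMS production values.
      import boto3

      def launch_worker_nodes(n, ami_id="ami-0123456789abcdef0", instance_type="c5.2xlarge"):
          """Request n on-demand EC2 instances that join a batch pool via user data at boot."""
          ec2 = boto3.client("ec2", region_name="us-east-1")
          user_data = "#!/bin/bash\n/opt/bootstrap/join_batch_pool.sh\n"
          response = ec2.run_instances(
              ImageId=ami_id,
              InstanceType=instance_type,
              MinCount=n,
              MaxCount=n,
              UserData=user_data,
          )
          return [inst["InstanceId"] for inst in response["Instances"]]

      if __name__ == "__main__":
          print(launch_worker_nodes(4))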

  4. Tenet: An Architecture for Tiered Embedded Networks

    OpenAIRE

    Ramesh Govindan; Eddie Kohler; Deborah Estrin; Fang Bian; Krishna Chintalapudi; Om Gnawali; Sumit Rangwala; Ramakrishna Gummadi; Thanos Stathopoulos

    2005-01-01

    Future large-scale sensor network deployments will be tiered, with the motes providing dense sensing and a higher tier of 32-bit master nodes with more powerful radios providing increased overall network capacity. In this paper, we describe a functional architecture for wireless sensor networks that leverages this structure to simplify the overall system. Our Tenet architecture has the nice property that the mote-layer software is generic and reusable, and all application functionality reside...

  5. Professional Development to Differentiate Kindergarten Tier 1 Instruction: Can Already Effective Teachers Improve Student Outcomes by Differentiating Tier 1 Instruction?

    Science.gov (United States)

    Al Otaiba, Stephanie; Folsom, Jessica S.; Wanzek, Jeanne; Greulich, Luana; Waesche, Jessica; Schatschneider, Christopher; Connor, Carol M.

    2016-01-01

    Two primary purposes guided this quasi-experimental within-teacher study: (a) to examine changes from baseline through 2 years of professional development (Individualizing Student Instruction) in kindergarten teachers' differentiation of Tier 1 literacy instruction; and (b) to examine changes in reading and vocabulary of 3 cohorts of the teachers'…

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  7. Performance of the Pixel Luminosity Telescope for Luminosity Measurement at CMS during Run2

    CERN Document Server

    Lujan, Paul Joseph

    2017-01-01

    The Pixel Luminosity Telescope (PLT) is a dedicated system for luminosity measurement at the CMS experiment using silicon pixel sensors arranged into telescopes, each consisting of three sensor planes. It was installed in CMS at the beginning of 2015 and has been providing online and offline luminosity measurements throughout Run 2 of the LHC. The online bunch-by-bunch luminosity measurement employs the fast-or capability of the pixel readout chip to identify events where a hit is registered in all three sensors in a telescope, corresponding primarily to tracks originating from the interaction point. In addition, the full pixel information is read out at a lower rate, allowing for the calculation of corrections to the online luminosity from effects such as the miscounting of tracks not originating from the interaction point and detector efficiency. This paper presents results from the 2016 running of the PLT, including commissioning and operational history, luminosity calibration using Van der Meer scans, and...
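
    The triple-coincidence counting described above can be turned into a per-bunch luminosity with the standard Poisson "zero counting" method; the sketch below illustrates that general technique only, with an invented visible cross-section and invented counts rather than the actual PLT calibration.

      # zero_counting.py -- generic zero-counting luminosity estimate; sigma_vis and the
      # counts are invented placeholders, not PLT calibration values.
      import math

      LHC_ORBIT_FREQ_HZ = 11245.5  # LHC revolution frequency

      def mu_from_zero_counting(n_bunch_crossings, n_with_triple_coincidence):
          """Mean coincidences per crossing, from the fraction of crossings with none."""
          f_zero = 1.0 - n_with_triple_coincidence / n_bunch_crossings
          return -math.log(f_zero)

      def per_bunch_luminosity(mu, sigma_vis_microbarn):
          """Instantaneous luminosity of one colliding bunch pair, in 1/(microbarn * s)."""
          return mu * LHC_ORBIT_FREQ_HZ / sigma_vis_microbarn

      mu = mu_from_zero_counting(1_000_000, 95_000)
      print(f"mu = {mu:.4f}, L_bunch = {per_bunch_luminosity(mu, 300.0):.3f} / (ub s)")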

  8. 42 CFR 438.724 - Notice to CMS.

    Science.gov (United States)

    2010-10-01

    42 CFR § 438.724 (Public Health; Medical Assistance Programs; Managed Care; Sanctions), Notice to CMS: (a) The State must give the CMS Regional Office written notice whenever it imposes or lifts a sanction for one of the violations...

  9. Dark Matter Benchmark Models for Early LHC Run-2 Searches: Report of the ATLAS/CMS Dark Matter Forum

    CERN Document Server

    Abercrombie, Daniel; Akilli, Ece; Alcaraz Maestre, Juan; Allen, Brandon; Alvarez Gonzalez, Barbara; Andrea, Jeremy; Arbey, Alexandre; Azuelos, Georges; Azzi, Patrizia; Backovic, Mihailo; Bai, Yang; Banerjee, Swagato; Beacham, James; Belyaev, Alexander; Boveia, Antonio; Brennan, Amelia Jean; Buchmueller, Oliver; Buckley, Matthew R.; Busoni, Giorgio; Buttignol, Michael; Cacciapaglia, Giacomo; Caputo, Regina; Carpenter, Linda; Filipe Castro, Nuno; Gomez Ceballos, Guillelmo; Cheng, Yangyang; Chou, John Paul; Cortes Gonzalez, Arely; Cowden, Chris; D'Eramo, Francesco; De Cosa, Annapaola; De Gruttola, Michele; De Roeck, Albert; De Simone, Andrea; Deandrea, Aldo; Demiragli, Zeynep; DiFranzo, Anthony; Doglioni, Caterina; du Pree, Tristan; Erbacher, Robin; Erdmann, Johannes; Fischer, Cora; Flaecher, Henning; Fox, Patrick J.; Fuks, Benjamin; Genest, Marie-Helene; Gomber, Bhawna; Goudelis, Andreas; Gramling, Johanna; Gunion, John; Hahn, Kristian; Haisch, Ulrich; Harnik, Roni; Harris, Philip C.; Hoepfner, Kerstin; Hoh, Siew Yan; Hsu, Dylan George; Hsu, Shih-Chieh; Iiyama, Yutaro; Ippolito, Valerio; Jacques, Thomas; Ju, Xiangyang; Kahlhoefer, Felix; Kalogeropoulos, Alexis; Kaplan, Laser Seymour; Kashif, Lashkar; Khoze, Valentin V.; Khurana, Raman; Kotov, Khristian; Kovalskyi, Dmytro; Kulkarni, Suchita; Kunori, Shuichi; Kutzner, Viktor; Lee, Hyun Min; Lee, Sung-Won; Liew, Seng Pei; Lin, Tongyan; Lowette, Steven; Madar, Romain; Malik, Sarah; Maltoni, Fabio; Martinez Perez, Mario; Mattelaer, Olivier; Mawatari, Kentarou; McCabe, Christopher; Megy, Theo; Morgante, Enrico; Mrenna, Stephen; Narayanan, Siddharth M.; Nelson, Andy; Novaes, Sergio F.; Padeken, Klaas Ole; Pani, Priscilla; Papucci, Michele; Paulini, Manfred; Paus, Christoph; Pazzini, Jacopo; Penning, Bjorn; Peskin, Michael E.; Pinna, Deborah; Procura, Massimiliano; Qazi, Shamona F.; Racco, Davide; Re, Emanuele; Riotto, Antonio; Rizzo, Thomas G.; Roehrig, Rainer; Salek, David; Sanchez Pineda, Arturo; Sarkar, Subir; Schmidt, Alexander; Schramm, Steven Randolph; Shepherd, William; Singh, Gurpreet; Soffi, Livia; Srimanobhas, Norraphat; Sung, Kevin; Tait, Tim M.P.; Theveneaux-Pelzer, Timothee; Thomas, Marc; Tosi, Mia; Trocino, Daniele; Undleeb, Sonaina; Vichi, Alessandro; Wang, Fuquan; Wang, Lian-Tao; Wang, Ren-Jie; Whallon, Nikola; Worm, Steven; Wu, Mengqing; Wu, Sau Lan; Yang, Hongtao; Yang, Yong; Yu, Shin-Shan; Zaldivar, Bryan; Zanetti, Marco; Zhang, Zhiqing; Zucchetta, Alberto

    2015-01-01

    This document is the final report of the ATLAS-CMS Dark Matter Forum, a forum organized by the ATLAS and CMS collaborations with the participation of experts on theories of Dark Matter, to select a minimal basis set of dark matter simplified models that should support the design of the early LHC Run-2 searches. A prioritized, compact set of benchmark models is proposed, accompanied by studies of the parameter space of these models and a repository of generator implementations. This report also addresses how to apply the Effective Field Theory formalism for collider searches and presents the results of such interpretations.

  10. Operation and Performance of the CMS Outer Tracker

    CERN Document Server

    Butz, Erik Manuel

    2017-01-01

    The CMS Silicon Strip Tracker with its more than 15000 silicon modules and 200 m² of active silicon area has been running together with the other subsystems of CMS for several years. We present the performance of the detector in the LHC Run 2 data taking. Results for signal-to-noise, hit efficiency and single-hit resolution will be presented. We review the behavior of the system when running at beyond-design instantaneous luminosity and describe challenges observed under these conditions. The evolution of detector parameters under the influence of radiation damage will be presented and compared to simulations.

  11. Forward physics with CMS

    CERN Document Server

    Grothe, Monika

    2008-01-01

    Forward physics with CMS at the LHC covers a wide range of physics subjects, including very low-x_Bj QCD, underlying event and multiple interactions characteristics, gamma-mediated processes, shower development at the energy scale of primary cosmic ray interactions with the atmosphere, diffraction in the presence of a hard scale and even MSSM Higgs discovery in central exclusive production. Selected feasibility studies to illustrate the forward physics potential of CMS are presented.

  12. CMS: Present status, limitations, and upgrade plans

    International Nuclear Information System (INIS)

    Cheung, H.W.K.

    2011-01-01

    An overview of the CMS upgrade plans will be presented. A brief status of the CMS detector will be given, covering some of the issues we have so far experienced. This will be followed by an overview of the various CMS upgrades planned, covering the main motivations for them, and the various R&D efforts for the possibilities under study. The CMS detector has been working extremely well since the start of data-taking at the LHC, as evidenced by the numerous excellent results published by CMS and presented at this workshop and recent conferences. Less well documented are the various issues that have been encountered with the detector. In the spirit of this workshop I will cover some of these issues with particular emphasis on problems that motivate some of the upgrades to the CMS detector for this decade of data-taking. Though the CMS detector has been working extremely well and expectations are great for making the most of the LHC luminosity, there have been a number of issues encountered so far. Some of these have been described, and while none currently presents a problem for physics performance, some of them are expected to become more problematic, especially at the highest Phase 1 luminosities for which the majority of the integrated luminosity will be collected. These motivate upgrades for various parts of the CMS detector so that the current excellent physics performance can be maintained or even surpassed in the realm of the highest Phase 1 luminosities.

  13. Heavy-flavour jet identification at the CMS experiment for Run 2

    CERN Document Server

    Verzetti, Mauro

    2016-01-01

    A review of recent developments in heavy-flavour jet identification at the CMS experiment is presented, together with their performance on proton-proton collision data recorded by the CMS detector at a centre-of-mass energy of √s = 13 TeV during 2015 and 2016.

  14. A four-tier problem-solving scaffold to teach pain management in dental school.

    Science.gov (United States)

    Ivanoff, Chris S; Hottel, Timothy L

    2013-06-01

    Pain constitutes a major reason patients pursue dental treatment. This article presents a novel curriculum to provide dental students comprehensive training in the management of pain. The curriculum's four-tier scaffold combines traditional and problem-based learning to improve students' diagnostic, pharmacotherapeutic, and assessment skills to optimize decision making when treating pain. Tier 1 provides underpinning knowledge of pain mechanisms with traditional and contextualized instruction by integrating clinical correlations and studying worked cases that stimulate clinical thinking. Tier 2 develops critical decision making skills through self-directed learning and actively solving problem-based cases. Tier 3 exposes students to management approaches taken in allied health fields and cultivates interdisciplinary communication skills. Tier 4 provides a "knowledge and experience synthesis" by rotating students through community pain clinics to practice their assessment skills. This combined teaching approach aims to increase critical thinking and problem-solving skills to assist dental graduates in better management of pain throughout their careers. Dental curricula that have moved to comprehensive care/private practice models are well-suited for this educational approach. The goal of this article is to encourage dental schools to integrate pain management into their curricula, to develop pain management curriculum resources for dental students, and to provide leadership for change in pain management education.

  15. Wire Bonding on 2S Modules of the Phase-2 CMS Detector

    CERN Document Server

    AUTHOR|(CDS)2226525; Pooth, Oliver

    The LHC will be upgraded to the HL-LHC in Long Shutdown 3, starting in 2024. This upgrade will increase the collision rate and the overall number of colliding particles, requiring high-precision particle detectors that are able to cope with much higher radiation doses and numbers of particle interactions per bunch crossing. To fulfill these technical requirements, the CMS detector will be upgraded in the so-called Phase-2 Upgrade. Among other changes, the silicon tracking system will be completely replaced by a new system providing a higher acceptance, an improved granularity and the capability to include its tracking information in the level-1 trigger. The new outer tracker will consist of so-called 2S modules, consisting of two strip sensors, and PS modules, with a macro-pixel sensor and a strip sensor. The electrical connection between the strip sensors and the front-end electronics is realized by thin aluminum wire bonds. In this thesis the process of wire bonding is introduced and its implementation in the 2S module ...

  16. The CMS Event Builder

    CERN Document Server

    Brigljevic, V; Cano, E; Cittolin, Sergio; Csilling, Akos; Gigi, D; Glege, F; Gómez-Reino, Robert; Gulmini, M; Gutleber, J; Jacobs, C; Kozlovszky, Miklos; Larsen, H; Magrans de Abril, Ildefons; Meijers, F; Meschi, E; Murray, S; Oh, A; Orsini, L; Pollet, L; Rácz, A; Samyn, D; Scharff-Hansen, P; Schwick, C; Sphicas, Paris; ODell, V; Suzuki, I; Berti, L; Maron, G; Toniolo, N; Zangrando, L; Ninane, A; Erhan, S; Bhattacharya, S; Branson, J G

    2003-01-01

    The data acquisition system of the CMS experiment at the Large Hadron Collider will employ an event builder which will combine data from about 500 data sources into full events at an aggregate throughput of 100 GByte/s. Several architectures and switch technologies have been evaluated for the DAQ Technical Design Report by measurements with test benches and by simulation. This paper describes studies of an EVB test-bench based on 64 PCs acting as data sources and data consumers and employing both Gigabit Ethernet and Myrinet technologies as the interconnect. In the case of Ethernet, protocols based on Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies, including measurements on throughput and scaling are presented. The architecture of the baseline CMS event builder will be outlined. The event builder is organised into two stages with intelligent buffers in between. The first stage contains 64 switches performing a first level of data concentration by building super-fragments from fragmen...
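
    The quoted figures (roughly 500 data sources feeding 100 GByte/s, concentrated by 64 first-stage switches) imply simple average bandwidth requirements per source and per first-stage unit; the sketch below just works out that arithmetic.

      # evb_arithmetic.py -- averages implied by the numbers quoted in the abstract
      AGGREGATE_GB_PER_S = 100.0   # aggregate event-builder throughput (GByte/s)
      N_SOURCES = 500              # front-end data sources
      N_FIRST_STAGE = 64           # first-stage switches building super-fragments

      per_source_mb_s = AGGREGATE_GB_PER_S / N_SOURCES * 1000.0
      per_first_stage_gb_s = AGGREGATE_GB_PER_S / N_FIRST_STAGE

      print(f"~{per_source_mb_s:.0f} MByte/s per source, "
            f"~{per_first_stage_gb_s:.2f} GByte/s per first-stage switch")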

  17. Award for the best CMS thesis

    CERN Multimedia

    2003-01-01

    The 2002 CMS PhD Thesis Award has been presented to Giacomo Luca Bruno for his thesis, defended at the University of Pavia in Italy and entitled "The RPC detectors and the muon system for the CMS experiment at the LHC". His work was supervised by Sergio P. Ratti from the University of Pavia. Since April 2002, Giacomo has been employed as a research fellow by CERN's EP Division. He continues to work on CMS in the areas of data acquisition and physics reconstruction and selection. Last Monday he received a commemorative engraved plaque from Lorenzo Foà, chairman of the CMS Collaboration Board. He will also receive an expenses-paid trip to an international physics conference to present his thesis results. Giacomo Luca Bruno with Lorenzo Foà

  18. The CMS High Level Trigger System

    CERN Document Server

    Afaq, A; Bauer, G; Biery, K; Boyer, V; Branson, J; Brett, A; Cano, E; Carboni, A; Cheung, H; Ciganek, M; Cittolin, S; Dagenhart, W; Erhan, S; Gigi, D; Glege, F; Gómez-Reino, Robert; Gulmini, M; Gutiérrez-Mlot, E; Gutleber, J; Jacobs, C; Kim, J C; Klute, M; Kowalkowski, J; Lipeles, E; Lopez-Perez, Juan Antonio; Maron, G; Meijers, F; Meschi, E; Moser, R; Murray, S; Oh, A; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Pollet, L; Rácz, A; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Sexton-Kennedy, E; Sumorok, K; Suzuki, I; Tsirigkas, D; Varela, J

    2007-01-01

    The CMS Data Acquisition (DAQ) System relies on a purely software driven High Level Trigger (HLT) to reduce the full Level-1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the Filter Farm and the procedures to validate the filtering code within the DAQ environment are described.
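
    The numbers quoted above fix both the required rejection factor and the average CPU time available per event: with a 100 kHz input rate shared across O(1000) processors, each event can take roughly 10 ms on average. A small sketch of that arithmetic:

      # hlt_budget.py -- rejection factor and average time budget implied by the quoted rates
      L1_ACCEPT_RATE_HZ = 100_000   # Level-1 accept rate into the HLT
      HLT_OUTPUT_RATE_HZ = 100      # rate archived for offline analysis
      N_FILTER_CPUS = 1000          # order of magnitude of the filter farm

      rejection = L1_ACCEPT_RATE_HZ / HLT_OUTPUT_RATE_HZ        # ~1000x
      avg_budget_ms = N_FILTER_CPUS / L1_ACCEPT_RATE_HZ * 1000  # average ms per event

      print(f"rejection ~{rejection:.0f}x, average HLT budget ~{avg_budget_ms:.0f} ms/event")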

  19. The CMS Data Analysis School Experience

    Energy Technology Data Exchange (ETDEWEB)

    De Filippis, N. [INFN, Bari]; Bauerdick, L. [Fermilab]; Chen, J. [Taiwan, Natl. Taiwan U.]; Gallo, E. [DESY]; Klima, B. [Fermilab]; Malik, S. [Puerto Rico U., Mayaguez]; Mulders, M. [CERN]; Palla, F. [INFN, Pisa]; Rolandi, G. [Pisa, Scuola Normale Superiore]

    2017-11-21

    The CMS Data Analysis School is an official event organized by the CMS Collaboration to teach students and post-docs how to perform a physics analysis. The school is coordinated by the CMS schools committee and was first implemented at the LHC Physics Center at Fermilab in 2010. As part of the training, there are a number of “short” exercises on physics object reconstruction and identification, Monte Carlo simulation, and statistical analysis, which are followed by “long” exercises based on physics analyses. Some of the long exercises go beyond the current state of the art of the corresponding CMS analyses. This paper describes the goals of the school, the preparations for a school, the structure of the training, and student satisfaction with the experience as measured by surveys.

  20. CMS Data Analysis: Current Status and Future Strategy

    CERN Document Server

    Innocente, V

    2003-01-01

    We present the current status of CMS data analysis architecture and describe work on future Grid-based distributed analysis prototypes. CMS has two main software frameworks related to data analysis: COBRA, the main framework, and IGUANA, the interactive visualisation framework. Software using these frameworks is used today in the world-wide production and analysis of CMS data. We describe their overall design and present examples of their current use with emphasis on interactive analysis. CMS is currently developing remote analysis prototypes, including one based on Clarens, a Grid-enabled client-server tool. Use of the prototypes by CMS physicists will guide us in forming a Grid-enriched analysis strategy. The status of this work is presented, as is an outline of how we plan to leverage the power of our existing frameworks in the migration of CMS software to the Grid.

  1. Calibration by precise charge injection of a sub-detector of CMS (calibration by charge injection of the CMS electromagnetic calorimeter)

    Energy Technology Data Exchange (ETDEWEB)

    Yong-Wook Baek

    2001-01-26

    This thesis was carried out within the framework of the international collaboration responsible for the CMS (Compact Muon Solenoid) experiment at the LHC, at CERN. The physics of the fundamental particles that will be explored by this experiment is described within the Standard Model. The configuration of the CMS sub-detectors is briefly described, with particular emphasis on the read-out chain of the electromagnetic calorimeter. The work carried out to calibrate this chain by precise charge injection at the input of the preamplifiers is described. The four integrated circuits (CTRL, TPLS, DAC and injector) that constitute the components of this calibration chain are described. The injection circuit, the main circuit in this project, was designed and developed at the laboratory in DMILL technology. This injector generates a signal whose shape is identical to the detector signal. Measurements of the linearity of the injectors are presented. In order to know its behaviour under the real conditions in which this circuit will operate in the CMS detector (a neutron fluence of approximately 2 × 10¹³ neutrons/cm² over 10 years), we submitted the injector prototypes to irradiation, and the results are summarized. The research and development on this circuit produced a radiation-hardened integrated circuit whose slope variation is lower than 0.25% for an integrated fluence of 2 × 10¹³ neutrons/cm² and which remains undamaged up to 10¹⁵ neutrons/cm². This circuit has satisfactory qualities to be mounted on the electronics board that will process the data of the CMS ECAL calorimeter. (author)
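
    To give a concrete picture of what a linearity measurement of such an injector looks like, the sketch below fits a straight line to a synthetic injected-charge scan and quotes the maximum residual as a fraction of the dynamic range; the data points and units are invented, not measurements from the thesis.

      # linearity_sketch.py -- generic linearity figure of merit for a charge-injection scan;
      # the scan points below are synthetic placeholders, not thesis measurements.
      import numpy as np

      rng = np.random.default_rng(0)
      injected_charge = np.linspace(0.0, 60.0, 13)                       # arbitrary charge units
      response = 40.0 * injected_charge + rng.normal(0.0, 2.0, injected_charge.size)  # e.g. ADC counts

      slope, offset = np.polyfit(injected_charge, response, 1)           # straight-line fit
      residuals = response - (slope * injected_charge + offset)
      nonlinearity = np.max(np.abs(residuals)) / (response.max() - response.min())

      print(f"slope = {slope:.2f}, max integral nonlinearity = {100.0 * nonlinearity:.2f} %")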

  2. b-physics with ATLAS and CMS

    International Nuclear Information System (INIS)

    Oakes, L.

    2014-01-01

    The ATLAS and CMS b-physics programmes are summarized after nearly 2 years of data taking. The data were collected in √s = 7 TeV proton-proton collisions at the LHC. Results presented include B meson lifetime measurements using 40 pb⁻¹ of 2010 data, which demonstrate good agreement with previous measurements, and competitive rare decay studies using the full 2011 data set of up to 5 fb⁻¹. ATLAS measures a B_s^0 meson lifetime of [1.4 ± 0.08(stat) ± 0.05(syst)] ps in the mode B_s^0 → J/ψφ. The CMS experiment finds a lifetime of [1.59 ± 0.08(stat)] ps

  3. CMS launches new educational tools

    CERN Document Server

    Corinne Pralavorio

    2014-01-01

    On 5 and 11 November, almost 90 pupils from the Fermi scientific high school in Livorno, Italy, took part in two Masterclass sessions organised by CMS.   CMS Masterclass participants.  The pupils took over a hall at CERN for an afternoon to test a new software tool called CIMA (CMS Instrument for Masterclass Analysis) for the first time. The software simplifies the process of recording results and reduces the number of steps required to enter data. During the exercise, each group of pupils had to analyse about a hundred events from the LHC. For each event, the budding physicists determined whether what they saw was a candidate W boson, Z boson or Higgs boson, identified the decay mode and entered key data. At the end of the analysis, they used the results to reconstruct a mass diagram. CIMA was developed by a team of scientists from the University of Aachen, Germany, the University of Notre-Dame, United States, and CERN. CMS has also added yet another educational tool to its already l...

  4. The CMS Pixel Detector Upgrade and R\\&D for the High Luminosity LHC

    CERN Document Server

    Viliani, Lorenzo

    2017-01-01

    The High Luminosity Large Hadron Collider (HL-LHC) at CERN is expected to collide protons at a centre-of-mass energy of 14 TeV and to reach an unprecedented peak instantaneous luminosity of 5 × 10³⁴ cm⁻² s⁻¹ with an average number of pileup events of 140. This will allow the ATLAS and CMS experiments to collect integrated luminosities of up to 3000 fb⁻¹ during the project lifetime. To cope with this extreme scenario the CMS detector will be substantially upgraded before starting the HL-LHC, a plan known as the CMS Phase-2 Upgrade. In the upgrade the entire CMS silicon pixel detector will be replaced and the new detector will feature increased radiation hardness, higher granularity and capability to handle higher data rate and longer trigger latency. In this report the Phase-2 Upgrade of the CMS silicon pixel detector will be reviewed, focusing on the features of the detector layout and on the development of new pixel devices.

  5. The CMS Electronic Logbook

    CERN Multimedia

    Bukowiec, S; Beccati, B; Behrens, U; Biery, K; Branson, J; Cano, E; Cheung, H; Ciganek, M; Cittolin, S; Coarasa Perez, J A; Deldicque, C; Erhan, S; Gigi, D; Glege, F; Gomez-Reino, R; Hatton, D; Hwong, Y L; Loizides, C; Ma, F; Masetti, L; Meijers, F; Meschi, E; Meyer, A; Mommsen, R K; Moser, R; O’Dell, V; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Racz, A; Raginel, O; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Shpakov, D; Simon, M; Sumorok, K; Sungho Yoon, A

    2010-01-01

    The CMS ELogbook (ELog) is a collaborative tool, which provides a platform to share and store information about various events or problems occurring in the Compact Muon Solenoid (CMS) experiment at CERN during operation. The ELog is based on a Model–View–Controller (MVC) software architectural pattern and uses an Oracle database to store messages and attachments. The ELog is developed as a pluggable web component in Oracle Portal in order to provide better management, monitoring and security.

  6. Improving collaborative documentation in CMS

    International Nuclear Information System (INIS)

    Lassila-Perini, Kati; Salmi, Leena

    2010-01-01

    Complete and up-to-date documentation is essential for efficient data analysis in a large and complex collaboration like CMS. Good documentation reduces the time spent in problem solving for users and software developers. The scientists in our research environment do not necessarily have the interests or skills of professional technical writers. This results in inconsistencies in the documentation. To improve the quality, we have started a multidisciplinary project involving CMS user support and expertise in technical communication from the University of Turku, Finland. In this paper, we present possible approaches to study the usability of the documentation, for instance, usability tests conducted recently for the CMS software and computing user documentation.

  7. A Tiered Model for Linking Students to the Community

    Science.gov (United States)

    Meyer, Laura Landry; Gerard, Jean M.; Sturm, Michael R.; Wooldridge, Deborah G.

    2016-01-01

    A tiered practice model (introductory, pre-internship, and internship) embedded in the curriculum facilitates community engagement and creates relevance for students as they pursue a professional identity in Human Development and Family Studies. The tiered model integrates high-impact teaching practices (HIP) and student engagement pedagogies…

  8. Controlled overflowing of data-intensive jobs from oversubscribed sites

    CERN Document Server

    Sfiligoi, Igor; Bockelman, Brian Paul; Bradley, Daniel Charles; Tadel, Matevz; Bloom, Kenneth Arthur; Letts, James; Mrak Tadel, Alja

    2012-01-01

    The CMS analysis computing model has always relied on jobs running near the data, with data allocation between CMS compute centers organized at the management level, based on the expected needs of the CMS community. While this model provided high CPU utilization during job run times, there were times when a large fraction of CPUs at certain sites were sitting idle due to lack of demand, all while terabytes of data were never accessed. To improve the utilization of both CPU and disks, CMS is moving toward controlled overflowing of jobs from sites that have the data but are oversubscribed to others with spare CPU and network capacity, with those jobs accessing the data through real-time Xrootd streaming over the WAN. The major limiting factor for remote data access is the ability of the source storage system to serve such data, so the number of jobs accessing it must be carefully controlled. The CMS approach to this is to implement the overflowing by means of glideinWMS, a Condor-based pilot system, and by providing the WMS w...
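
    The key mechanism in the overflow scheme is that a job reads its input remotely through an Xrootd (root://) URL instead of from local storage. The PyROOT sketch below shows that generic access pattern only; the redirector hostname, file path and tree name are hypothetical placeholders, and the glideinWMS configuration itself is not shown.

      # remote_read.py -- generic XRootD remote-read pattern with PyROOT; the redirector,
      # file path and tree name are hypothetical placeholders.
      import ROOT

      url = "root://xrootd-redirector.example.org//store/data/SomeDataset/file.root"
      f = ROOT.TFile.Open(url)              # data are streamed over the network via XRootD
      if not f or f.IsZombie():
          raise RuntimeError(f"could not open {url}")

      tree = f.Get("Events")                # hypothetical tree name
      print(f"opened remotely, {tree.GetEntries()} entries")
      f.Close()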

  9. VIP visit to CERN P5 CMS of Pakistan Science Members

    CERN Multimedia

    Hoch, Michael

    2012-01-01

    VIP visit to CERN P5 (CMS) by PAEC & JCPC science members. List of PAEC visitors: Dr. Badar Suleman, Member Science PAEC & Member of JCPC; Dr. Waqar M. Butt, Member Engineering (Head of HMC3); Dr. Maqsood Ahmad, Chief Scientist (Head of Accelerator Project). List of CMS participants: Prof. Joseph Incandela, CMS Spokesperson; Dr. Austin Ball, CMS Technical Coordinator; Mr Andrzej Charkiewicz, CMS Resources Manager; Dr. Michael Hoch, CMS outreach activities, CMS photographer and guide; Dr. Achille Petrilli, CMS Team Leader

  10. The effect of a three-tier formulary on antidepressant utilization and expenditures.

    Science.gov (United States)

    Hodgkin, Dominic; Parks Thomas, Cindy; Simoni-Wastila, Linda; Ritter, Grant A; Lee, Sue

    2008-06-01

    Health plans in the United States are struggling to contain rapid growth in their spending on medications. They have responded by implementing multi-tiered formularies, which label certain brand medications 'non-preferred' and require higher patient copayments for those medications. This multi-tier policy relies on patients' willingness to switch medications in response to copayment differentials. The antidepressant class has certain characteristics that may pose problems for implementation of three-tier formularies, such as differences in which medication works for which patient, and high rates of medication discontinuation. To measure the effect of a three-tier formulary on antidepressant utilization and spending, including decomposing spending allocations between patient and plan. We use claims and eligibility files for a large, mature nonprofit managed care organization that started introducing its three-tier formulary on January 1, 2000, with a staggered implementation across employer groups. The sample includes 109,686 individuals who were continuously enrolled members during the study period. We use a pretest-posttest quasi-experimental design that includes a comparison group, comprising members whose employer had not adopted three-tier as of March 1, 2000. This permits some control for potentially confounding changes that could have coincided with three-tier implementation. For the antidepressants that became nonpreferred, prescriptions per enrollee decreased 11% in the three-tier group and increased 5% in the comparison group. The own-copay elasticity of demand for nonpreferred drugs can be approximated as -0.11. Difference-in-differences regression finds that the three-tier formulary slowed the growth in the probability of using antidepressants in the post-period, which was 0.3 percentage points lower than it would have been without three-tier. The three-tier formulary also increased out-of-pocket payments while reducing plan payments and total spending

  11. Performance of the CMS Jets and Missing Transverse Energy Trigger at LHC Run 2

    CERN Document Server

    Nachtman, Jane; Dordevic, Milos; Kaya, Mithat; Kaya, Ozlem; Kirschenmann, Henning; Zhang, Fengwangdong

    2017-01-01

    In preparation for collecting proton-proton collisions from the LHC at a center-of-mass energy of 13 TeV and rate of 40 MHz with increasing instantaneous luminosity, the CMS collaboration prepared an array of triggers utilizing jets and missing transverse energy for searches for new physics at the energy frontier as well as for SM precision measurements. The CMS trigger system must be able to sift through the collision events in order to extract events of interest at a rate of 1 kHz, applying sophisticated algorithms adapted for fast and effective operation. Particularly important is the calibration of the trigger objects, as corrections to the measured energy may be substantial. Equally important is the development of improved reconstruction algorithms to mitigate negative effects due to high numbers of overlapping proton-proton collisions and increased levels of beam-related effects. Work by the CMS collaboration on upgrading the high-level trigger for jets and missing transverse energy for the upgraded LHC o...

  12. Assessing the Nutritional Quality of Diets of Canadian Adults Using the 2014 Health Canada Surveillance Tool Tier System

    Directory of Open Access Journals (Sweden)

    Mahsa Jessri

    2015-12-01

    The 2014 Health Canada Surveillance Tool (HCST) was developed to assess the adherence of dietary intakes with Canada's Food Guide. HCST classifies foods into one of four Tiers based on thresholds for sodium, total fat, saturated fat and sugar, with Tier 1 representing the healthiest and Tier 4 foods being the unhealthiest. This study presents the first application of HCST to assess (a) dietary patterns of Canadians; and (b) the applicability of this tool as a measure of diet quality among 19,912 adult participants of the Canadian Community Health Survey 2.2. Findings indicated that even though most processed meats and potatoes were Tier 4, the majority of reported foods in general were categorized as Tiers 2 and 3 due to the adjustable, lenient criteria used in HCST. Moving from the 1st to the 4th quartile of Tier 4 and "other" foods/beverages, there was a significant trend towards increased calories (1876 kcal vs. 2290 kcal) and "harmful" nutrients (e.g., sodium), as well as decreased "beneficial" nutrients. Compliance with the HCST was not associated with lower body mass index. Future nutrient profiling systems need to incorporate both "positive" and "negative" nutrients, an overall score and a wider range of nutrient thresholds to better capture food product differences.
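
    The tier assignment described above is essentially a threshold rule on a handful of nutrients. The sketch below shows the shape of such a rule with invented thresholds; the real HCST criteria depend on the food category and reference amounts and are not reproduced here.

      # tier_sketch.py -- schematic four-tier classifier; the thresholds are invented for
      # illustration and are NOT the Health Canada Surveillance Tool criteria.
      THRESHOLDS = {                      # (tier1_max, tier2_max, tier3_max), hypothetical
          "sodium_mg":   (140, 360, 600),
          "total_fat_g": (3.0, 10.0, 20.0),
          "sat_fat_g":   (1.0, 2.0, 4.0),
          "sugar_g":     (5.0, 10.0, 20.0),
      }

      def tier_for(nutrients):
          """Return 1 (healthiest) .. 4 (unhealthiest): the worst tier reached by any nutrient."""
          worst = 1
          for name, (t1, t2, t3) in THRESHOLDS.items():
              value = nutrients[name]
              level = 1 if value <= t1 else 2 if value <= t2 else 3 if value <= t3 else 4
              worst = max(worst, level)
          return worst

      print(tier_for({"sodium_mg": 450, "total_fat_g": 8.0, "sat_fat_g": 3.0, "sugar_g": 12.0}))  # -> 3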

  13. SUSY searches in early CMS data

    International Nuclear Information System (INIS)

    Tricomi, A

    2008-01-01

    In the first year of data taking at the LHC, the CMS experiment expects to collect about 1 fb⁻¹ of data, which makes possible the first searches for new phenomena. All such searches require, however, the measurement of the SM background and a detailed understanding of the detector performance, reconstruction algorithms and triggering. The CMS efforts are hence addressed to designing a realistic analysis plan in preparation for data taking. In this paper, the CMS perspectives and analysis strategies for Supersymmetry (SUSY) discovery with early data are presented.

  14. The CMS Level-1 trigger for LHC Run II

    Science.gov (United States)

    Tapper, A.

    2018-02-01

    During LHC Run II the centre-of-mass energy of pp collisions has increased from 8 TeV up to 13 TeV and the instantaneous luminosity has progressed towards 2 × 10³⁴ cm⁻²s⁻¹. In order to guarantee a successful and ambitious physics programme under these conditions, the CMS trigger system has been upgraded. The upgraded CMS Level-1 trigger is designed to improve performance at high luminosity and a large number of simultaneous inelastic collisions per crossing. The trigger design, implementation and commissioning are summarised, and performance results are described.

  15. 42 CFR 426.517 - CMS' statement regarding new evidence.

    Science.gov (United States)

    2010-10-01

    42 CFR § 426.517 (Public Health; Determinations; Review of an NCD), CMS' statement regarding new evidence: (a) CMS may review any new... experts; and (5) Presented during any hearing. (b) CMS may submit a statement regarding whether the new...

  16. Assessing the nutritional quality of diets of Canadian children and adolescents using the 2014 Health Canada Surveillance Tool Tier System.

    Science.gov (United States)

    Jessri, Mahsa; Nishi, Stephanie K; L'Abbe, Mary R

    2016-05-10

    Health Canada's Surveillance Tool (HCST) Tier System was developed in 2014 with the aim of assessing the adherence of dietary intakes with Eating Well with Canada's Food Guide (EWCFG). The HCST uses a Tier system to categorize all foods into one of four Tiers based on thresholds for total fat, saturated fat, sodium, and sugar, with Tier 4 reflecting the unhealthiest and Tier 1 the healthiest foods. This study presents the first application of the HCST to examine (i) the dietary patterns of Canadian children, and (ii) the applicability and relevance of the HCST as a measure of diet quality. Data were from the nationally representative, cross-sectional Canadian Community Health Survey 2.2. A total of 13,749 participants aged 2-18 years who had complete lifestyle and 24-hour dietary recall data were examined. Dietary patterns of Canadian children and adolescents demonstrated a high prevalence of Tier 4 foods within the sub-groups of processed meats and potatoes. On average, 23-31% of daily calories were derived from "other" foods and beverages not recommended in the EWCFG. However, the majority of food choices fell within the Tier 2 and 3 classifications due to the lenient criteria used by the HCST for classifying foods. Adherence to the recommendations presented in the HCST was associated with closer compliance with nutrient Dietary Reference Intake recommendations; however, it did not relate to reduced obesity as assessed by body mass index (p > 0.05). EWCFG recommendations are currently not being met by most children and adolescents. Future nutrient profiling systems need to incorporate both positive and negative nutrients and an overall score. In addition, a wider range of nutrient thresholds should be considered for the HCST to better capture product differences, prevent categorization of most foods as Tiers 2-3 and provide incentives for product reformulation.

  17. Set of CMS posters in Spanish

    CERN Multimedia

    Lapka, Marzena; Rao, Achintya

    2014-01-01

    14 A0 posters in English to be printed locally or displayed online. Purpose: science fairs, exhibitions, preparatory material for the CMS virtual visits, etc. Themes: CMS detector, sub-detectors, construction, lowering and installation, collaboration and physics. Available in many languages.

  18. Set of CMS posters in Greek

    CERN Multimedia

    Lapka, Marzena; Petrilli, Achille

    2015-01-01

    14 A0 posters in English to be printed locally or displayed online. Purpose: science fairs, exhibitions, preparatory material for the CMS virtual visits, etc. Themes: CMS detector, sub-detectors, construction, lowering and installation, collaboration and physics. Available in many languages.

  19. Set of CMS posters (multiple languages)

    CERN Multimedia

    Lapka, Marzena; Rao, Achintya

    2014-01-01

    14 A0 posters in English to be printed locally or displayed online. Purpose: science fairs, exhibitions, preparatory material for the CMS virtual visits, etc. Themes: CMS detector, sub-detectors, construction, lowering and installation, collaboration and physics. Available in many languages.

  20. Large-scale module production for the CMS silicon strip tracker

    CERN Document Server

    Cattai, A

    2005-01-01

    The Silicon Strip Tracker (SST) for the CMS experiment at the LHC consists of 210 m² of silicon strip detectors grouped into four distinct sub-systems. We present a brief description of the CMS Tracker, the industrialised detector module production methods and the current status of the SST, with reference to some problems encountered at the factories and in the construction centres.