WorldWideScience

Sample records for cms distributed computing

  1. CMS distributed computing workflow experience

    Science.gov (United States)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D.; Prosper, Harrison B.; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao, Junhui; Pin, Arnaud; Schul, Nicolas; De Lentdecker, Gilles; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey; Barge, Derek; Lahiff, Andrew

    2011-12-01

    The vast majority of the CMS computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail, and the operational optimization of resource usage is discussed. In particular, the variation of the different workflows during the data-taking period of 2010, their efficiencies and latencies, and their impact on the delivery of physics results are discussed, and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data-taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures used to optimize the usage of available resources, as well as the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, in the central production operation.

  2. CMS distributed computing workflow experience

    International Nuclear Information System (INIS)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D; Prosper, Harrison B; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao Junhui; Pin, Arnaud; Schul, Nicolas; Lentdecker, Gilles De; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey

    2011-01-01

    The vast majority of the CMS computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail, and the operational optimization of resource usage is discussed. In particular, the variation of the different workflows during the data-taking period of 2010, their efficiencies and latencies, and their impact on the delivery of physics results are discussed, and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data-taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures used to optimize the usage of available resources, as well as the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, in the central production operation.

  3. CMS Distributed Computing Workflow Experience

    CERN Document Server

    Haas, Jeffrey David

    2010-01-01

    The vast majority of the CMS computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail, and the operational optimization of resource usage is discussed. In particular, the variation of the different workflows during the data-taking period of 2010, their efficiencies and latencies, and their impact on the delivery of physics results are discussed, and lessons are drawn from this experience. The simul...

  4. Distributed computing grid experiences in CMS

    CERN Document Server

    Andreeva, Julia; Barrass, T; Bonacorsi, D; Bunn, Julian; Capiluppi, P; Corvo, M; Darmenov, N; De Filippis, N; Donno, F; Donvito, G; Eulisse, G; Fanfani, A; Fanzago, F; Filine, A; Grandi, C; Hernández, J M; Innocente, V; Jan, A; Lacaprara, S; Legrand, I; Metson, S; Newbold, D; Newman, H; Pierro, A; Silvestris, L; Steenberg, C; Stockinger, H; Taylor, Lucas; Thomas, M; Tuura, L; Van Lingen, F; Wildish, Tony

    2005-01-01

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large-scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at a 25 Hz input rate; to distribute the data to several regional centers; and to enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user-friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure ...

  5. CMS Distributed Computing Integration in the LHC sustained operations era

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Bockelman, B; Fisk, I

    2011-01-01

    After many years of preparation the CMS computing system has reached a situation where stability in operations limits the possibility to introduce innovative features. Nevertheless, it is this same need for stability and smooth operations that requires the introduction of features that were not considered strategic in the previous phases. Examples are: adequate authorization to control and prioritize the access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks in the infrastructure; increased automation to reduce the manpower needed for operations; and an effective process for deploying new releases of the software tools in production. We present the work of the CMS Distributed Computing Integration Activity, which is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular we describe the introduction of new middleware features during the last 18 months as well as the requirements for Grid and Cloud software developers for the future.

  6. Improving CMS data transfers among its distributed computing facilities

    International Nuclear Information System (INIS)

    Flix, J; Magini, N; Sartirana, A

    2011-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.
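
    The transfer machinery described above is essentially a bulk, asynchronous file-replication service driven through FTS. As a hedged illustration of that pattern (not the actual PhEDEx or FTS API; the endpoint URL, payload fields and helper names below are assumptions), a minimal Python sketch of grouping file replicas into one bulk job and submitting it to an FTS-style REST endpoint might look like this:

```python
# Illustrative sketch (not the actual PhEDEx/FTS API): submit a bulk
# file-transfer job to an FTS-style REST endpoint over HTTP.
import json
import urllib.request

FTS_ENDPOINT = "https://fts.example.org:8446/jobs"  # hypothetical server URL

def build_transfer_job(file_pairs, priority=3):
    """Group (source, destination) URL pairs into one bulk transfer job."""
    return {
        "files": [{"sources": [src], "destinations": [dst]} for src, dst in file_pairs],
        "params": {"priority": priority, "retry": 3},
    }

def submit_job(job):
    """POST the job description and return the identifier assigned by the server."""
    request = urllib.request.Request(
        FTS_ENDPOINT,
        data=json.dumps(job).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["job_id"]

if __name__ == "__main__":
    pairs = [
        ("srm://tier1.example.org/store/data/file1.root",
         "srm://tier2.example.org/store/data/file1.root"),
    ]
    print(submit_job(build_transfer_job(pairs)))
```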

  7. CMS on the GRID: Toward a fully distributed computing architecture

    International Nuclear Information System (INIS)

    Innocente, Vincenzo

    2003-01-01

    The computing systems required to collect, analyse and store the physics data at the LHC need to be distributed and global in scope. CMS is actively involved in several grid-related projects to develop and deploy a fully distributed computing architecture. We present here recent developments of tools for automating job submission and for serving data to remote analysis stations. Plans for further testing and deployment of a production grid are also described.

  8. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, J; Sartirana, A

    2001-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on thei...

  9. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, Jose

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on the...

  10. CMS computing on grid

    International Nuclear Information System (INIS)

    Guan Wen; Sun Gongxing

    2007-01-01

    CMS has adopted a distributed system of services which implements the CMS application view on top of Grid services. An overview of CMS services will be covered, with emphasis on CMS data management and workload management. (authors)

  11. Distributed Analysis in CMS

    CERN Document Server

    Fanfani, Alessandra; Sanches, Jose Afonso; Andreeva, Julia; Bagliesi, Giusepppe; Bauerdick, Lothar; Belforte, Stefano; Bittencourt Sampaio, Patricia; Bloom, Ken; Blumenfeld, Barry; Bonacorsi, Daniele; Brew, Chris; Calloni, Marco; Cesini, Daniele; Cinquilli, Mattia; Codispoti, Giuseppe; D'Hondt, Jorgen; Dong, Liang; Dongiovanni, Danilo; Donvito, Giacinto; Dykstra, David; Edelmann, Erik; Egeland, Ricky; Elmer, Peter; Eulisse, Giulio; Evans, Dave; Fanzago, Federica; Farina, Fabio; Feichtinger, Derek; Fisk, Ian; Flix, Josep; Grandi, Claudio; Guo, Yuyi; Happonen, Kalle; Hernandez, Jose M; Huang, Chih-Hao; Kang, Kejing; Karavakis, Edward; Kasemann, Matthias; Kavka, Carlos; Khan, Akram; Kim, Bockjoo; Klem, Jukka; Koivumaki, Jesper; Kress, Thomas; Kreuzer, Peter; Kurca, Tibor; Kuznetsov, Valentin; Lacaprara, Stefano; Lassila-Perini, Kati; Letts, James; Linden, Tomas; Lueking, Lee; Maes, Joris; Magini, Nicolo; Maier, Gerhild; McBride, Patricia; Metson, Simon; Miccio, Vincenzo; Padhi, Sanjay; Pi, Haifeng; Riahi, Hassen; Riley, Daniel; Rossman, Paul; Saiz, Pablo; Sartirana, Andrea; Sciaba, Andrea; Sekhri, Vijay; Spiga, Daniele; Tuura, Lassi; Vaandering, Eric; Vanelderen, Lukas; Van Mulders, Petra; Vedaee, Aresh; Villella, Ilaria; Wicklund, Eric; Wildish, Tony; Wissing, Christoph; Wurthwein, Frank

    2009-01-01

    The CMS experiment expects to manage several Pbytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis, to support a wide community with thousands of potential users. This represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of users. An overview of the computing architecture, the software tools and the distributed infrastructure is reported. Summaries of the experience in establishing efficient and scalable operations to get prepared for CMS distributed analysis are presented, followed by the user experience in their current analysis activities.

  12. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at the LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level, based on trigger, and on achieving maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community.

  13. CMS computing support at JINR

    International Nuclear Information System (INIS)

    Golutvin, I.; Koren'kov, V.; Lavrent'ev, A.; Pose, R.; Tikhonenko, E.

    1998-01-01

    Participation of JINR specialists in the CMS experiment at the LHC requires a wide use of computer resources. In the context of JINR activities in the CMS Project, hardware and software resources have been provided for full participation of JINR specialists in the CMS experiment; the JINR computer infrastructure has been brought closer to the CERN one. JINR also provides informational support for the CMS experiment (web-server http://sunct2.jinr.dubna.su). Plans for further CMS computing support at JINR are presented.

  14. CMS computing model evolution

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M

    2014-01-01

    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined with the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.

  15. Distributed Grid Experiences in CMS DC04

    CERN Document Server

    Fanfani, A; Grandi, C; Legrand, I; Suresh, S; Campana, S; Donno, F; Jank, W; Sinanis, N; Sciabà, A; García-Abia, P; Hernández, J; Ernst, M; Anzar, A; Fisk, I; Giacchetti, L; Graham, G; Heavey, A; Kaiser, J; Kuropatine, N; Perelmutov, T; Pordes, R; Ratnikova, N; Weigand, J; Wu, Y; Colling, D J; MacEvoy, B; Tallini, H; Wakefield, L; De Filippis, N; Donvito, G; Maggi, G; Bonacorsi, D; Dell'Agnello, L; Martelli, B; Biasotto, M; Fantinel, S; Corvo, M; Fanzago, F; Mazzucato, M; Tuura, L; Martin, T; Letts, J; Bockjoo, K; Prescott, C; Rodríguez, J; Zahn, A; Bradley, D

    2005-01-01

    In March-April 2004 the CMS experiment undertook a Data Challenge (DC04). During the previous 8 months CMS undertook a large simulated event production. The goal of the challenge was to run CMS reconstruction for a sustained period at a 25 Hz input rate, distribute the data to the CMS Tier-1 centers and analyze them at remote sites. Grid environments developed in Europe by the LHC Computing Grid (LCG) and in the US with Grid2003 were utilized to complete the aspects of the challenge. A description of the experiences, successes and lessons learned from both Grid environments is presented.

  16. CMS computing upgrade and evolution

    CERN Document Server

    Hernandez Calama, Jose

    2013-01-01

    The distributed Grid computing infrastructure has been instrumental in the successful exploitation of the LHC data leading to the discovery of the Higgs boson. The computing system will need to face new challenges from 2015 on when LHC restarts with an anticipated higher detector output rate and event complexity, but with only a limited increase in the computing resources. A more efficient use of the available resources will be mandatory. CMS is improving the data storage, distribution and access as well as the processing efficiency. Remote access to the data through the WAN, dynamic data replication and deletion based on the data access patterns, and separation of disk and tape storage are some of the areas being actively developed. Multi-core processing and scheduling is being pursued in order to make a better use of the multi-core nodes available at the sites. In addition, CMS is exploring new computing techniques, such as Cloud Computing, to get access to opportunistic resources or as a means of using wit...

  17. Towards higher reliability of CMS computing facilities

    International Nuclear Information System (INIS)

    Bagliesi, G; Bloom, K; Brew, C; Flix, J; Kreuzer, P; Sciabà, A

    2012-01-01

    The CMS experiment has adopted a computing system where resources are distributed worldwide in more than 50 sites. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and its capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, measuring the reliability of sites when running CMS activities, and providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with the description of monitoring tools and their inclusion into the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact in improving the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites on which to conduct workflows, in order to maximize workflow efficiencies. The performance of the sites against these tests during the first years of LHC running is also reviewed.

  18. Power distribution studies for CMS forward tracker

    International Nuclear Information System (INIS)

    Todri, A.; Turqueti, M.; Rivera, R.; Kwan, S.

    2009-01-01

    The Electronic Systems Engineering Department of the Computing Division at the Fermi National Accelerator Laboratory is carrying out R&D investigations for the upgrade of the power distribution system of the Compact Muon Solenoid (CMS) Pixel Tracker at the Large Hadron Collider (LHC). Among the goals of this effort is that of analyzing the feasibility of alternative powering schemes for the forward tracker, including DC-to-DC voltage conversion techniques using commercially available and custom switching regulator circuits. Tests of these approaches are performed using the PSI46 pixel readout chip currently in use in the CMS Tracker. Performance measures of the detector electronics include pixel noise and threshold dispersion results. Issues related to susceptibility to switching noise are studied and presented. In this paper, we describe the current power distribution network of the CMS Tracker, study the implications of the proposed upgrade with a DC-DC converter powering scheme, and perform a noise susceptibility analysis.

  19. Enabling opportunistic resources for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Hufnagel, Dick [Fermilab

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.
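
    As a rough illustration of how an opportunistic resource can be plugged in behind the regular batch interface, the sketch below generates an HTCondor submit description that routes a payload through a Bosco-style gateway, with a wrapper script that is assumed to bootstrap the CMS environment first. The gateway address, wrapper name, proxy path and job counts are hypothetical, not the production glideinWMS configuration.

```python
# Illustrative sketch: generate an HTCondor submit description that routes a
# payload to an opportunistic (non-CMS) cluster through a Bosco-style gateway.
# The gateway address, wrapper script and proxy path are hypothetical.
SUBMIT_TEMPLATE = """universe      = grid
grid_resource = batch slurm cmsprod@gateway.example.org
executable    = cms_env_wrapper.sh
arguments     = {payload} {payload_args}
transfer_input_files = {payload}
output        = job_$(Cluster).$(Process).out
error         = job_$(Cluster).$(Process).err
log           = job_$(Cluster).log
x509userproxy = /tmp/x509up_cmsprod
queue {njobs}
"""

def write_submit_file(payload, payload_args="", njobs=1, path="opportunistic.sub"):
    """Write a submit file; cms_env_wrapper.sh is assumed to bootstrap the CMS
    software environment (e.g. via parrot/CVMFS) before running the payload."""
    with open(path, "w") as handle:
        handle.write(SUBMIT_TEMPLATE.format(payload=payload,
                                            payload_args=payload_args,
                                            njobs=njobs))
    return path

if __name__ == "__main__":
    print(write_submit_file("cmsRun_step.sh", "config.py", njobs=10))
```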

  20. German contributions to the CMS computing infrastructure

    International Nuclear Information System (INIS)

    Scheurer, A

    2010-01-01

    The CMS computing model anticipates various hierarchically linked tier centres to counter the challenges provided by the enormous amounts of data which will be collected by the CMS detector at the Large Hadron Collider, LHC, at CERN. During the past years, various computing exercises were performed to test the readiness of the computing infrastructure, the Grid middleware and the experiment's software for the startup of the LHC which took place in September 2008. In Germany, several tier sites are set up to allow for an efficient and reliable way to simulate possible physics processes as well as to reprocess, analyse and interpret the numerous stored collision events of the experiment. It will be shown that the German computing sites played an important role during the experiment's preparation phase and during data-taking of CMS and, therefore, scientific groups in Germany will be ready to compete for discoveries in this new era of particle physics. This presentation focuses on the German Tier-1 centre GridKa, located at Forschungszentrum Karlsruhe, the German CMS Tier-2 federation DESY/RWTH with installations at the University of Aachen and the research centre DESY. In addition, various local computing resources in Aachen, Hamburg and Karlsruhe are briefly introduced as well. It will be shown that an excellent cooperation between the different German institutions and physicists led to well established computing sites which cover all parts of the CMS computing model. Therefore, the following topics are discussed and the achieved goals and the gained knowledge are depicted: data management and distribution among the different tier sites, Grid-based Monte Carlo production at the Tier-2 as well as Grid-based and locally submitted inhomogeneous user analyses at the Tier-3s. Another important task is to ensure a proper and reliable operation 24 hours a day, especially during the time of data-taking. For this purpose, the meta-monitoring tool 'HappyFace', which was

  1. Towards a global monitoring system for CMS computing operations

    CERN Multimedia

    CERN. Geneva; Bauerdick, Lothar A.T.

    2012-01-01

    The operation of the CMS computing system requires a complex monitoring system to cover all its aspects: central services, databases, the distributed computing infrastructure, production and analysis workflows, the global overview of the CMS computing activities and the related historical information. Several tools are available to provide this information, developed both inside and outside of the collaboration and often used in common with other experiments. Despite the fact that the current monitoring allowed CMS to successfully perform its computing operations, an evolution of the system is clearly required, to adapt to the recent changes in the data and workload management tools and models and to address some shortcomings that make its usage less than optimal. Therefore, a recent and ongoing coordinated effort was started in CMS, aiming at improving the entire monitoring system by identifying its weaknesses and the new requirements from the stakeholders, rationalise and streamline existing components and ...

  2. Performance studies and improvements of CMS distributed data transfers

    International Nuclear Information System (INIS)

    Bonacorsi, D; Flix, J; Kaselis, R; Magini, N; Letts, J; Sartirana, A

    2012-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered distributed infrastructures. For data distribution, the CMS experiment relies on the File Transfer Service (FTS), a low-level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centers and used by all the computing sites in CMS, subject to established CMS and site setup policies, including all the Virtual Organizations making use of the Grid resources at the site, and properly dimensioned to satisfy all of their requirements. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer routes, and of the sharing and interference with other VOs using the same FTS transfer managers. This contribution deals with a complete revision of all FTS servers used by CMS, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels, as well as performance studies for all kinds of transfer routes, including measurements of the overheads introduced by SRM servers and storage systems, FTS server misconfigurations and identification of congested channels, historical transfer throughputs per stream, file-latency studies, and more. This information is retrieved directly from the FTS servers through the FTS Monitor webpages and conveniently archived for further analysis. The project provides an interface to all these values, to ease the analysis of the data.
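
    A minimal sketch of the kind of bookkeeping such a study relies on is shown below, assuming simple per-transfer records with source, destination, transferred bytes and duration; the field names and the throughput threshold are illustrative, not the FTS Monitor schema.

```python
# Illustrative sketch: aggregate transfer records (as might be harvested from
# an FTS monitor) into per-link average throughput and flag slow links.
# Record fields and the threshold value are assumptions for the example.
from collections import defaultdict

def link_throughput(records):
    """records: iterable of dicts with 'source', 'destination',
    'bytes' and 'seconds' fields. Returns {(src, dst): MB/s}."""
    totals = defaultdict(lambda: [0.0, 0.0])  # (bytes, seconds) per link
    for rec in records:
        key = (rec["source"], rec["destination"])
        totals[key][0] += rec["bytes"]
        totals[key][1] += rec["seconds"]
    return {link: (b / s) / 1e6 for link, (b, s) in totals.items() if s > 0}

def congested_links(records, min_mb_per_s=20.0):
    """Return links whose average throughput falls below the threshold."""
    return [link for link, rate in link_throughput(records).items()
            if rate < min_mb_per_s]

if __name__ == "__main__":
    sample = [
        {"source": "T1_ES_PIC", "destination": "T2_ES_IFCA", "bytes": 5e10, "seconds": 1800},
        {"source": "T1_ES_PIC", "destination": "T2_FR_GRIF", "bytes": 2e9, "seconds": 3600},
    ]
    print(congested_links(sample))
```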

  3. CMS results in the Combined Computing Readiness Challenge CCRC'08

    International Nuclear Information System (INIS)

    Bonacorsi, D.; Bauerdick, L.

    2009-01-01

    During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the computing infrastructure for LHC data taking. Another set of major CMS tests, the Computing, Software and Analysis challenge (CSA'08), as well as CMS cosmic runs, were running at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data-taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model, with very comprehensive tests. During May 2008, CMS moved more than 3.6 Petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centers: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at a sufficient rate, regionally as well as inter-regionally, achieving all goals on about 90% of the >200 links. Simultaneously, CMS also ran a large Tier-2 analysis exercise, where realistic analysis jobs were submitted to a large set of Tier-2 sites by a large number of people to produce a chaotic workload across the systems, with more than 400 analysis users in May. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks up to 200k jobs/day. The achieved results in CCRC'08 - focussing on the distributed

  4. Exploiting Analytics Techniques in CMS Computing Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Bonacorsi, D. [Bologna U.; Kuznetsov, V. [Cornell U.; Magini, N. [Fermilab; Repečka, A. [Vilnius U.; Vaandering, E. [Fermilab

    2017-11-22

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining of all this information has rarely been done, but is of crucial importance for a better understanding of how CMS achieved successful operations, and for reaching an adequate and adaptive modelling of the CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications, profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.
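
    The following is a hedged, self-contained sketch of the style of aggregation described above, counting analysis requests per dataset with a map/reduce pattern in plain Python; a production job would run as MapReduce on the Hadoop cluster, and the record layout shown is an assumption.

```python
# Illustrative sketch of the kind of aggregation run over the monitoring
# metadata: a map/reduce-style count of analysis accesses per dataset.
# In production this would run as a MapReduce job on the Hadoop cluster;
# the record layout here is an assumption for the example.
from functools import reduce
from collections import Counter

def mapper(record):
    """Emit (dataset, 1) for each access record."""
    return (record["dataset"], 1)

def reducer(acc, pair):
    """Sum counts per dataset key."""
    dataset, count = pair
    acc[dataset] += count
    return acc

def most_requested(records, top_n=3):
    counts = reduce(reducer, map(mapper, records), Counter())
    return counts.most_common(top_n)

if __name__ == "__main__":
    accesses = [
        {"dataset": "/SingleMu/Run2012A/AOD", "site": "T2_US_Nebraska"},
        {"dataset": "/SingleMu/Run2012A/AOD", "site": "T2_DE_DESY"},
        {"dataset": "/DoubleElectron/Run2012B/AOD", "site": "T2_IT_Pisa"},
    ]
    print(most_requested(accesses))
```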

  5. CMS Computing Software and Analysis Challenge 2006

    Energy Technology Data Exchange (ETDEWEB)

    De Filippis, N. [Dipartimento interateneo di Fisica M. Merlin and INFN Bari, Via Amendola 173, 70126 Bari (Italy)

    2007-10-15

    The CMS (Compact Muon Solenoid) collaboration is making a big effort to test the workflow and the dataflow associated with the data handling model. With this purpose the Computing, Software and Analysis Challenge 2006, namely CSA06, started on the 15th of September. It was a 50 million event exercise that included all the steps of the analysis chain, like the prompt reconstruction, the data streaming, iterative executions of calibration and alignment, the data distribution to regional sites, up to the end-user analysis. Grid tools provided by the LCG project were also exercised to gain access to the data and the resources, by providing a user-friendly interface to the physicists submitting the production and the analysis jobs. An overview of the status and results of the CSA06 is presented in this work.

  6. CMS Computing Software and Analysis Challenge 2006

    International Nuclear Information System (INIS)

    De Filippis, N.

    2007-01-01

    The CMS (Compact Muon Solenoid) collaboration is making a big effort to test the workflow and the dataflow associated with the data handling model. With this purpose the Computing, Software and Analysis Challenge 2006, namely CSA06, started on the 15th of September. It was a 50 million event exercise that included all the steps of the analysis chain, like the prompt reconstruction, the data streaming, iterative executions of calibration and alignment, the data distribution to regional sites, up to the end-user analysis. Grid tools provided by the LCG project were also exercised to gain access to the data and the resources, by providing a user-friendly interface to the physicists submitting the production and the analysis jobs. An overview of the status and results of the CSA06 is presented in this work.

  7. Towards a Global Monitoring System for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L. A.T. [Fermilab; Sciaba, Andrea [CERN

    2012-01-01

    The operation of the CMS computing system requires a complex monitoring system to cover all its aspects: central services, databases, the distributed computing infrastructure, production and analysis workflows, the global overview of the CMS computing activities and the related historical information. Several tools are available to provide this information, developed both inside and outside of the collaboration and often used in common with other experiments. Despite the fact that the current monitoring allowed CMS to successfully perform its computing operations, an evolution of the system is clearly required, to adapt to the recent changes in the data and workload management tools and models and to address some shortcomings that make its usage less than optimal. Therefore, a recent and ongoing coordinated effort was started in CMS, aiming at improving the entire monitoring system by identifying its weaknesses and the new requirements from the stakeholders, rationalising and streamlining existing components, and driving future software development. This contribution gives a complete overview of the CMS monitoring system and a description of all the recent activities that have been started with the goal of providing a more integrated, modern and functional global monitoring system for computing operations.

  8. Experience building and operating the CMS Tier-1 computing centres

    Science.gov (United States)

    Albert, M.; Bakken, J.; Bonacorsi, D.; Brew, C.; Charlot, C.; Huang, Chih-Hao; Colling, D.; Dumitrescu, C.; Fagan, D.; Fassi, F.; Fisk, I.; Flix, J.; Giacchetti, L.; Gomez-Ceballos, G.; Gowdy, S.; Grandi, C.; Gutsche, O.; Hahn, K.; Holzman, B.; Jackson, J.; Kreuzer, P.; Kuo, C. M.; Mason, D.; Pukhaeva, N.; Qin, G.; Quast, G.; Rossman, P.; Sartirana, A.; Scheurer, A.; Schott, G.; Shih, J.; Tader, P.; Thompson, R.; Tiradani, A.; Trunov, A.

    2010-04-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and the primary copy of the simulated data, for data serving capacity to Tier-2 centres for analysis, and for the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s, including the stable operation of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high-performance data serving. We will also present the operations experience of utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  9. Experience building and operating the CMS Tier-1 computing centres

    International Nuclear Information System (INIS)

    Albert, M; Bakken, J; Huang, Chih-Hao; Dumitrescu, C; Fagan, D; Fisk, I; Giacchetti, L; Gutsche, O; Holzman, B; Bonacorsi, D; Grandi, C; Brew, C; Jackson, J; Charlot, C; Colling, D; Fassi, F; Flix, J; Gomez-Ceballos, G; Hahn, K; Gowdy, S

    2010-01-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and the primary copy of the simulated data, for data serving capacity to Tier-2 centres for analysis, and for the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s, including the stable operation of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high-performance data serving. We will also present the operations experience of utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  10. CMS software and computing for LHC Run 2

    CERN Document Server

    INSPIRE-00067576

    2016-11-09

    The CMS offline software and computing system has successfully met the challenge of LHC Run 2. In this presentation, we will discuss how the entire system was improved in anticipation of the increased trigger output rate, increased rate of pileup interactions and the evolution of computing technology. The primary goals behind these changes were to increase the flexibility of computing facilities wherever possible, to increase our operational efficiency, and to decrease the computing resources needed to accomplish the primary offline computing workflows. These changes have resulted in a new approach to distributed computing in CMS for Run 2 and for the future as the LHC luminosity should continue to increase. We will discuss changes and plans to our data federation, which was one of the key changes towards a more flexible computing model for Run 2. Our software framework and algorithms also underwent significant changes. We will summarize our experience with a new multi-threaded framework as deployed on ou...

  11. Alert Messaging in the CMS Distributed Workflow System

    International Nuclear Information System (INIS)

    Maxa, Zdenek

    2012-01-01

    WMAgent is the core component of the CMS workload management system. One of the features of this job managing platform is a configurable messaging system aimed at generating, distributing and processing alerts: short messages describing a given piece of alert-worthy information or a pathological condition. Apart from the framework's sub-components running within the WMAgent instances, there is a stand-alone application collecting alerts from all WMAgent instances running across the CMS distributed computing environment. The alert framework has a versatile design that also allows for receiving alert messages from other CMS production applications, such as the PhEDEx data transfer manager. We present implementation details of the system, including its Python implementation using ZeroMQ and CouchDB message storage, future plans, as well as operational experience. Inter-operation with monitoring platforms such as Dashboard or Lemon is described.
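
    A minimal sketch of the push-style alert flow using pyzmq is shown below; the endpoint address and message fields are assumptions for illustration and do not reproduce the actual WMAgent alert schema (in the real system the collector persists alerts, e.g. to CouchDB).

```python
# Illustrative sketch of ZeroMQ-based alert messaging in the spirit of the
# WMAgent alert framework; the endpoint and message schema are assumptions,
# not the actual WMAgent formats. Requires pyzmq.
import time
import zmq

ALERT_ENDPOINT = "tcp://127.0.0.1:5557"  # hypothetical collector address

def send_alert(severity, component, message):
    """Push a short alert document to the central collector."""
    context = zmq.Context.instance()
    sender = context.socket(zmq.PUSH)
    sender.connect(ALERT_ENDPOINT)
    sender.send_json({
        "timestamp": time.time(),
        "severity": severity,      # e.g. "soft" or "critical"
        "component": component,    # e.g. "JobSubmitter"
        "message": message,
    })
    sender.close()

def collect_alerts(limit=1):
    """Stand-alone collector role: receive alerts and hand them to storage."""
    context = zmq.Context.instance()
    receiver = context.socket(zmq.PULL)
    receiver.bind(ALERT_ENDPOINT)
    for _ in range(limit):
        alert = receiver.recv_json()
        print("received alert:", alert)   # a real collector would persist this
    receiver.close()
```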

  12. Computing challenges of the CMS experiment

    International Nuclear Information System (INIS)

    Krammer, N.; Liko, D.

    2017-01-01

    The success of the LHC experiments is due to the magnificent performance of the detector systems and the excellent operation of the computing systems. The CMS offline software and computing system is successfully fulfilling the LHC Run 2 requirements. For the increased data rate of future LHC operation, together with high pileup interactions, improvements in the usage of the current computing facilities and new technologies have become necessary. Especially for the challenge of the future HL-LHC, a more flexible and sophisticated computing model is needed. In this presentation, I will discuss the current computing system used in LHC Run 2 and future computing facilities for the HL-LHC runs using flexible computing technologies like commercial and academic computing clouds. The cloud resources are highly virtualized and can be deployed for a variety of computing tasks, providing the capacity for the increasing needs of large-scale scientific computing.

  13. CMS Monte Carlo production in the WLCG computing grid

    International Nuclear Information System (INIS)

    Hernandez, J M; Kreuzer, P; Hof, C; Khomitch, A; Mohapatra, A; Filippis, N D; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Weirdt, S D; Maes, J; Mulders, P v; Villella, I; Wakefield, S; Guan, W; Fanfani, A; Evans, D; Flossdorf, A

    2008-01-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of production scale by about an order of magnitude, capable of running of the order of ten thousand jobs in parallel and yielding more than two million events per day.

  14. CMS distributed data analysis with CRAB3

    Science.gov (United States)

    Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; Riahi, H.; Tanasijczuk, A. J.; Yusli, M. N. B.; Wolf, M.; Woodard, A. E.; Vaandering, E.

    2015-12-01

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. The new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.
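
    The client/server split can be illustrated with a hedged sketch of a lightweight REST client that POSTs a task description and polls its status; the server URL and field names below are invented for the example and are not the real CRAB3 REST schema.

```python
# Illustrative sketch of the client/server interaction pattern: a lightweight
# client POSTs a task description to a REST interface and later polls its
# status. URL and field names are assumptions, not the real CRAB3 schema.
import json
import urllib.request

SERVER = "https://crab-server.example.org/task"  # hypothetical REST endpoint

def submit_task(dataset, pset, site_whitelist=None):
    """Send a task description and return the task name assigned by the server."""
    task = {
        "inputDataset": dataset,
        "psetName": pset,
        "splitting": "FileBased",
        "unitsPerJob": 10,
        "siteWhitelist": site_whitelist or [],
    }
    req = urllib.request.Request(SERVER, data=json.dumps(task).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["taskName"]

def task_status(task_name):
    """Poll the server for the current state of a submitted task."""
    with urllib.request.urlopen(f"{SERVER}/{task_name}/status") as resp:
        return json.load(resp)

if __name__ == "__main__":
    name = submit_task("/SingleMu/Run2012A-22Jan2013-v1/AOD", "analysis_cfg.py")
    print(task_status(name))
```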

  15. CMS Computing Operations During Run1

    CERN Document Server

    Gutsche, Oliver

    2013-01-01

    During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this presentation we will discuss the operational experience from the first run. We will present the workflows and data flows that were executed, we will discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. In this presentation we will also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.

  16. CMS computing operations during run 1

    CERN Document Server

    Adelman, J; Artieda, J; Bagliesi, G; Ballestero, D; Bansal, S; Bauerdick, L; Behrenhof, W; Belforte, S; Bloom, K; Blumenfeld, B; Blyweert, S; Bonacorsi, D; Brew, C; Contreras, L; Cristofori, A; Cury, S; da Silva Gomes, D; Dolores Saiz Santos, M; Dost, J; Dykstra, D; Fajardo Hernandez, E; Fanzago, F; Fisk, I; Flix, J; Georges, A; Giffels, M; Gomez-Ceballos, G; Gowdy, S; Gutsche, O; Holzman, B; Janssen, X; Kaselis, R; Kcira, D; Kim, B; Klein, D; Klute, M; Kress, T; Kreuzer, P; Lahiff, A; Larson, K; Letts, J; Levin, A; Linacre, J; Linares, J; Liu, S; Luyckx, S; Maes, M; Magini, N; Malta, A; Marra Da Silva, J; McCartin, J; McCrea, A; Mohapatra, A; Molina, J; Mortensen, T; Padhi, S; Paus, C; Piperov, S; Ralph; Sartirana, A; Sciaba, A; Sfiligoi, I; Spinoso, V; Tadel, M; Traldi, S; Wissing, C; Wuerthwein, F; Yang, M; Zielinski, M; Zvada, M

    2014-01-01

    During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.

  17. The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC

    Energy Technology Data Exchange (ETDEWEB)

    Kuznetsov, Valentin [Cornell U.; Fischer, Nils Leif [Heidelberg U.; Guo, Yuyi [Fermilab

    2018-03-19

    The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as the central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop ecosystem to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
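
    As a hedged illustration of the aggregation step, the sketch below summarises job-report-like documents into per-site job counts, success rates and average wall time; the document fields are assumptions, not the actual framework job report layout.

```python
# Illustrative sketch of the kind of aggregation the archive pipeline performs:
# summarise framework job report documents into per-site success rates.
# The document fields used here are assumptions for the example.
from collections import defaultdict

def summarise_by_site(job_reports):
    """job_reports: iterable of dicts with 'site', 'exitCode' and
    'wallClockTime' fields. Returns {site: summary dict}."""
    stats = defaultdict(lambda: {"jobs": 0, "failures": 0, "wall_time": 0.0})
    for doc in job_reports:
        entry = stats[doc["site"]]
        entry["jobs"] += 1
        entry["failures"] += int(doc["exitCode"] != 0)
        entry["wall_time"] += doc["wallClockTime"]
    return {
        site: {
            "jobs": s["jobs"],
            "success_rate": 1.0 - s["failures"] / s["jobs"],
            "avg_wall_hours": s["wall_time"] / s["jobs"] / 3600.0,
        }
        for site, s in stats.items()
    }

if __name__ == "__main__":
    reports = [
        {"site": "T1_US_FNAL", "exitCode": 0, "wallClockTime": 7200},
        {"site": "T1_US_FNAL", "exitCode": 8021, "wallClockTime": 300},
        {"site": "T2_CH_CERN", "exitCode": 0, "wallClockTime": 5400},
    ]
    print(summarise_by_site(reports))
```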

  18. Monitoring data transfer latency in CMS computing operations

    CERN Document Server

    Bonacorsi, D; Magini, N; Sartirana, A; Taze, M; Wildish, T

    2015-01-01

    During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention. For this reason, in 2012 the CMS transfer management system, PhEDEx, was instrumented with a monitoring system to measure file transfer latencies, and to predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies while the transfer is still in progress, and monitor the long-term performance of the transfer infrastructure to plan the data placement strategy. Based on the data collected for one year with the latency monitoring system, we present a study on the different fact...
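
    A simplified sketch of the latency-monitoring logic follows, assuming per-file records with request and completion timestamps; the record layout, thresholds and the linear extrapolation are illustrative simplifications of what PhEDEx actually records.

```python
# Illustrative sketch of latency monitoring logic: from per-file transfer
# states of one dataset subscription, flag files that look "stuck" and give a
# naive estimate of completion time from the transfer rate so far. The record
# layout and thresholds are assumptions, not the PhEDEx schema.
import time

def stuck_files(file_states, now=None, max_age_hours=24):
    """Files still pending long after the subscription was created."""
    now = now or time.time()
    return [f["name"] for f in file_states
            if f["done_at"] is None
            and (now - f["requested_at"]) > max_age_hours * 3600]

def estimated_completion(file_states, now=None):
    """Extrapolate from the rate of files completed so far."""
    now = now or time.time()
    done = [f for f in file_states if f["done_at"] is not None]
    pending = len(file_states) - len(done)
    if not done or pending == 0:
        return None
    elapsed = now - min(f["requested_at"] for f in file_states)
    rate = len(done) / elapsed           # files per second so far
    return now + pending / rate          # naive linear extrapolation

if __name__ == "__main__":
    t0 = time.time() - 48 * 3600
    states = [{"name": f"file{i}.root", "requested_at": t0,
               "done_at": t0 + i * 600 if i < 90 else None} for i in range(100)]
    print(len(stuck_files(states)), "stuck files;",
          "ETA:", estimated_completion(states))
```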

  19. CMS Connect

    Science.gov (United States)

    Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled in batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores and another 50K to 100K CPU cores from opportunistic resources for these kinds of tasks, and even though production and event processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting final-stage condor-like analysis jobs, familiar to Tier-3 or local Computing Facility users, into these distributed resources in an integrated (with other CMS services) and friendly way. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS Physics community, focusing on this kind of condor analysis jobs. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideInWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the specific developments and deployment of CMS Connect beyond the CI-Connect platform in order to integrate the service with CMS-specific needs, including specific Site submission, accounting of jobs and automated reporting to standard CMS monitoring resources in an effortless way for their users.

  20. CMS Software and Computing Ready for Run 2

    CERN Document Server

    Bloom, Kenneth

    2015-01-01

    In Run 1 of the Large Hadron Collider, software and computing was a strategic strength of the Compact Muon Solenoid experiment. The timely processing of data and simulation samples and the excellent performance of the reconstruction algorithms played an important role in the preparation of the full suite of searches used for the observation of the Higgs boson in 2012. In Run 2, the LHC will run at higher intensities and CMS will record data at a higher trigger rate. These new running conditions will provide new challenges for the software and computing systems. Over the two years of Long Shutdown 1, CMS has built upon the successes of Run 1 to improve the software and computing to meet these challenges. In this presentation we will describe the new features in software and computing that will once again put CMS in a position of physics leadership.

  1. 76 FR 14669 - Privacy Act of 1974; CMS Computer Match No. 2011-02; HHS Computer Match No. 1007

    Science.gov (United States)

    2011-03-17

    ... 1974; CMS Computer Match No. 2011-02; HHS Computer Match No. 1007 AGENCY: Department of Health and Human Services (HHS), Centers for Medicare & Medicaid Services (CMS). ACTION: Notice of computer... notice establishes a computer matching agreement between CMS and the Department of Defense (DoD). We have...

  2. 78 FR 50419 - Privacy Act of 1974; CMS Computer Match No. 2013-10; HHS Computer Match No. 1310

    Science.gov (United States)

    2013-08-19

    ... 1974; CMS Computer Match No. 2013-10; HHS Computer Match No. 1310 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION: Notice of Computer Matching... notice announces the establishment of a CMP that CMS plans to conduct with the Department of Homeland...

  3. 78 FR 39730 - Privacy Act of 1974; CMS Computer Match No. 2013-11; HHS Computer Match No. 1302

    Science.gov (United States)

    2013-07-02

    ... 1974; CMS Computer Match No. 2013-11; HHS Computer Match No. 1302 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION: Notice of Computer Matching... notice announces the establishment of a CMP that CMS intends to conduct with State-based Administering...

  4. 78 FR 73195 - Privacy Act of 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching...

    Science.gov (United States)

    2013-12-05

    ... 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching Program Match No. 1312 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS... Privacy Act of 1974 (5 U.S.C. 552a), as amended, this notice announces the renewal of a CMP that CMS plans...

  5. 75 FR 30839 - Privacy Act of 1974; CMS Computer Match No. 2010-03, HHS Computer Match No. 1003, SSA Computer...

    Science.gov (United States)

    2010-06-02

    ... 1974; CMS Computer Match No. 2010-03, HHS Computer Match No. 1003, SSA Computer Match No. 1048, IRS... Services (CMS). ACTION: Notice of renewal of an existing computer matching program (CMP) that has an...'' section below for comment period. DATES: Effective Dates: CMS filed a report of the Computer Matching...

  6. Charge Distribution Dependency on Gap Thickness of CMS Endcap RPC

    CERN Document Server

    Park, Sung K.; Lee, Kyongsei

    2016-01-01

    We report a systematic study of the dependence of the charge distribution of the CMS Resistive Plate Chamber (RPC) on gap thickness. Prototypes of double-gap RPCs with six different gap thicknesses, ranging from 1.0 to 2.0 mm in 0.2-mm steps, have been built with 2-mm-thick phenolic high-pressure-laminated plates. The efficiencies of the six gaps are measured as a function of the effective high voltage. We find that the electric field strength in the gap decreases as the gap thickness increases. The charge distributions of the six gaps are measured, and a space-charge effect is seen in the charge distributions at the higher voltages. A logistic function is used to fit the charge distribution data. Smaller charges can be produced within a smaller gas gap, but the digitization threshold must also be lowered to make use of these smaller charges.
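
    As an aside on the fitting procedure mentioned above, a logistic (sigmoid) curve of this kind can be fitted with standard tools. The sketch below uses scipy with entirely synthetic numbers; the parameterization and the data points are illustrative only and do not reproduce the paper's actual data.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, plateau, slope, x_half):
            """Logistic curve: rises from 0 to `plateau`, half-way point at `x_half`."""
            return plateau / (1.0 + np.exp(-slope * (x - x_half)))

        # Synthetic example data (illustrative only): effective HV in kV vs. efficiency.
        hv  = np.array([8.8, 9.0, 9.2, 9.4, 9.6, 9.8, 10.0])
        eff = np.array([0.05, 0.15, 0.45, 0.80, 0.93, 0.96, 0.97])

        popt, pcov = curve_fit(logistic, hv, eff, p0=[1.0, 10.0, 9.3])
        plateau, slope, hv50 = popt
        print(f"plateau={plateau:.3f}, slope={slope:.2f}, 50% point={hv50:.2f} kV")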

  7. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    International Nuclear Information System (INIS)

    Evans, D; Fisk, I; Holzman, B; Pordes, R; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly 'on-demand', as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and we conclude that it is most cost effective to purchase dedicated resources for the 'base-line' needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during periods when usage spikes.
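
    The bursting pattern described above amounts to launching extra worker nodes when the dedicated farm is saturated and releasing them afterwards. A minimal, hedged sketch with the boto3 library is shown below; the AMI ID, instance type, and tag are placeholders and do not correspond to any real CMS worker-node image or to the setup used in this study.

        import boto3

        ec2 = boto3.resource("ec2", region_name="us-east-1")

        def burst_workers(n):
            """Launch n extra worker-node instances (all identifiers are placeholders)."""
            return ec2.create_instances(
                ImageId="ami-0123456789abcdef0",   # hypothetical worker-node image
                InstanceType="m5.xlarge",          # hypothetical instance type
                MinCount=n,
                MaxCount=n,
                TagSpecifications=[{
                    "ResourceType": "instance",
                    "Tags": [{"Key": "role", "Value": "cms-burst-worker"}],
                }],
            )

        def release_workers():
            """Terminate all instances tagged as burst workers."""
            workers = ec2.instances.filter(
                Filters=[{"Name": "tag:role", "Values": ["cms-burst-worker"]}])
            for inst in workers:
                inst.terminate()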

  8. Using Amazon's Elastic Compute Cloud to scale CMS' compute hardware dynamically.

    CERN Document Server

    Melo, Andrew Malone

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud-computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly on-demand as limits and caps on usage are imposed. Our trial workflows allow us t...

  9. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  10. Charge distribution dependency on gap thickness of CMS endcap RPC

    CERN Document Server

    Park, Sung Keun

    2016-01-01

    We present a systematic study of the dependence of the charge distribution of the CMS Resistive Plate Chamber (RPC) on gap thickness. Prototypes of double-gap RPCs with five different gap thicknesses, from 1.0 mm to 1.8 mm in 0.2-mm steps, have been built with 2-mm-thick phenolic high-pressure-laminated (HPL) plates. The charges of cosmic-muon signals induced on the detector strips are measured as a function of time using two four-channel 400-MHz flash ADCs. In addition, the arrival time of the muons and the strip cluster sizes are measured by digitizing the signal using 32-channel voltage-mode front-end electronics and a 400-MHz 64-channel multi-hit TDC. The gain and the input impedance of the front-end electronics were 200 mV/mV and 20 Ohm, respectively.

  11. Distributed multiscale computing

    NARCIS (Netherlands)

    Borgdorff, J.

    2014-01-01

    Multiscale models combine knowledge, data, and hypotheses from different scales. Simulating a multiscale model often requires extensive computation. This thesis evaluates distributing these computations, an approach termed distributed multiscale computing (DMC). First, the process of multiscale

  12. CMS offline web tools

    International Nuclear Information System (INIS)

    Metson, S; Newbold, D; Belforte, S; Kavka, C; Bockelman, B; Dziedziniewicz, K; Egeland, R; Elmer, P; Eulisse, G; Tuura, L; Evans, D; Fanfani, A; Feichtinger, D; Kuznetsov, V; Lingen, F van; Wakefield, S

    2008-01-01

    We describe a relatively new effort within CMS to converge on a set of web-based tools, using state-of-the-art industry techniques, to engage with the CMS offline computing system. CMS collaborators require tools to monitor various components of the computing system and to interact with the system itself. The current state of the various CMS web tools is described alongside currently planned developments. The CMS collaboration comprises nearly 3000 people from all over the world. As well as its collaborators, its computing resources are spread all over the globe and are accessed via the LHC grid to run analysis, large-scale production and data transfer tasks. Due to the distributed nature of the collaboration, effective provision of collaborative tools is essential to maximise the physics exploitation of the CMS experiment, especially when the size of the CMS data set is considered. CMS has chosen to provide such tools over the world wide web as a top-level service, enabling all members of the collaboration to interact with the various offline computing components. Traditionally, web interfaces have been added in HEP experiments as an afterthought. In the CMS offline project we have decided to put web interfaces, and the development of a common CMS web framework, on an equal footing with the rest of the offline development. Tools exist within CMS to transfer and catalogue data (PhEDEx and DBS/DLS), run Monte Carlo production (ProdAgent) and submit analysis (CRAB). Effective human interfaces to these systems are required for users with different agendas and practical knowledge of the systems to use the CMS computing system effectively. The CMS web tools project aims to provide a consistent interface to all these tools.

  13. CMS offline web tools

    Energy Technology Data Exchange (ETDEWEB)

    Metson, S; Newbold, D [H.H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Belforte, S; Kavka, C [INFN, Sezione di Trieste (Italy); Bockelman, B [University of Nebraska Lincoln, Lincoln, NE (United States); Dziedziniewicz, K [CERN, Geneva (Switzerland); Egeland, R [University of Minnesota Twin Cities, Minneapolis, MN (United States); Elmer, P [Princeton (United States); Eulisse, G; Tuura, L [Northeastern University, Boston, MA (United States); Evans, D [Fermilab MS234, Batavia, IL (United States); Fanfani, A [Universita degli Studi di Bologna (Italy); Feichtinger, D [PSI, Villigen (Switzerland); Kuznetsov, V [Cornell University, Ithaca, NY (United States); Lingen, F van [California Institute of Technology, Pasedena, CA (United States); Wakefield, S [Blackett Laboratory, Imperial College, London (United Kingdom)

    2008-07-15

    We describe a relatively new effort within CMS to converge on a set of web-based tools, using state-of-the-art industry techniques, to engage with the CMS offline computing system. CMS collaborators require tools to monitor various components of the computing system and to interact with the system itself. The current state of the various CMS web tools is described alongside currently planned developments. The CMS collaboration comprises nearly 3000 people from all over the world. As well as its collaborators, its computing resources are spread all over the globe and are accessed via the LHC grid to run analysis, large-scale production and data transfer tasks. Due to the distributed nature of the collaboration, effective provision of collaborative tools is essential to maximise the physics exploitation of the CMS experiment, especially when the size of the CMS data set is considered. CMS has chosen to provide such tools over the world wide web as a top-level service, enabling all members of the collaboration to interact with the various offline computing components. Traditionally, web interfaces have been added in HEP experiments as an afterthought. In the CMS offline project we have decided to put web interfaces, and the development of a common CMS web framework, on an equal footing with the rest of the offline development. Tools exist within CMS to transfer and catalogue data (PhEDEx and DBS/DLS), run Monte Carlo production (ProdAgent) and submit analysis (CRAB). Effective human interfaces to these systems are required for users with different agendas and practical knowledge of the systems to use the CMS computing system effectively. The CMS web tools project aims to provide a consistent interface to all these tools.

  14. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    Management Board Agendas and minutes of meetings of the Management Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 Collaboration Board Agendas and minutes of meetings of the Collaboration Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 LHCC: Feedback from the CMS Referees, LHCC 97 February 25, 2009. The CMS LHCC referees met with representatives of CMS on 17-2-09, to review progress since the last November minireview. The main topics included shutdown construction, maintenance and repairs; status of the preshower detector; commissioning and physics analysis results from cosmic ray running and CSA08; preparations for physics, off line analysis, computing, and data distribution. TOTEM management and the TOTEM referees then joined us for a joint session to examine the readiness of the TOTEM detector. Detector construction, maintenance, and repairs. The referees congratulate CMS Management and the Detector Groups for the...

  15. Grid computing for LHC and methods for W boson mass measurement at CMS

    International Nuclear Information System (INIS)

    Jung, Christopher

    2007-01-01

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)

  16. Grid computing for LHC and methods for W boson mass measurement at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Christopher

    2007-12-14

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)

  17. Optimization of Italian CMS computing centers via MIUR funded research projects

    International Nuclear Information System (INIS)

    Boccali, T; Mazzoni, E; Donvito, G; Pompili, A; Ricca, G Della; Talamo, I; Argiro, S; Grandi, C; Bonacorsi, D; Lista, L; Fabozzi, F; Barone, L M; Santocchia, A; Riahi, H; Tricomi, A; Sgaravatto, M; Maron, G

    2014-01-01

    In 2012, 14 Italian institutions participating in the LHC experiments (10 of them in CMS) won a grant from the Italian Ministry of Research (MIUR) to optimize analysis activities and, more generally, the Tier-2/Tier-3 infrastructure. A wide range of activities is being carried out: they cover data distribution over the WAN, dynamic provisioning for both scheduled and interactive processing, design and development of tools for distributed data analysis, and tests on porting the CMS software stack to new high-performance, low-power architectures.

  18. CMS 2006 - CMS France days

    International Nuclear Information System (INIS)

    Huss, D.; Dobrzynski, L.; Virdee, J.; Boudoule, G.; Fontaine, J.C.; Faure, J.L.; Paganini, P.; Mathez, H.; Gross, L.; Charlot, C.; Trunov, A.; Patois, Y.; Busson, P.; Maire, M.; Berthon, U.; Todorov, T.; Beaudette, F.; Sirois, Y.; Baffioni, S.; Beauceron, S.; Delmeire, E.; Agram, J.L.; Goerlach, U.; Mangeol, D.; Salerno, R.; Bloch, D.; Lassila-Perini, K.; Blaha, J.; Drobychev, G.; Gras, P.; Hagenauer, M.; Denegri, D.; Lounis, A.; Faccio, F.; Lecoq, J.

    2006-01-01

    These CMS talks give all the teams working on the CMS (Compact Muon Solenoid) project the opportunity to present the status of their work and to exchange ideas. Five sessions were organized: 1) CMS status and perspectives, 2) contributions of the different laboratories, 3) software and computing, 4) physics with CMS (particularly the search for the Higgs boson), and 5) electronics needs. This document gathers the slides of the presentations.

  19. CRAB3: Establishing a new generation of services for distributed analysis at CMS

    International Nuclear Information System (INIS)

    Cinquilli, M; Spiga, D; Konstantinov, P; Mascheroni, M; Grandi, C; Hernàndez, J M; Riahi, H; Vaandering, E

    2012-01-01

    In CMS Computing the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results, as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier-0, Tier-1, production, analysis) share a common core with long-term maintainability, as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves workflow automation and simplifies maintainability. In particular, we highlight the impact of the new design on daily operations.
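
    To make the central-queue-plus-agents architecture concrete, here is a minimal, self-contained sketch of the pattern in Python. It illustrates the general design only, with invented task fields, and does not reproduce any actual CRAB3 code or API: in CRAB3 the queue role is played by central services backed by a database, not an in-memory structure.

        import queue
        import threading

        # Central queue into which user tasks are injected.
        central_queue = queue.Queue()

        def inject_task(user, dataset):
            """Centrally inject a user task; field names are illustrative."""
            central_queue.put({"user": user, "dataset": dataset})

        def agent(agent_name):
            """A worker agent that consumes tasks from the central services."""
            while True:
                task = central_queue.get()
                if task is None:          # sentinel: shut the agent down
                    break
                print(f"{agent_name} processing {task['dataset']} for {task['user']}")
                central_queue.task_done()

        # Start two (possibly geographically distributed) agents.
        agents = [threading.Thread(target=agent, args=(f"agent-{i}",)) for i in range(2)]
        for t in agents:
            t.start()

        inject_task("alice", "/HypotheticalDataset/Run2012A/AOD")
        inject_task("bob", "/HypotheticalDataset/Run2012B/AOD")

        central_queue.join()              # wait until all injected work is consumed
        for _ in agents:
            central_queue.put(None)       # stop the agents
        for t in agents:
            t.join()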

  20. CRAB3: Establishing a new generation of services for distributed analysis at CMS

    Science.gov (United States)

    Cinquilli, M.; Spiga, D.; Grandi, C.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Riahi, H.; Vaandering, E.

    2012-12-01

    In CMS Computing the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results, as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier-0, Tier-1, production, analysis) share a common core with long-term maintainability, as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves workflow automation and simplifies maintainability. In particular, we highlight the impact of the new design on daily operations.

  1. CRAB3: Establishing a new generation of services for distributed analysis at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Cinquilli, M. [CERN; Spiga, D. [CERN; Grandi, C. [INFN, Bologna; Hernandez, J. M. [Madrid, CIEMAT; Konstantinov, P. [CERN; Mascheroni, M. [CERN; Riahi, H. [INFN, Perugia; Vaandering, E. [Fermilab

    2012-01-01

    In CMS Computing the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results, as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier-0, Tier-1, production, analysis) share a common core with long-term maintainability, as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves workflow automation and simplifies maintainability. In particular, we highlight the impact of the new design on daily operations.

  2. 78 FR 48169 - Privacy Act of 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match...

    Science.gov (United States)

    2013-08-07

    ... 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match No. 12 AGENCY: Department of Health and Human Services (HHS), Centers for Medicare & Medicaid Services (CMS). ACTION: Notice... of 1974, as amended, this notice establishes a CMP that CMS plans to conduct with the Department of...

  3. 78 FR 42080 - Privacy Act of 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match...

    Science.gov (United States)

    2013-07-15

    ... 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match No. 18 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION... Act of 1974, as amended, this notice announces the establishment of a CMP that CMS plans to conduct...

  4. Intelligent distributed computing

    CERN Document Server

    Thampi, Sabu

    2015-01-01

    This book contains a selection of refereed and revised papers of the Intelligent Distributed Computing Track originally presented at the third International Symposium on Intelligent Informatics (ISI-2014), September 24-27, 2014, Delhi, India.  The papers selected for this Track cover several Distributed Computing and related topics including Peer-to-Peer Networks, Cloud Computing, Mobile Clouds, Wireless Sensor Networks, and their applications.

  5. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    Management Board Agendas and minutes of meetings of the Management Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 Collaboration Board Agendas and minutes of meetings of the Collaboration Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 LHCC: Feedback from the CMS Referees, LHCC 97 February 25, 2009. The CMS LHCC referees met with representatives of CMS on 17-2-09, to review progress since the last November minireview. The main topics included  shutdown construction, maintenance and repairs;  status of the preshower detector; commissioning and physics analysis results from cosmic ray running and CSA08;   preparations for physics, off line analysis, computing, and data distribution. TOTEM management and the TOTEM referees then joined us for a joint session to examine the readiness of the TOTEM detector. Detector construction, maintenance, and repairs. The referees congratulate C...

  6. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    Jim Virdee

    Management Board Agendas and minutes of meetings of the Management Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 Collaboration Board Agendas and minutes of meetings of the Collaboration Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 LHCC: Feedback from the CMS Referees, LHCC 97 February 25, 2009. The CMS LHCC referees met with representatives of CMS on 17-2-09, to review progress since the last November minireview. The main topics included  shutdown construction, maintenance and repairs;  status of the preshower detector; commissioning and physics analysis results from cosmic ray running and CSA08;   preparations for physics, off line analysis, computing, and data distribution. TOTEM management and the TOTEM referees then joined us for a joint session to examine the readiness of the TOTEM detector. Detector construction, maintenance, and repairs. The referees congratula...

  7. A data Grid prototype for distributed data production in CMS

    International Nuclear Information System (INIS)

    Hafeez, Mehnaz; Samar, Asad; Stockinger, Heinz

    2001-01-01

    The CMS experiment at CERN is setting up a Grid infrastructure required to fulfill the needs imposed by Terabyte-scale productions for the next few years. The goal is to automate the production and at the same time allow the users to interact with the system, if required, to make decisions which would optimize performance. We present the architecture, design and functionality of our first working Objectivity file replication prototype. The middleware of choice is the Globus toolkit, which provides promising functionality. Our results prove the ability of the Globus toolkit to be used as an underlying technology for a world-wide Data Grid. The required data management functionality includes high-speed file transfers, secure access to remote files, selection and synchronization of replicas, and managing the meta-information. The whole system is expected to be flexible enough to incorporate site-specific policies. The data management granularity is the file rather than the object level. The first prototype is currently in use for the High Level Trigger (HLT) production (autumn 2000). Owing to these efforts, CMS is one of the pioneers in using Data Grid functionality in a running production system. The project can be viewed as an evaluator of different strategies, a test of the capabilities of middleware tools and a provider of basic Grid functionalities.

  8. A data grid prototype for distributed data production in CMS

    CERN Document Server

    Hafeez, M; Stockinger, H E

    2001-01-01

    The CMS experiment at CERN is setting up a grid infrastructure required to fulfil the needs imposed by Terabyte scale productions for the next few years. The goal is to automate the production and at the same time allow the users to interact with the system, if required, to make decisions which would optimise performance. We present the architecture, design and functionality of our first working objectivity file replication prototype. The middle-ware of choice is the Globus toolkit that provides promising functionality. Our results prove the ability of the Globus toolkit to be used as an underlying technology for a world-wide Data Grid. The required data management functionality includes high speed file transfers, secure access to remote files, selection and synchronisation of replicas and managing the meta information. The whole system is expected to be flexible enough to incorporate site specific policies. The data management granularity is the file rather than the object level. The first prototype is curre...

  9. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize the automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by HammerCloud, to automatic exclusion from production or analysis activities.

  10. CMS distributed analysis infrastructure and operations: experience with the first LHC data

    International Nuclear Information System (INIS)

    Vaandering, E W

    2011-01-01

    The CMS distributed analysis infrastructure represents a heterogeneous pool of resources distributed across several continents. The resources are harnessed using gLite- and glidein-based workload management systems (WMS). We provide the operational experience of the analysis workflows using CRAB-based servers interfaced with the underlying WMS. The automated interaction of the server with the WMS provides a successful analysis workflow. We present the operational experience as well as the methods used in CMS to analyze the LHC data. The interaction with the CMS Run Registry for run and luminosity-block selection via CRAB is discussed. The variations of the different workflows during the LHC data-taking period and the lessons drawn from this experience are also outlined.

  11. ATLAS Distributed Computing

    CERN Document Server

    Schovancova, J; The ATLAS collaboration

    2011-01-01

    The poster details the different aspects of the ATLAS Distributed Computing experience after the first year of LHC data taking. We describe the performance of the ATLAS distributed computing system and the lessons learned during the 2010 run, pointing out parts of the system which were in good shape, and also spotting areas which required improvement. Improvements ranged from hardware upgrades of the ATLAS Tier-0 computing pools to improve data distribution rates, to tuning of FTS channels between CERN and the Tier-1s, and studying data access patterns for Grid analysis to improve the global processing rate. We show recent software development driven by operational needs, with emphasis on data management and job execution in the ATLAS production system.

  12. Coping with distributed computing

    International Nuclear Information System (INIS)

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent, he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given

  13. Distributed error and alarm processing in the CMS data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, G.; et al.

    2012-01-01

    The error and alarm system for the data acquisition of the Compact Muon Solenoid (CMS) at CERN was successfully used for the physics runs at the Large Hadron Collider (LHC) during the first three years of operation. Error and alarm processing entails the notification, collection, storing and visualization of all exceptional conditions occurring in the highly distributed CMS online system, using a uniform scheme. Alerts and reports are shown on-line by web application facilities that map them to graphical models of the system as defined by the user. A persistency service keeps a history of all exceptions that occurred, allowing subsequent retrieval of user-defined time windows of events for later playback or analysis. This paper describes the architecture and the technologies used and deals with operational aspects during the first years of LHC operation. In particular, we focus on performance, stability, and integration with the CMS sub-detectors.
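
    The notification-collection-persistency pattern described above can be pictured with a small, self-contained sketch. This is only a schematic analogy for a uniform error/alarm scheme, with invented source names and fields, and bears no relation to the actual CMS online implementation.

        import sqlite3
        import time

        # Persistency service: keep a history of all exceptional conditions,
        # so that user-defined time windows can be retrieved later for playback.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE alarms (ts REAL, source TEXT, severity TEXT, message TEXT)")

        def notify(source, severity, message):
            """Collect and store one exceptional condition in a uniform scheme."""
            db.execute("INSERT INTO alarms VALUES (?, ?, ?, ?)",
                       (time.time(), source, severity, message))
            db.commit()

        def history(since_seconds):
            """Retrieve the alarms raised in the last `since_seconds` seconds."""
            cutoff = time.time() - since_seconds
            return db.execute(
                "SELECT ts, source, severity, message FROM alarms WHERE ts >= ? ORDER BY ts",
                (cutoff,)).fetchall()

        notify("builder-unit-07", "warning", "event fragment timeout")   # invented example
        notify("filter-farm-12", "error", "process crashed")             # invented example
        for row in history(60):
            print(row)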

  14. Challenging data and workload management in CMS Computing with network-aware systems

    Science.gov (United States)

    D, Bonacorsi; T, Wildish

    2014-06-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering an operational review phase in order to concretely assess areas of possible improvement and paths to exploit promising new technology trends. In particular, since the preparation activities for the LHC startup, the networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations (as set out in the MONARC model) in terms of performance, stability and reliability. The low-latency transfer of petabytes of CMS data among dozens of WLCG tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable networks. Another example is the exploitation of WAN data access over data federations in CMS. A new and emerging area of work is the exploitation of Intelligent Network Services, including also bandwidth-on-demand concepts. In this paper we review the work done in CMS in this area, and the next steps.
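
    As a toy illustration of what "network-aware" data management can mean in practice, the sketch below picks a source replica for a transfer based on recently measured link quality. The site names, metrics, and scoring rule are entirely hypothetical and are unrelated to the actual PhEDEx implementation.

        # Toy network-aware source selection: prefer replicas reachable over links
        # with high measured bandwidth and low recent failure rate.
        # All numbers and site names below are invented for illustration.

        link_metrics = {
            # (source, destination): (bandwidth in MB/s, recent failure fraction)
            ("T1_SITE_A", "T2_SITE_X"): (800.0, 0.02),
            ("T1_SITE_B", "T2_SITE_X"): (450.0, 0.00),
            ("T2_SITE_C", "T2_SITE_X"): (950.0, 0.30),
        }

        def link_score(bandwidth, failure_fraction):
            """Simple score: effective bandwidth after discounting failed transfers."""
            return bandwidth * (1.0 - failure_fraction)

        def choose_source(replica_sites, destination):
            """Choose the replica whose link to `destination` has the best score."""
            candidates = [
                (site, link_score(*link_metrics[(site, destination)]))
                for site in replica_sites
                if (site, destination) in link_metrics
            ]
            if not candidates:
                raise RuntimeError("no measured link to destination")
            return max(candidates, key=lambda pair: pair[1])[0]

        best = choose_source(["T1_SITE_A", "T1_SITE_B", "T2_SITE_C"], "T2_SITE_X")
        print("transfer from", best)   # T1_SITE_A wins: score 784 vs 450 vs 665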

  15. Challenging data and workload management in CMS Computing with network-aware systems

    International Nuclear Information System (INIS)

    Bonacorsi D; Wildish T

    2014-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering an operational review phase in order to concretely assess areas of possible improvement and paths to exploit promising new technology trends. In particular, since the preparation activities for the LHC startup, the networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations (as set out in the MONARC model) in terms of performance, stability and reliability. The low-latency transfer of petabytes of CMS data among dozens of WLCG tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable networks. Another example is the exploitation of WAN data access over data federations in CMS. A new and emerging area of work is the exploitation of Intelligent Network Services, including also bandwidth-on-demand concepts. In this paper we review the work done in CMS in this area, and the next steps.

  16. DIRAC distributed computing services

    International Nuclear Information System (INIS)

    Tsaregorodtsev, A

    2014-01-01

    The DIRAC Project provides a general-purpose framework for building distributed computing systems. It is now used in several HEP and astrophysics experiments as well as by user communities in other scientific domains. There is considerable interest from smaller user communities in having a simple tool like DIRAC for accessing grid and other types of distributed computing resources. However, small experiments cannot afford to install and maintain dedicated services. Therefore, several grid infrastructure projects are providing DIRAC services for their respective user communities. These services are used for user tutorials as well as to help port applications to the grid for practical day-to-day work. The services typically give access to several grid infrastructures as well as to standalone computing clusters accessible to the target user communities. In this paper we present the experience of running the DIRAC services provided by the France-Grilles NGI and other national grid infrastructure projects.

  17. A comparison between physicians and computer algorithms for form CMS-2728 data reporting.

    Science.gov (United States)

    Malas, Mohammed Said; Wish, Jay; Moorthi, Ranjani; Grannis, Shaun; Dexter, Paul; Duke, Jon; Moe, Sharon

    2017-01-01

    The CMS-2728 form (Medical Evidence Report) assesses 23 comorbidities chosen to reflect poor outcomes and increased mortality risk. Previous studies have questioned the validity of physician reporting on form CMS-2728. We hypothesize that reporting of comorbidities by computer algorithms identifies more comorbidities than physician completion and is therefore more reflective of the underlying disease burden. We collected data from CMS-2728 forms for all 296 patients who had an incident ESRD diagnosis and received chronic dialysis from 2005 through 2014 at Indiana University outpatient dialysis centers. We analyzed patients' data from electronic medical record systems that collated information from multiple health care sources. Previously utilized algorithms or natural language processing were used to extract data on 10 comorbidities for a period of up to 10 years prior to ESRD incidence. These algorithms incorporate billing codes, prescriptions, and other relevant elements. We compared the presence or unchecked status of these comorbidities on the forms to their presence or absence according to the algorithms. The computer algorithms reported more comorbidities than the physician-completed forms. This remained true when the data span was reduced to one year and only a single health center source was used. The algorithms' determinations were well accepted by a physician panel. Importantly, use of the algorithms significantly increased the expected deaths and lowered the standardized mortality ratios. Using computer algorithms showed superior identification of comorbidities for form CMS-2728 and altered standardized mortality ratios. Adapting similar algorithms in available EMR systems may offer a more thorough evaluation of comorbidities and improve quality reporting. © 2016 International Society for Hemodialysis.
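
    A hedged sketch of the kind of rule-based extraction described above (billing codes mapped to comorbidity flags, then compared with the form) is shown below; the code groups and patient records are invented for illustration and are not the study's actual algorithms.

        # Toy comorbidity extraction from billing codes.
        # The ICD-9-style code prefixes and patient data below are invented examples.
        COMORBIDITY_CODE_PREFIXES = {
            "diabetes":                 ("250",),
            "congestive_heart_failure": ("428",),
            "copd":                     ("491", "492", "496"),
        }

        def comorbidities_from_codes(billing_codes):
            """Return the set of comorbidities implied by a patient's billing codes."""
            found = set()
            for name, prefixes in COMORBIDITY_CODE_PREFIXES.items():
                if any(code.startswith(prefixes) for code in billing_codes):
                    found.add(name)
            return found

        # Compare algorithmic extraction with what was checked on the form.
        patient_codes = ["250.00", "428.0"]            # invented billing history
        form_checked  = {"diabetes"}                   # what the physician checked
        algorithmic   = comorbidities_from_codes(patient_codes)

        print("algorithm found:", sorted(algorithmic))
        print("missed on the form:", sorted(algorithmic - form_checked))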

  18. CMS data and workflow management system

    CERN Document Server

    Fanfani, A; Bacchi, W; Codispoti, G; De Filippis, N; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Silvestris, L; Calzolari, F; Sarkar, S; Spiga, D; Cinquili, M; Lacaprara, S; Biasotto, M; Farina, F; Merlo, M; Belforte, S; Kavka, C; Sala, L; Harvey, J; Hufnagel, D; Fanzago, F; Corvo, M; Magini, N; Rehn, J; Toteva, Z; Feichtinger, D; Tuura, L; Eulisse, G; Bockelman, B; Lundstedt, C; Egeland, R; Evans, D; Mason, D; Gutsche, O; Sexton-Kennedy, L; Dagenhart, D W; Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V; Fisk, I; McBride, P; Bauerdick, L; Bakken, J; Rossman, P; Wicklund, E; Wu, Y; Jones, C; Kuznetsov, V; Riley, D; Dolgert, A; van Lingen, F; Narsky, I; Paus, C; Klute, M; Gomez-Ceballos, G; Piedra-Gomez, J; Miller, M; Mohapatra, A; Lazaridis, C; Bradley, D; Elmer, P; Wildish, T; Wuerthwein, F; Letts, J; Bourilkov, D; Kim, B; Smith, P; Hernandez, J M; Caballero, J; Delgado, A; Flix, J; Cabrillo-Bartolome, I; Kasemann, M; Flossdorf, A; Stadie, H; Kreuzer, P; Khomitch, A; Hof, C; Zeidler, C; Kalini, S; Trunov, A; Saout, C; Felzmann, U; Metson, S; Newbold, D; Geddes, N; Brew, C; Jackson, J; Wakefield, S; De Weirdt, S; Adler, V; Maes, J; Van Mulders, P; Villella, I; Hammad, G; Pukhaeva, N; Kurca, T; Semneniouk, I; Guan, W; Lajas, J A; Teodoro, D; Gregores, E; Baquero, M; Shehzad, A; Kadastik, M; Kodolova, O; Chao, Y; Ming Kuo, C; Filippidis, C; Walzel, G; Han, D; Kalinowski, A; Giro de Almeida, N M; Panyam, N

    2008-01-01

    CMS expects to manage many tens of petabytes of data distributed over several computing centres around the world. The CMS distributed computing and analysis model is designed to serve, process and archive the large number of events that will be generated when the CMS detector starts taking data. The underlying concepts and the overall architecture of the CMS data and workflow management system will be presented. In addition, the experience in using the system for MC production, initial detector commissioning activities and data analysis will be summarized.

  19. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    Science.gov (United States)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however originate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS, along the lines followed by other LHC experiments, is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the "Cloud Bursting" of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage. The amount of allocated resources can thus be elastically adjusted to cope with the needs of the CMS experiment and of local users. Moreover, direct access to and integration of OpenStack resources with the CMS workload management system is explored. In this paper we present this approach, report on the performance of the on-demand allocated resources, and discuss the lessons learned and the next steps.
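
    As an illustration of the elastic-extension idea (instantiate worker nodes on an external OpenStack cloud, then hand them to the local batch system), here is a minimal sketch using the openstacksdk Python library. The cloud, image, flavor, and network names are placeholders, and the actual setup described in the paper used LSF-specific tooling that is not shown here.

        import openstack  # openstacksdk

        # Credentials are taken from the environment or clouds.yaml ("mycloud" is a placeholder).
        conn = openstack.connect(cloud="mycloud")

        def add_worker(index):
            """Boot one worker-node VM; all names below are placeholders."""
            return conn.compute.create_server(
                name=f"cms-wn-{index:03d}",
                image_id=conn.compute.find_image("cms-worker-image").id,
                flavor_id=conn.compute.find_flavor("m1.large").id,
                networks=[{"uuid": conn.network.find_network("private").id}],
            )

        # Elastically grow the farm by three nodes; a real setup would then register
        # each node with the local batch system (LSF in the paper) once it is active.
        servers = [add_worker(i) for i in range(3)]
        for server in servers:
            conn.compute.wait_for_server(server)
            print("worker ready:", server.name)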

  20. Challenging data and workload management in CMS Computing with network-aware systems

    CERN Document Server

    Wildish, Anthony

    2014-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering an operational review phase in order to concretely assess areas of possible improvement and paths to exploit promising new technology trends. In particular, since the preparation activities for the LHC startup, the networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations (as set out in the MONARC model) in terms of performance, stability and reliability. The low-latency transfer of petabytes of CMS data among dozens of WLCG tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable networks. Another example is the exploitation of WAN data access over data federations in CMS. A new and emerging area of work is the exploitation of "Intelligent Network Services", including also bandwidth on demand concepts. In this paper, we will ...

  1. Challenging Data Management in CMS Computing with Network-aware Systems

    CERN Document Server

    Bonacorsi, Daniele

    2013-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering an operational review phase in order to concretely assess areas of possible improvement and paths to exploit promising new technology trends. In particular, since the preparation activities for the LHC startup, the networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations (as set out in the MONARC model) in terms of performance, stability and reliability. The low-latency transfer of petabytes of CMS data among dozens of WLCG tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable networks. Another example is the exploitation of WAN data access over data federations in CMS. A new and emerging area of work is the exploitation of "Intelligent Network Services", including also bandwidt...

  2. Opportunistic resource usage in CMS

    International Nuclear Information System (INIS)

    Kreuzer, Peter; Hufnagel, Dirk; Dykstra, D; Gutsche, O; Tadel, M; Sfiligoi, I; Letts, J; Wuerthwein, F; McCrea, A; Bockelman, B; Fajardo, E; Linares, L; Wagner, R; Konstantinov, P; Blumenfeld, B; Bradley, D

    2014-01-01

    CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in the WLCG. These sites pledge resources to CMS and prepare them specifically for CMS to run the experiment's applications. But there are additional resources, available opportunistically both on the Grid and in local university and research clusters, which can be used for CMS applications. We present the CMS strategy to use opportunistic resources and prepare them dynamically to run CMS applications. CMS is able to run its applications on resources that can be reached through the Grid or through EC2-compliant cloud interfaces; even resources that can only be used through ssh login nodes can be harnessed. All of these usage modes are integrated transparently into the glideinWMS submission infrastructure, which is the basis of the CMS opportunistic resource usage strategy. Technologies like Parrot, to mount the software distribution via CVMFS, and XRootD, for access to data and simulation samples over the WAN, are used and are described. We summarize the experience with opportunistic resource usage and give an outlook for the restart of LHC data taking in 2015.
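
    The pilot-based approach sketched above boils down to: land on an arbitrary node, check that the CMS environment can be provided (for example CVMFS mounted natively or via Parrot), and only then run a payload. The following is a schematic, hedged illustration of such a probe in Python; the paths and the fallback invocation are illustrative and do not reproduce the actual glideinWMS validation scripts.

        import os
        import shutil
        import subprocess

        CVMFS_REPO = "/cvmfs/cms.cern.ch"   # standard CVMFS mount point for CMS software

        def software_access_mode():
            """Decide how this (possibly opportunistic) node can see the CMS software."""
            if os.path.isdir(CVMFS_REPO):
                return "native-cvmfs"                    # CVMFS is mounted by the site
            if shutil.which("parrot_run"):
                return "parrot-cvmfs"                    # fall back to user-space mounting
            return None                                  # node cannot run CMS applications

        def run_payload(command):
            """Run the payload directly or wrapped in Parrot, depending on the probe."""
            mode = software_access_mode()
            if mode is None:
                raise RuntimeError("no way to access CMS software on this node")
            if mode == "parrot-cvmfs":
                # Hypothetical invocation: wrap the payload so /cvmfs is provided by Parrot.
                command = ["parrot_run"] + command
            return subprocess.run(command, check=True)

        if __name__ == "__main__":
            print("access mode:", software_access_mode())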

  3. The Need for an R&D and Upgrade Program for CMS Software and Computing

    CERN Document Server

    Elmer, Peter; Stenson, Kevin; Wittich, Peter

    2013-01-01

    Over the next ten years, the physics reach of the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) will be greatly extended through increases in the instantaneous luminosity of the accelerator and large increases in the amount of collected data. Due to changes in the way Moore's Law computing performance gains have been realized in the past decade, an aggressive program of R&D is needed to ensure that the computing capability of CMS will be up to the task of collecting and analyzing this data.

  4. Angular distributions of the quenched energy flow from dijets with different radius parameters in CMS

    Energy Technology Data Exchange (ETDEWEB)

    McGinn, Christopher F.

    2016-12-15

    The flow of the quenched energy in imbalanced dijet events has previously been studied with the CMS detector via the transverse vector sum of charged particles, namely the missing p_T measurement. The results have led to new theoretical insights in order to explain the wide-angle radiation. The missing p_T technique has been improved so that it allows the study of the angular distribution of the energy flow with respect to the dijet axis. The measurements are performed using different distance parameters R with the anti-k_T clustering algorithm, which provides information about how the angular distribution of the quenched energy depends on the jet width.
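
    For orientation, the charged-particle projection variable underlying such missing-p_T studies is often written as below; this is the commonly quoted form from the published CMS dijet-imbalance analyses, reproduced here from memory rather than from this particular contribution.

        % Projection of charged-particle transverse momenta onto the leading-jet axis;
        % negative values indicate net momentum balancing the leading jet.
        \not\!p_{\mathrm{T}}^{\parallel} \;=\; \sum_{i} -p_{\mathrm{T}}^{\,i}\,
        \cos\!\left(\phi_{i}-\phi_{\text{leading jet}}\right)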

  5. CMS analysis operations

    International Nuclear Information System (INIS)

    Andreeva, J; Maier, G; Spiga, D; Calloni, M; Colling, D; Fanzago, F; D'Hondt, J; Maes, J; Van Mulders, P; Villella, I; Klem, J; Letts, J; Padhi, S; Sarkar, S

    2010-01-01

    During normal data taking CMS expects to support potentially as many as 2000 analysis users. Since the beginning of 2008 there have been more than 800 individuals who submitted a remote analysis job to the CMS computing infrastructure. The bulk of these users will be supported at the over 40 CMS Tier-2 centres. Supporting a globally distributed community of users on a globally distributed set of computing clusters is a task that requires reconsidering the normal methods of user support for Analysis Operations. In 2008 CMS formed an Analysis Support Task Force in preparation for large-scale physics analysis activities. The charge of the task force was to evaluate the available support tools, the user support techniques, and the direct feedback of users with the goal of improving the success rate and user experience when utilizing the distributed computing environment. The task force determined the tools needed to assess and reduce the number of non-zero exit code applications submitted through the grid interfaces and worked with the CMS experiment dashboard developers to obtain the necessary information to quickly and proactively identify issues with user jobs and data sets hosted at various sites. Results of the analysis group surveys were compiled. Reference platforms for testing and debugging problems were established in various geographic regions. The task force also assessed the resources needed to make the transition to a permanent Analysis Operations task. In this presentation the results of the task force will be discussed as well as the CMS Analysis Operations plans for the start of data taking.

  6. Debugging data transfers in CMS

    International Nuclear Information System (INIS)

    Bagliesi, G; Belforte, S; Bloom, K; Bockelman, B; Bonacorsi, D; Fisk, I; Flix, J; Hernandez, J; D'Hondt, J; Maes, J; Kadastik, M; Klem, J; Kodolova, O; Kuo, C-M; Letts, J; Magini, N; Metson, S; Piedra, J; Pukhaeva, N; Tuura, L

    2010-01-01

    The CMS experiment at CERN is preparing for LHC data taking through several computing preparation activities. In early 2007 a traffic load generator infrastructure for distributed data transfer tests was designed and deployed, to equip the WLCG tiers which support the CMS virtual organization with a means for debugging, load-testing and commissioning data transfer routes among CMS computing centres. The LoadTest is based upon PhEDEx as a reliable, scalable dataset replication system. The Debugging Data Transfers (DDT) task force was created to coordinate the debugging of the data transfer links. The task force aimed to commission the most crucial transfer routes among CMS tiers by designing and enforcing a clear procedure to debug problematic links. This procedure aimed to move a link from a debugging phase, in a separate and independent environment, to the production environment once a set of agreed conditions was achieved for that link. The goal was to deliver, one by one, working transfer routes to the CMS data operations team. The preparation, activities and experience of the DDT task force within the CMS experiment are discussed. Common technical problems and challenges encountered during the lifetime of the task force in debugging data transfer links in CMS are explained and summarized.
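
    The "agreed conditions" for promoting a link from debugging to production can be pictured as a small rule over measured transfer statistics. The thresholds and field names below are invented for illustration and are not the DDT task force's actual criteria.

        # Toy promotion rule for a data-transfer link under commissioning.
        # Thresholds and statistics below are invented for illustration.
        REQUIRED_DAYS_STABLE = 5
        REQUIRED_RATE_MBPS   = 20.0
        MAX_FAILURE_FRACTION = 0.10

        def ready_for_production(daily_stats):
            """daily_stats: list of dicts with 'rate_mbps' and 'failure_fraction'."""
            recent = daily_stats[-REQUIRED_DAYS_STABLE:]
            if len(recent) < REQUIRED_DAYS_STABLE:
                return False
            return all(day["rate_mbps"] >= REQUIRED_RATE_MBPS and
                       day["failure_fraction"] <= MAX_FAILURE_FRACTION
                       for day in recent)

        link_history = [
            {"rate_mbps": 25.0, "failure_fraction": 0.05},
            {"rate_mbps": 30.0, "failure_fraction": 0.02},
            {"rate_mbps": 28.0, "failure_fraction": 0.08},
            {"rate_mbps": 26.0, "failure_fraction": 0.01},
            {"rate_mbps": 32.0, "failure_fraction": 0.00},
        ]
        print("promote link:", ready_for_production(link_history))   # True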

  7. CMS AWARDS

    CERN Multimedia

    Steven Lowette

    Working under great time pressure towards a common goal in gradual steps can sometimes cause us to forget to take a step back, and celebrate what marvels have been achieved. A general need was felt within CMS to expand the recognition for our young scientists that made outstanding, well recognized and creative contributions to CMS, which served to significantly advance the performance of CMS as a complete and powerful experiment. Therefore, the Collaboration Board endorsed in March 2009 a proposal from the CB Chair and Advisory Group to award each year the newly created "CMS Achievement Award" to fourteen graduate students and postdocs that made exceptional contributions to the Tracker, ECAL, HCAL and Muon subdetectors as well as the TriDAS project, the Commissioning of CMS and the Offline Software and Computing projects. It was also agreed that there was a need to go back in time, and retroactively attribute awards for the years 2007 and 2008 when CMS went from a bare cavern to a detect...

  8. The commissioning of CMS sites: Improving the site reliability

    International Nuclear Information System (INIS)

    Belforte, S; Fisk, I; Flix, J; Hernandez, J M; Klem, J; Letts, J; Magini, N; Saiz, P; Sciaba, A

    2010-01-01

    The computing system of the CMS experiment works using distributed resources from more than 60 computing centres worldwide. These centres, located in Europe, America and Asia, are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires stable and reliable behaviour of the underlying infrastructure. CMS has established a procedure to extensively test all relevant aspects of a Grid site, such as the ability to efficiently use its network to transfer data, the functionality of all the site services relevant for CMS, and the capability to sustain the various CMS computing workflows at the required scale. This contribution describes in detail the procedure used to rate CMS sites depending on their performance, including the complete automation of the program, a description of the monitoring tools, and its impact on improving the overall reliability of the Grid from the point of view of the CMS computing system.

  9. Distributed computing for global health

    CERN Multimedia

    CERN. Geneva; Schwede, Torsten; Moore, Celia; Smith, Thomas E; Williams, Brian; Grey, François

    2005-01-01

    Distributed computing harnesses the power of thousands of computers within organisations or over the Internet. In order to tackle global health problems, several groups of researchers have begun to use this approach to exceed by far the computing power of a single lab. This event illustrates how companies, research institutes and the general public are contributing their computing power to these efforts, and what impact this may have on a range of world health issues. Grids for neglected diseases Vincent Breton, CNRS/EGEE This talk introduces the topic of distributed computing, explaining the similarities and differences between Grid computing, volunteer computing and supercomputing, and outlines the potential of Grid computing for tackling neglected diseases where there is little economic incentive for private R&D efforts. Recent results on malaria drug design using the Grid infrastructure of the EU-funded EGEE project, which is coordinated by CERN and involves 70 partners in Europe, the US and Russi...

  10. Measurement of the dijet angular distributions and search for quark compositeness with the CMS experiment

    International Nuclear Information System (INIS)

    Hinzmann, Andreas Dominik

    2011-01-01

    The Large Hadron Collider (LHC) at the Conseil Europeen pour la Recherche Nucleaire (CERN) allows the study of the interactions of quarks and gluons in a yet unexplored energy regime. In 2010, the LHC delivered an integrated luminosity of more than 36 pb^-1 of proton-proton collisions at a center-of-mass energy of √(s) = 7 TeV. In these proton-proton collisions, the interactions of the constituent quarks and gluons produced a considerable number of jets of particles with transverse momenta above 1 TeV. Well suited for the study of these jet processes is the Compact Muon Solenoid (CMS) experiment, situated at LHC point 5, as it can measure jets with the necessary energy and angular resolutions over a large range of transverse momentum (from about 30 GeV up to the TeV scale). The dijet angular distributions are measured in terms of the variable χ_dijet = e^|y_1 - y_2|, where y_1 and y_2 are the rapidities of the two jets, y ≡ (1/2) ln[(E+p_z)/(E-p_z)], and p_z is the projection of the jet momentum along the beam axis. The choice of the variable χ_dijet is motivated by the fact that the normalized differential cross section (1/σ)(dσ/dχ_dijet) (the dijet angular distribution) is flat in this variable for Rutherford scattering, characteristic of spin-1 particle exchange. In contrast to QCD, which predicts a dijet angular distribution similar to Rutherford scattering, new physics, such as quark compositeness, that might have a more isotropic dijet angular distribution would produce an excess at low values of χ_dijet. Since the shapes of the dijet angular distributions for the qg → qg, qq′ → qq′ and gg → gg scattering processes are similar, the QCD prediction does not strongly depend on the parton distribution functions (PDFs), which describe the momentum distribution of the partons inside the protons. Due to the normalization, the dijet angular distribution has a reduced sensitivity to several predominant experimental uncertainties (e.g. the jet energy scale and luminosity uncertainties). The dijet angular distribution is therefore well suited to test the predictions of QCD and to search for signals of new physics, in particular for signs of quark compositeness. In the following, a measurement of the dijet angular distributions and a search for quark compositeness with the CMS experiment is presented.
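
    A quick numerical illustration of the χ_dijet variable defined above, computed from two jet energies and longitudinal momenta; the jet kinematics are invented toy numbers.

        import math

        def rapidity(E, pz):
            """y = (1/2) ln[(E + pz) / (E - pz)]"""
            return 0.5 * math.log((E + pz) / (E - pz))

        def chi_dijet(jet1, jet2):
            """chi_dijet = exp(|y1 - y2|), with jets given as (E, pz) in GeV."""
            y1 = rapidity(*jet1)
            y2 = rapidity(*jet2)
            return math.exp(abs(y1 - y2))

        # Toy back-to-back dijet system (E, pz) in GeV: one central jet, one forward jet.
        jet1 = (600.0, 150.0)
        jet2 = (900.0, -700.0)
        print(f"chi_dijet = {chi_dijet(jet1, jet2):.2f}")   # about 3.7 for these toy values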

  11. CMS 2006 - CMS France days; CMS 2006 les journees CMS FRANCE

    Energy Technology Data Exchange (ETDEWEB)

    Huss, D.; Dobrzynski, L.; Virdee, J.; Boudoule, G.; Fontaine, J.C.; Faure, J.L.; Paganini, P.; Mathez, H.; Gross, L.; Charlot, C.; Trunov, A.; Patois, Y.; Busson, P.; Maire, M.; Berthon, U.; Todorov, T.; Beaudette, F.; Sirois, Y.; Baffioni, S.; Beauceron, S.; Delmeire, E.; Agram, J.L.; Goerlach, U.; Mangeol, D.; Salerno, R.; Bloch, D.; Lassila-Perini, K.; Blaha, J.; Drobychev, G.; Gras, P.; Hagenauer, M.; Denegri, D.; Lounis, A.; Faccio, F.; Lecoq, J

    2006-07-01

    These CMS talks give the opportunity for all the teams working on the CMS (Compact Muon Solenoid) project to present the status of their work and to exchange ideas. 5 sessions have been organized: 1) CMS status and perspectives, 2) contributions of the different laboratories, 3) software and computing, 4) physics with CMS (particularly the search for the Higgs boson), and 5) electronics needs. This document gathers the slides of the presentations.

  12. Measurement of the dijet angular distributions and search for quark compositeness with the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Hinzmann, Andreas Dominik

    2011-10-07

    ... χ_dijet = e^|y₁−y₂|, where y₁ and y₂ are the rapidities of the two jets, y ≡ (1/2) ln[(E+p_z)/(E−p_z)], and p_z is the projection of the jet momentum along the beam axis. The choice of the variable χ_dijet is motivated by the fact that the normalized differential cross section (1/σ)(dσ/dχ_dijet) (the dijet angular distribution) is flat in this variable for Rutherford scattering, characteristic of spin-1 particle exchange. In contrast to QCD, which predicts a dijet angular distribution similar to Rutherford scattering, new physics such as quark compositeness, which might have a more isotropic dijet angular distribution, would produce an excess at low values of χ_dijet. Since the shapes of the dijet angular distributions for the qg → qg, qq′ → qq′ and gg → gg scattering processes are similar, the QCD prediction does not strongly depend on the parton distribution functions (PDFs), which describe the momentum distribution of the partons inside the protons. Due to the normalization, the dijet angular distribution has a reduced sensitivity to several predominant experimental uncertainties (e.g. the jet energy scale and luminosity uncertainties). The dijet angular distribution is therefore well suited to test the predictions of QCD and to search for signals of new physics, in particular for signs of quark compositeness. In the following, a measurement of the dijet angular distributions and a search for quark compositeness with the CMS experiment is presented. (orig.)

  14. CMS Experiment Data Processing at RDMS CMS Tier 2 Centers

    CERN Document Server

    Gavrilov, V; Korenkov, V; Tikhonenko, E; Shmatov, S; Zhiltsov, V; Ilyin, V; Kodolova, O; Levchuk, L

    2012-01-01

    The Russia and Dubna Member States (RDMS) CMS collaboration was founded in 1994 [1]. RDMS CMS takes an active part in the Compact Muon Solenoid (CMS) Collaboration [2] at the Large Hadron Collider (LHC) [3] at CERN [4]. The RDMS CMS Collaboration brings together more than twenty institutes from Russia and the Joint Institute for Nuclear Research (JINR) member states. RDMS scientists, engineers and technicians participated actively in the design, construction and commissioning of all CMS sub-detectors in the forward regions. The RDMS CMS physics program has been developed taking into account the essential role of these sub-detectors for the corresponding physics channels. RDMS scientists made large contributions to the preparation of QCD, electroweak, exotics, heavy-ion and other physics studies at CMS. An overview of RDMS CMS physics tasks and RDMS CMS computing activities is presented in [5-11]. RDMS CMS computing support should satisfy the LHC data processing and analysis requirements at the running phase of the CMS experime...

  15. CMS overview

    CERN Document Server

    AUTHOR|(CDS)2071615

    2016-01-01

    The most recent CMS data related to high-density QCD are presented for pp and PbPb collisions at 2.76 TeV and pPb collisions at 5.02 TeV. PbPb collisions are essential to understand the collective behavior and the final-state effects relevant for the detailed characterisation of hot, dense partonic matter, whereas pPb collisions provide critical information on initial-state effects, including the modification of the parton distribution function in cold nuclei. This paper highlights some of the recent heavy-ion related results from CMS.

  16. Exercising CMS dataflows and workflows in computing challenges at the Spanish Tier-1 and Tier-2 sites

    Energy Technology Data Exchange (ETDEWEB)

    Caballero, J; Colino, N; Peris, A D; G-Abia, P; Hernandez, J M; R-Calonge, F J [CIEMAT, Madrid (Spain); Cabrillo, I; Caballero, I G; Marco, R; Matorras, F [IFCA, Santander (Spain); Flix, J; Merino, G [PIC, Barcelona (Spain)], E-mail: jose.hernandez@ciemat.es

    2008-07-15

    An overview of the data transfer, processing and analysis operations conducted at the Spanish Tier-1 (PIC, Barcelona) and Tier-2 (CIEMAT-Madrid and IFCA-Santander federation) centres during the past CMS CSA06 Computing, Software and Analysis challenge and in preparation for CSA07 is presented.

  17. Exercising CMS dataflows and workflows in computing challenges at the Spanish Tier-1 and Tier-2 sites

    International Nuclear Information System (INIS)

    Caballero, J; Colino, N; Peris, A D; G-Abia, P; Hernandez, J M; R-Calonge, F J; Cabrillo, I; Caballero, I G; Marco, R; Matorras, F; Flix, J; Merino, G

    2008-01-01

    An overview of the data transfer, processing and analysis operations conducted at the Spanish Tier-1 (PIC, Barcelona) and Tier-2 (CIEMAT-Madrid and IFCA-Santander federation) centres during the past CMS CSA06 Computing, Software and Analysis challenge and in preparation for CSA07 is presented.

  18. Distributed GPU Computing in GIScience

    Science.gov (United States)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Geoscientists strive to discover potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges posed by the increasing volume of datasets from different domains, such as social media, earth observation and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, with GPU-based technology maturing in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU, as a compelling alternative, offers outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphics rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as specific graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. Reference: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. Visualization and Computer Graphics, IEEE ...
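
    As a minimal sketch of mixing GPU- and CPU-based computing on a single node (illustrative only: it assumes the optional CuPy package for the GPU path, and the raster index computed is an arbitrary example, not part of the framework described above):

        # Run an element-wise raster computation on the GPU when CuPy is
        # available, falling back to NumPy on the CPU otherwise.
        import numpy as np

        try:
            import cupy as xp          # GPU arrays (assumption: CuPy installed)
            on_gpu = True
        except ImportError:
            xp = np                    # CPU fallback
            on_gpu = False

        def normalized_difference(band_a, band_b):
            """Element-wise (a - b) / (a + b), an NDVI-style raster index."""
            a = xp.asarray(band_a, dtype=xp.float64)
            b = xp.asarray(band_b, dtype=xp.float64)
            out = (a - b) / (a + b)
            # Copy the result back to host memory if it lives on the GPU.
            return out.get() if on_gpu else out

        bands = np.random.rand(2, 1024, 1024)
        index = normalized_difference(bands[0], bands[1])
        print("computed on", "GPU" if on_gpu else "CPU", "- mean:", float(index.mean()))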

  19. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  20. Distributed computing for macromolecular crystallography.

    Science.gov (United States)

    Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Winn, Martyn; Ballard, Charles

    2018-02-01

    Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which are increasingly difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is significant growth in the volume of data processed in the field, which raises the issue of centralized facilities for keeping both the collected data and the structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community.

  1. NSC KIPT Linux cluster for computing within the CMS physics program

    International Nuclear Information System (INIS)

    Levchuk, L.G.; Sorokin, P.V.; Soroka, D.V.

    2002-01-01

    The architecture of the NSC KIPT specialized Linux cluster constructed for carrying out work on CMS physics simulations and data processing is described. The configuration of the portable batch system (PBS) on the cluster is outlined. Capabilities of the cluster in its current configuration to perform CMS physics simulations are pointed out

  2. Distributed-memory matrix computations

    DEFF Research Database (Denmark)

    Balle, Susanne Mølleskov

    1995-01-01

    The main goal of this project is to investigate, develop, and implement algorithms for numerical linear algebra on parallel computers in order to acquire expertise in methods for parallel computations. An important motivation for analyzing and investigating the potential for parallelism in these algorithms is that many scientific applications rely heavily on the performance of the involved dense linear algebra building blocks. Even though we consider the distributed-memory as well as the shared-memory programming paradigm, the major part of the thesis is dedicated to distributed-memory architectures. We emphasize distributed-memory massively parallel computers - such as the Connection Machines model CM-200 and model CM-5/CM-5E - available to us at UNI-C and at Thinking Machines Corporation. The CM-200 was, at the time this project started, one of the few existing massively parallel computers...
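
    As a minimal illustration of the distributed-memory style discussed here (a sketch with mpi4py, not the thesis code; the problem size and the row-block layout are arbitrary, and the matrix dimension is assumed divisible by the number of processes):

        # Row-block distributed matrix-vector product.
        # Run with, e.g.:  mpiexec -n 4 python matvec.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 8                                    # assumed divisible by `size`
        if rank == 0:
            A = np.arange(n * n, dtype=float).reshape(n, n)
            x = np.ones(n)
            blocks = np.split(A, size, axis=0)   # one row block per process
        else:
            blocks, x = None, None

        A_local = comm.scatter(blocks, root=0)   # each rank receives its rows
        x = comm.bcast(x, root=0)                # every rank needs the full vector
        y_local = A_local @ x                    # local partial result
        y_parts = comm.gather(y_local, root=0)   # collect the pieces on rank 0

        if rank == 0:
            print(np.concatenate(y_parts))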

  3. Distributed computing at the SSCL

    International Nuclear Information System (INIS)

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high-performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desktop is well known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent; he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  4. Distributed computing at the SSCL

    International Nuclear Information System (INIS)

    Cormell, L.R.; White, R.C.

    1994-01-01

    The rapid increase in the availability of high-performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desktop is well known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent; he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of the central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory (SSCL). In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  5. AsyncStageOut: Distributed user data management for CMS Analysis

    Science.gov (United States)

    Riahi, H.; Wildish, T.; Ciangottini, D.; Hernández, J. M.; Andreeva, J.; Balcas, J.; Karavakis, E.; Mascheroni, M.; Tanasijczuk, A. J.; Vaandering, E. W.

    2015-12-01

    AsyncStageOut (ASO) is a new component of the distributed data analysis system of CMS, CRAB, designed for managing users' data. It addresses a major weakness of the previous model, namely that mass storage of output data was part of the job execution resulting in inefficient use of job slots and an unacceptable failure rate at the end of the jobs. ASO foresees the management of up to 400k files per day of various sizes, spread worldwide across more than 60 sites. It must handle up to 1000 individual users per month, and work with minimal delay. This creates challenging requirements for system scalability, performance and monitoring. ASO uses FTS to schedule and execute the transfers between the storage elements of the source and destination sites. It has evolved from a limited prototype to a highly adaptable service, which manages and monitors the user file placement and bookkeeping. To ensure system scalability and data monitoring, it employs new technologies such as a NoSQL database and re-uses existing components of PhEDEx and the FTS Dashboard. We present the asynchronous stage-out strategy and the architecture of the solution we implemented to deal with those issues and challenges. The deployment model for the high availability and scalability of the service is discussed. The performance of the system during the commissioning and the first phase of production are also shown, along with results from simulations designed to explore the limits of scalability.
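
    As a rough illustration of the asynchronous stage-out idea (a sketch only: the site names, file names and the bundling rule are invented and do not represent the actual ASO or FTS interfaces), pending user files can be grouped by transfer link before being handed to the transfer system:

        from collections import defaultdict

        pending = [  # (user, source_site, destination_site, logical_file_name)
            ("alice", "T2_IT_Pisa", "T2_US_MIT",  "/store/user/alice/out_1.root"),
            ("alice", "T2_IT_Pisa", "T2_US_MIT",  "/store/user/alice/out_2.root"),
            ("bob",   "T2_DE_DESY", "T2_FR_IPHC", "/store/user/bob/out_1.root"),
        ]

        def bundle_transfers(files):
            """Group pending files by (source, destination) link so that each
            link can be submitted to the transfer system as one bulk job."""
            jobs = defaultdict(list)
            for user, src, dst, lfn in files:
                jobs[(src, dst)].append((user, lfn))
            return jobs

        for (src, dst), entries in bundle_transfers(pending).items():
            print(f"transfer job {src} -> {dst}: {len(entries)} file(s)")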

  6. AsyncStageOut: Distributed User Data Management for CMS Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Riahi, H. [CERN; Wildish, T. [Princeton U.; Ciangottini, D. [Perugia U.; Hernández, J. M. [Madrid, CIEMAT; Andreeva, J. [CERN; Balcas, J. [Vilnius U.; Karavakis, E. [CERN; Mascheroni, M. [INFN, Milan Bicocca; Tanasijczuk, A. J. [UC, San Diego; Vaandering, E. W. [Fermilab

    2015-12-23

    AsyncStageOut (ASO) is a new component of the distributed data analysis system of CMS, CRAB, designed for managing users' data. It addresses a major weakness of the previous model, namely that mass storage of output data was part of the job execution resulting in inefficient use of job slots and an unacceptable failure rate at the end of the jobs. ASO foresees the management of up to 400k files per day of various sizes, spread worldwide across more than 60 sites. It must handle up to 1000 individual users per month, and work with minimal delay. This creates challenging requirements for system scalability, performance and monitoring. ASO uses FTS to schedule and execute the transfers between the storage elements of the source and destination sites. It has evolved from a limited prototype to a highly adaptable service, which manages and monitors the user file placement and bookkeeping. To ensure system scalability and data monitoring, it employs new technologies such as a NoSQL database and re-uses existing components of PhEDEx and the FTS Dashboard. We present the asynchronous stage-out strategy and the architecture of the solution we implemented to deal with those issues and challenges. The deployment model for the high availability and scalability of the service is discussed. The performance of the system during the commissioning and the first phase of production are also shown, along with results from simulations designed to explore the limits of scalability.

  7. CMS Centre at CERN

    CERN Multimedia

    2007-01-01

    A new "CMS Centre" is being established on the CERN Meyrin site by the CMS collaboration. It will be a focal point for communications, where physicists will work together on data quality monitoring, detector calibration, offline analysis of physics events, and CMS computing operations. Construction of the CMS Centre begins in the historic Proton Synchrotron (PS) control room. Opened by Niels Bohr in 1960, the room will be reused by CMS to build its control centre; when finished, it will resemble the CERN Control Centre. The LHC@FNAL Centre, in operation at Fermilab in the US, will work very closely with the CMS Centre, as well as with the CERN Control Centre.

  8. Heavy Flavour distributions from CMS with 2017 data at 13 TeV

    CERN Document Server

    CMS Collaboration

    2018-01-01

    We report plots on heavy flavour from the data collected in 2017 by CMS at the LHC at 13 TeV. B meson performance plots in two different periods, characterized by different instantaneous luminosities, are included.

  9. Debugging Data Transfers in CMS

    CERN Document Server

    Bagliesi, G; Bloom, K; Bockelman, B; Bonacorsi, D; Fisk, I; Flix, J; Hernandez, J; D'Hondt, J; Kadastik, M; Klem, J; Kodolova, O; Kuo, C M; Letts, J; Maes, J; Magini, N; Metson, S; Piedra, J; Pukhaeva, N; Tuura, L; Sonajalg, S; Wu, Y; Van Mulders, P; Villella, I; Wurthwein, F

    2010-01-01

    The CMS experiment at CERN is preparing for LHC data taking in several computing preparation activities. In early 2007 a traffic load generator infrastructure for distributed data transfer tests, called the LoadTest, was designed and deployed to equip the WLCG sites that support CMS with a means for debugging, load-testing and commissioning data transfer routes among CMS computing centres. The LoadTest is based upon PhEDEx as a reliable, scalable data set replication system. The Debugging Data Transfers (DDT) task force was created to coordinate the debugging of the data transfer links. The task force aimed to commission the most crucial transfer routes among CMS sites by designing and enforcing a clear procedure to debug problematic links. This procedure moves a link from a debugging phase, in a separate and independent environment, to a production environment once a set of agreed conditions is achieved for that link. The goal was to deliver one by one working transfer routes to the CMS data operations team...

  10. CMS Factsheet

    CERN Multimedia

    Lapka, Marzena; Rao, Achintya

    2016-01-01

    CMS Factsheets: containing facts about the CMS collaboration and detector. Printed copies of the English version are available from the CMS Secretariat. Responsible for translations: English only - E.Gibney (updated 2015)

  11. A Population WB-PBPK Model of Colistin and its Prodrug CMS in Pigs: Focus on the Renal Distribution and Excretion.

    Science.gov (United States)

    Viel, Alexis; Henri, Jérôme; Bouchène, Salim; Laroche, Julian; Rolland, Jean-Guy; Manceau, Jacqueline; Laurentie, Michel; Couet, William; Grégoire, Nicolas

    2018-03-12

    The objective was the development of a whole-body physiologically-based pharmacokinetic (WB-PBPK) model for colistin, and its prodrug colistimethate sodium (CMS), in pigs to explore their tissue distribution, especially in kidneys. Plasma and tissue concentrations of CMS and colistin were measured after systemic administrations of different dosing regimens of CMS in pigs. The WB-PBPK model was developed based on these data according to a non-linear mixed effect approach and using NONMEM software. A detailed sub-model was implemented for kidneys to handle the complex disposition of CMS and colistin within this organ. The WB-PBPK model well captured the kinetic profiles of CMS and colistin in plasma. In kidneys, an accumulation and slow elimination of colistin were observed and well described by the model. Kidneys seemed to have a major role in the elimination processes, through tubular secretion of CMS and intracellular degradation of colistin. Lastly, to illustrate the usefulness of the PBPK model, an estimation of the withdrawal periods after veterinary use of CMS in pigs was made. The WB-PBPK model gives an insight into the renal distribution and elimination of CMS and colistin in pigs; it may be further developed to explore the colistin induced-nephrotoxicity in humans.

  12. Overlapping clusters for distributed computation.

    Energy Technology Data Exchange (ETDEWEB)

    Mirrokni, Vahab (Google Research, New York, NY); Andersen, Reid (Microsoft Corporation, Redmond, WA); Gleich, David F.

    2010-11-01

    Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex-partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication-avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap. Our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al., Nature 2009; Mishra et al., WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability ρ∞; and (ii) the communication volume of a parallel PageRank solution (link-following α = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). Below, as the ratio increases, the swapping probability and the PageRank communication volume decrease.
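
    A small sketch of the volume-ratio bookkeeping mentioned above (the toy overlapping partition is invented; a ratio of 2 would mean the whole graph is stored twice):

        # Overlapping vertex clusters: some vertices are stored by several parts.
        clusters = [
            {0, 1, 2, 3},        # part owned by worker 0
            {2, 3, 4, 5},        # part owned by worker 1 (overlaps on 2, 3)
            {5, 6, 7, 0},        # part owned by worker 2 (overlaps on 5, 0)
        ]
        vertices = set().union(*clusters)
        volume_ratio = sum(len(c) for c in clusters) / len(vertices)
        print(f"volume ratio = {volume_ratio:.2f}")   # 12 copies / 8 vertices = 1.50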

  13. Health and performance monitoring of the online computer cluster of CMS

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, G.; et al.

    2012-01-01

    The CMS experiment at the LHC features over 2'500 devices that need constant monitoring in order to ensure proper data taking. The monitoring solution has been migrated from Nagios to Icinga, with several useful plugins. The motivations behind the migration and the selection of the plugins are discussed.

  14. Towards distributed multiscale computing for the VPH

    NARCIS (Netherlands)

    Hoekstra, A.G.; Coveney, P.

    2010-01-01

    Multiscale modeling is fundamental to the Virtual Physiological Human (VPH) initiative. Most detailed three-dimensional multiscale models lead to prohibitive computational demands. As a possible solution we present MAPPER, a computational science infrastructure for Distributed Multiscale Computing

  15. CMS Collaboration

    International Nuclear Information System (INIS)

    Faridah Mohammad Idris; Wan Ahmad Tajuddin Wan Abdullah; Zainol Abidin Ibrahim

    2013-01-01

    Full-text: The CMS Collaboration is an international scientific collaboration based at the European Organization for Nuclear Research (CERN), Switzerland, dedicated to carrying out research in experimental particle physics. Consisting of 179 institutions from 41 countries from all around the world, the CMS Collaboration hosts a general-purpose detector, the Compact Muon Solenoid (CMS), with which its members conduct experiments on the collisions of two proton beams accelerated in the LHC ring to a centre-of-mass energy of 8 TeV. In this paper, we describe how the CMS detector is used by the scientists in the CMS Collaboration to reconstruct the most basic building blocks of matter. (author)

  16. Operational experience with CMS Tier-2 sites

    International Nuclear Information System (INIS)

    Gonzalez Caballero, I

    2010-01-01

    In the CMS computing model, more than one third of the computing resources are located at Tier-2 sites, which are distributed across the countries in the collaboration. These sites are the primary platform for user analyses; they host datasets that are created at Tier-1 sites, and users from all CMS institutes submit analysis jobs that run on those data through grid interfaces. They are also the primary resource for the production of large simulation samples for general use in the experiment. As a result, Tier-2 sites have an interesting mix of organized experiment-controlled activities and chaotic user-controlled activities. CMS currently operates about 40 Tier-2 sites in 22 countries, making the sites a far-flung computational and social network. We describe our operational experience with the sites, touching on our achievements, the lessons learned, and the challenges for the future.

  17. CMS Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...

  18. 78 FR 49525 - Privacy Act of 1974; CMS Computer Match No. 2013-06; HHS Computer Match No. 1308

    Science.gov (United States)

    2013-08-14

    ... certain protections for individuals applying for and receiving Federal benefits. Section 7201 of the.... Verify match findings before reducing, suspending, terminating, or denying an individual's benefits or... credit (APTC) and cost sharing reductions (CSR). The data will be used by CMS in its capacity as a...

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  20. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual reviews are posted.   CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat a...

  3. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ Management- CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. Management - CB - MB - FB Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2007 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the nature of employment and ...

  6. Computer Graphics Simulations of Sampling Distributions.

    Science.gov (United States)

    Gordon, Florence S.; Gordon, Sheldon P.

    1989-01-01

    Describes the use of computer graphics simulations to enhance student understanding of sampling distributions that arise in introductory statistics. Highlights include the distribution of sample proportions, the distribution of the difference of sample means, the distribution of the difference of sample proportions, and the distribution of sample…
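
    A minimal sketch of the kind of simulation the article describes, here for the distribution of sample proportions (the values of p, n and the number of trials are arbitrary illustrations):

        import numpy as np

        rng = np.random.default_rng(0)
        p, n, trials = 0.3, 50, 10_000
        p_hat = rng.binomial(n, p, size=trials) / n     # one proportion per sample
        print(f"mean of p-hat: {p_hat.mean():.4f}  (theory: {p})")
        print(f"std  of p-hat: {p_hat.std():.4f}  "
              f"(theory: {np.sqrt(p * (1 - p) / n):.4f})")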

  7. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the ICMS Web site. The following items can be found on: http://cms.cern.ch/iCMS Management – CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. Management – CB – MB – FB Agendas and minutes are accessible to CMS members through Indico. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2008 Annual Reviews are posted in Indico. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral student upon completion of their theses.  Therefore it is requested that Ph.D students inform the CMS Secretariat about the nature of employment and name of their first employer. The Notes, Conference Reports and Theses published si...

  8. Support system for ATLAS distributed computing operations

    CERN Document Server

    Kishimoto, Tomoe; The ATLAS collaboration

    2018-01-01

    The ATLAS distributed computing system has allowed the experiment to successfully meet the challenges of LHC Run 2. In order for distributed computing to operate smoothly and efficiently, several support teams are organized in the ATLAS experiment. The ADCoS (ATLAS Distributed Computing Operation Shifts) is a dedicated group of shifters who follow and report failing jobs, failing data transfers between sites, degradation of ATLAS central computing services, and more. The DAST (Distributed Analysis Support Team) provides user support to resolve issues related to running distributed analysis on the grid. The CRC (Computing Run Coordinator) maintains a global view of the day-to-day operations. In this presentation, the status and operational experience of the support system for ATLAS distributed computing in LHC Run 2 will be reported. This report also includes operations experience from the grid site point of view, and an analysis of the errors that create the biggest waste of wallclock time. The report of oper...

  9. A distributed computer system for digitising machines

    International Nuclear Information System (INIS)

    Bairstow, R.; Barlow, J.; Waters, M.; Watson, J.

    1977-07-01

    This paper describes a Distributed Computing System, based on micro computers, for the monitoring and control of digitising tables used by the Rutherford Laboratory Bubble Chamber Research Group in the measurement of bubble chamber photographs. (author)

  10. Distributed Processing in Cloud Computing

    OpenAIRE

    Mavridis, Ilias; Karatza, Eleni

    2016-01-01

    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016), Timisoara, Romania, February 8-11, 2016. Cloud computing offers a wide range of resources and services through the Internet that can be used for various purposes. The rapid growth of cloud computing has exempted many companies and institutions from the burden of maintaining expensive hardware and software infrastructure. With characteristics like high scalability, availability ...

  11. Distributed computing and nuclear reactor analysis

    International Nuclear Information System (INIS)

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-01-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations

  12. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining the steepest descent method and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.
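
    As a minimal illustration of such a loop (a sketch only: it uses scikit-learn's GaussianProcessRegressor with a simple upper-confidence-bound rule rather than the acquisition strategy of the paper, and the toy objective and all settings are invented):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def objective(x):                    # stand-in for an expensive log-posterior
            return -(x - 2.0) ** 2 + np.sin(5 * x)

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 4, size=(3, 1))   # a few initial evaluations
        y = objective(X).ravel()
        grid = np.linspace(0, 4, 400).reshape(-1, 1)

        for _ in range(10):                  # sequential design loop
            gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
            mu, sigma = gp.predict(grid, return_std=True)
            x_next = grid[np.argmax(mu + 2.0 * sigma)]      # UCB acquisition
            X = np.vstack([X, [x_next]])
            y = np.append(y, objective(x_next))

        best = int(np.argmax(y))
        print("best x found:", X[best, 0], "value:", y[best])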

  13. FEL simulations using distributed computing

    NARCIS (Netherlands)

    Einstein, J.; Biedron, S.G.; Freund, H.P.; Milton, S.V.; Van Der Slot, P. J M; Bernabeu, G.

    2016-01-01

    While simulation tools are available and have been used regularly for simulating light sources, including Free-Electron Lasers, the increasing availability and lower cost of accelerated computing open up new opportunities. This paper highlights a method of accelerating and parallelizing code ...

  14. Impossibility results for distributed computing

    CERN Document Server

    Attiya, Hagit

    2014-01-01

    To understand the power of distributed systems, it is necessary to understand their inherent limitations: what problems cannot be solved in particular systems, or without sufficient resources (such as time or space). This book presents key techniques for proving such impossibility results and applies them to a variety of different problems in a variety of different system models. Insights gained from these results are highlighted, aspects of a problem that make it difficult are isolated, features of an architecture that make it inadequate for solving certain problems efficiently are identified

  15. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, increasing the number of sites available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  16. LHCb: LHCb Distributed Computing Operations

    CERN Multimedia

    Stagni, F

    2011-01-01

    The proliferation of tools for monitoring both activities and infrastructure, together with the pressing need for prompt reaction to problems impacting data taking, data reconstruction, data reprocessing and user analysis, led to the need to better organize the huge amount of information available. The monitoring system for LHCb Grid computing relies on many heterogeneous and independent sources of information, offering different views for a better understanding of problems, while an operations team and defined procedures have been put in place to handle them. This work summarizes the state of the art of LHCb Grid operations, emphasizing the reasons behind various choices and the tools currently in use for our daily activities. We highlight the most common problems experienced across years of activity on the WLCG infrastructure, the services with their criticality, the procedures in place, the relevant metrics, and the tools available and the ones still missing.

  17. Distributed computing by oblivious mobile robots

    CERN Document Server

    Flocchini, Paola; Santoro, Nicola

    2012-01-01

    The study of what can be computed by a team of autonomous mobile robots, originally started in robotics and AI, has become increasingly popular in theoretical computer science (especially in distributed computing), where it is now an integral part of the investigations on computability by mobile entities. The robots are identical computational entities located and able to move in a spatial universe; they operate without explicit communication and are usually unable to remember the past; they are extremely simple, with limited resources, and individually quite weak. However, collectively the ro

  18. Distributed computer systems theory and practice

    CERN Document Server

    Zedan, H S M

    2014-01-01

    Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems, such as the amoeba system, argus, Andrew, and grapevine. One paper discusses the concepts and notations for concurrent programming, particularly language notation used in computer programming, synchronization methods, and also compares three classes of languages. Another paper explains load balancing or load redistribution to improve system performance, namely, static balancing and adaptive load balancing. For program effici

  19. ATLAS Distributed Computing in LHC Run2

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increased data rate, the computing demands of Monte-Carlo simulation, and new approaches to ATLAS analysis dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible utilization of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model, the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defin...

  20. CRAB3: Establishing a new generation of services for distributed analysis at CMS

    CERN Document Server

    Spiga, Daniele

    2012-01-01

    In CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services, servicing the user tasks. The new gener...

  1. Mobile Agents in Networking and Distributed Computing

    CERN Document Server

    Cao, Jiannong

    2012-01-01

    The book focuses on mobile agents, which are computer programs that can autonomously migrate between network sites. This text introduces the concepts and principles of mobile agents, provides an overview of mobile agent technology, and focuses on applications in networking and distributed computing.

  2. A Software Rejuvenation Framework for Distributed Computing

    Science.gov (United States)

    Chau, Savio

    2009-01-01

    A performability-oriented conceptual framework for software rejuvenation has been constructed as a means of increasing levels of reliability and performance in distributed stateful computing. As used here, performability-oriented signifies that the construction of the framework is guided by the concept of analyzing the ability of a given computing system to deliver services with gracefully degradable performance. The framework is especially intended to support applications that involve stateful replicas of server computers.

  3. Distributed simulation of large computer systems

    International Nuclear Information System (INIS)

    Marzolla, M.

    2001-01-01

    Sequential simulation of large complex physical systems is often regarded as a computationally expensive task. In order to speed up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) has been introduced since the late 70s. The authors analyze the applicability of PDES to the modeling and analysis of large computer systems; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large 'compute farms'. Some feasibility tests have been performed on a prototype distributed simulator.
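
    For illustration, a minimal sequential discrete-event core of the kind that PDES parallelizes (a sketch only; the two-server 'compute farm' workload and all rates are invented):

        import heapq
        import random

        random.seed(0)
        events = []                # priority queue of (time, tie-breaker, action)
        counter = 0

        def schedule(t, action):
            """Insert a future event into the event list."""
            global counter
            heapq.heappush(events, (t, counter, action))
            counter += 1

        busy_until = [0.0, 0.0]    # two identical servers in the "farm"

        def job_arrival(t):
            server = 0 if busy_until[0] <= busy_until[1] else 1
            start = max(t, busy_until[server])
            busy_until[server] = start + random.expovariate(1.0)   # service time
            schedule(t + random.expovariate(2.0), job_arrival)      # next arrival

        HORIZON = 10.0
        schedule(0.0, job_arrival)
        while events:
            t, _, action = heapq.heappop(events)
            if t > HORIZON:
                break
            action(t)
        print(f"stopped at t = {t:.3f}, servers busy until {busy_until}")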

  4. Geant4 Hadronic Cascade Models and CMS Data Analysis : Computational Challenges in the LHC era

    CERN Document Server

    Heikkinen, Aatos

    This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, the computer technology of clusters, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions, up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we es...

  5. ATLAS Distributed Computing: Experience and Evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centers around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics program including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2014 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  6. ATLAS distributed computing: experience and evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2014-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25/fb of data. The total volume of beam and simulated data products exceeds 100~PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  7. Distributed quantum computing with single photon sources

    International Nuclear Information System (INIS)

    Beige, A.; Kwek, L.C.

    2005-01-01

    Full text: Distributed quantum computing requires the ability to perform nonlocal gate operations between the distant nodes (stationary qubits) of a large network. To achieve this, it has been proposed to interconvert stationary qubits with flying qubits. In contrast to this, we show that distributed quantum computing only requires the ability to encode stationary qubits into flying qubits but not the conversion of flying qubits into stationary qubits. We describe a scheme for the realization of an eventually deterministic controlled phase gate by performing measurements on pairs of flying qubits. Our scheme could be implemented with a linear optics quantum computing setup including sources for the generation of single photons on demand, linear optics elements and photon detectors. In the presence of photon loss and finite detector efficiencies, the scheme could be used to build large cluster states for one way quantum computing with a high fidelity. (author)

  8. An outlook of the user support model to educate the users community at the CMS Experiment

    CERN Document Server

    Malik, Sudhir

    2011-01-01

    The CMS (Compact Muon Solenoid) experiment is one of the two large general-purpose particle physics detectors built at the LHC (Large Hadron Collider) at CERN in Geneva, Switzerland. The diverse collaboration, combined with a highly distributed computing environment and petabytes of data collected per year, makes CMS unlike any previous High Energy Physics collaboration. This presents new challenges in educating and bringing users, coming from different cultural, linguistic and social backgrounds, up to speed to contribute to the physics analysis. CMS has been able to deal with this new paradigm by deploying a user support structure model that uses collaborative tools to educate about software, computing and physics tools specific to CMS. To carry out the user support mission worldwide, an LHC Physics Centre (LPC) was created a few years ago at Fermilab as a hub for US physicists. The LPC serves as a "brick and mortar" location of physics excellence for CMS physicists, where graduate and postgraduate scien...

  9. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  10. Distributed computing environment for Mine Warfare Command

    OpenAIRE

    Pritchard, Lane L.

    1993-01-01

    Approved for public release; distribution is unlimited. The Mine Warfare Command in Charleston, South Carolina has been converting its information systems architecture from a centralized mainframe based system to a decentralized network of personal computers over the past several years. This thesis analyzes the progress of the evolution as of May of 1992. The building blocks of a distributed architecture are discussed in relation to the choices the Mine Warfare Command has made to date. Ar...

  11. ATLAS distributed computing: experience and evolution

    International Nuclear Information System (INIS)

    Nairz, A

    2014-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, energies and event complexities. An essential requirement will be the efficient utilisation of current and future processor technologies as well as a broad range of computing platforms, including supercomputing and cloud resources. We will report on experience gained thus far and our progress in preparing ATLAS computing for the future

  12. Intelligent Distributed Computing VI : Proceedings of the 6th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Badica, Costin; Malgeri, Michele; Unland, Rainer

    2013-01-01

    This book represents the combined peer-reviewed proceedings of the Sixth International Symposium on Intelligent Distributed Computing -- IDC 2012, of the International Workshop on Agents for Cloud -- A4C 2012 and of the Fourth International Workshop on Multi-Agent Systems Technology and Semantics -- MASTS 2012. All the events were held in Calabria, Italy during September 24-26, 2012. The 37 contributions published in this book address many topics related to theory and applications of intelligent distributed computing and multi-agent systems, including: adaptive and autonomous distributed systems, agent programming, ambient assisted living systems, business process modeling and verification, cloud computing, coalition formation, decision support systems, distributed optimization and constraint satisfaction, gesture recognition, intelligent energy management in WSNs, intelligent logistics, machine learning, mobile agents, parallel and distributed computational intelligence, parallel evolutionary computing, trus...

  13. 28 October 2013- Former US Vice President A. Gore signing the guest book with Technology Department Head F. Bordry, Head of International Relations R. Voss, Director for Research and Scientific Computing S. Bertolucci and CMS Collaboration Spokesperson J. Incandela.

    CERN Multimedia

    Maximilien Brice

    2013-01-01

    28 October 2013- Former US Vice President A. Gore signing the guest book with Technology Department Head F. Bordry, Head of International Relations R. Voss, Director for Research and Scientific Computing S. Bertolucci and CMS Collaboration Spokesperson J. Incandela.

  14. CMS Status

    International Nuclear Information System (INIS)

    Dobrzynski, L.

    2007-01-01

    The status of the construction and installation of the CMS detector is reviewed. The 4 T magnet has been cold since the end of February 2006. Its commissioning up to the nominal field started in July 2006, allowing a Cosmic Challenge in which elements of the final detector are involved. All big mechanical pieces equipped with muon chambers have been assembled in the surface hall SX5. Since mid-July the detector has been closed, with the commissioned HCAL, two ECAL supermodules and representative elements of the silicon tracker. The trigger system as well as the DAQ are tested. With the physics TDR completed, CMS is now ready for the promising signal hunting. (author)

  15. Evolution of CMS Workload Management Towards Multicore Job Support

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Calero Yzquierdo, A. [Madrid, CIEMAT; Hernández, J. M. [Madrid, CIEMAT; Khan, F. A. [Quaid-i-Azam U.; Letts, J. [UC, San Diego; Majewski, K. [Fermilab; Rodrigues, A. M. [Fermilab; McCrea, A. [UC, San Diego; Vaandering, E. [Fermilab

    2015-12-23

    The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of LHC Run 2. High-pileup complex-collision events represent a challenge for the traditional sequential programming in terms of memory and processing time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible during 2015 for the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.
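
    To make the "internal dynamic partitioning" concrete, the sketch below shows the basic idea of a pilot dividing its allocated cores among payloads of different core counts. It is a hypothetical Python illustration only (the names and the greedy first-fit policy are invented for this note), not CMS or glideinWMS code.

        # Hypothetical sketch: a multicore pilot greedily packing payloads of
        # various core counts into the cores allocated to its slot.
        from dataclasses import dataclass

        @dataclass
        class Payload:
            name: str
            cores: int  # cores requested by this payload

        def schedule(free_cores: int, queue: list) -> list:
            """Choose payloads to start now without exceeding the free cores."""
            started = []
            for payload in sorted(queue, key=lambda p: p.cores, reverse=True):
                if payload.cores <= free_cores:
                    free_cores -= payload.cores
                    started.append(payload)
            return started

        if __name__ == "__main__":
            # An 8-core pilot mixing one multicore reconstruction payload
            # with smaller single- and dual-core payloads.
            queue = [Payload("reco-multicore", 4), Payload("analysis-1", 1),
                     Payload("mc-gen", 2), Payload("analysis-2", 1),
                     Payload("analysis-3", 1)]
            for p in schedule(8, queue):
                print(f"starting {p.name} on {p.cores} core(s)")

    A production pilot additionally has to track memory, payload lifetimes and draining of the slot, all of which are deliberately left out of this sketch.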

  16. ATLAS Distributed Computing in LHC Run2

    International Nuclear Information System (INIS)

    Campana, Simone

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. A flexible computing utilization exploring the use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been introduced to better manage the lifecycle of the data. In this note, an overview of the operational experience with the new system and its evolution is presented. (paper)

  17. Research computing in a distributed cloud environment

    International Nuclear Information System (INIS)

    Fransham, K; Agarwal, A; Armstrong, P; Bishop, A; Charbonneau, A; Desmarais, R; Hill, N; Gable, I; Gaudet, S; Goliath, S; Impey, R; Leavett-Brown, C; Ouellete, J; Paterson, M; Pritchet, C; Penfold-Brown, D; Podaima, W; Schade, D; Sobie, R J

    2010-01-01

    The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.
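
    The core loop described in the abstract (respond to queued batch jobs by booting user-customized VMs on whichever cloud still has capacity) can be sketched as follows. This is a hypothetical Python illustration; the class and method names are invented and do not reflect the actual Cloud Scheduler interfaces.

        # Hypothetical sketch of a cloud-scheduling loop: for each queued job,
        # boot a VM of the requested image on the first cloud with free capacity.
        from dataclasses import dataclass, field

        @dataclass
        class Cloud:
            name: str
            capacity: int                       # VM slots available
            running: list = field(default_factory=list)

            def has_room(self) -> bool:
                return len(self.running) < self.capacity

            def boot(self, image: str) -> None:
                self.running.append(image)
                print(f"[{self.name}] booted VM with image {image}")

        def dispatch(queued_jobs: list, clouds: list) -> None:
            for job in queued_jobs:
                target = next((c for c in clouds if c.has_room()), None)
                if target is None:
                    print(f"no capacity for job {job['id']}, leaving it queued")
                    continue
                target.boot(job["vm_image"])

        if __name__ == "__main__":
            clouds = [Cloud("private-site", 2), Cloud("commercial-iaas", 4)]
            jobs = [{"id": i, "vm_image": "analysis-vm.img"} for i in range(4)]
            dispatch(jobs, clouds)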

  18. CMS Awards

    CERN Multimedia

    2004-01-01

    Ali Mohammad Rafiee receives the CMS Gold Award from Michel Della Negra of CMS. As part of the fifth annual CMS Awards, Iranian contractor HEPCO, located in Arak, an industrial town 200 km west of Tehran, received their Gold Award in a ceremony held on 14 June 2004 (the other award winners were reported in bulletin 13/2004). The Awards are given each year to a small number of the approximately one thousand contractors working on the CMS project. Gold Awards are given for outstanding technical achievement in work carried out for the detector. HEPCO received the Award for the excellent quality of their work in constructing two 25 tonne support tables, two 75 tonne shields (FCS) and eight supporting brackets to lower the HF into the cavern. Welds and machining obtained tolerances that were very difficult in structures of that size. Mr. A. M. Rafiee, the General Manager of the company, acknowledged the benefits of this collaboration, and thanked the efforts and skills of the many staff involved.

  19. CMS Detector Posters

    CERN Multimedia

    2016-01-01

    CMS Detector posters (produced in 2000): CMS installation CMS collaboration From the Big Bang to Stars LHC Magnetic Field Magnet System Tracking System Tracker Electronics Calorimetry Electromagnetic Calorimeter Hadronic Calorimeter Muon System Muon Detectors Trigger and data acquisition (DAQ) ECAL posters (produced in 2010, FR & EN): CMS ECAL CMS ECAL-Supermodule cooling and mechatronics CMS ECAL-Supermodule assembly

  20. CMS resource utilization and limitations on the grid after the first two years of LHC collisions

    Energy Technology Data Exchange (ETDEWEB)

    Bagliesi, Giuseppe [Pisa U.; Bloom, Kenneth [Nebraska U.; Bonacorsi, Daniele [Bologna U.; Brew, Chris [Rutherford; Fisk, Ian [Fermilab; Flix, Jose [Madrid, CIEMAT; Kreuzer, Peter [Aachen, Tech. Hochsch.; Sciaba, Andrea [CERN

    2012-01-01

    After years of development, the CMS distributed computing system is now in full operation. The LHC continues to set records for operational performance, and CMS records data at more than 300 Hz. Because of the intensity of the beams, there are multiple proton-proton interactions per beam crossing, leading to ever-larger event sizes and processing times. The CMS computing system has responded admirably to these challenges, but some reoptimization of the computing model has been required to maximize the efficient delivery of data analysis results by the collaboration in the face of increasingly constrained computing resources. We present the current status of the system, describe the recent performance, and discuss the challenges ahead and how CMS intends to meet them.

  1. Cloud Computing as Evolution of Distributed Computing – A Case Study for SlapOS Distributed Cloud Computing Platform

    Directory of Open Access Journals (Sweden)

    George SUCIU

    2013-01-01

    Full Text Available The cloud computing paradigm has been defined from several points of view, the main two directions being either as an evolution of the grid and distributed computing paradigm, or, on the contrary, as a disruptive revolution in the classical paradigms of operating systems, network layers and web applications. This paper presents a distributed cloud computing platform called SlapOS, which unifies technologies and communication protocols into a new technology model for offering any application as a service. Both cloud and distributed computing can be efficient methods for optimizing resources that are aggregated from a grid of standard PCs hosted in homes, offices and small data centers. The paper fills a gap in the existing distributed computing literature by providing a distributed cloud computing model which can be applied for deploying various applications.

  2. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact of, and addressing issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  3. Operation of the ATLAS distributed computing

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2018-01-01

    We describe the central operation of the ATLAS distributed computing system. The majority of compute intensive activities within ATLAS are carried out on some 350,000 CPU cores on the Grid, augmented by opportunistic usage of significant HPC and volunteer resources. The increasing scale, and challenging new payloads, demand fine-tuning of operational procedures together with timely developments of the production system. We describe several such developments, motivated directly from operational experience. Optimization of inefficient task requests, from both official production and users, is made possible by automatic detection of payload properties. User education, job shaping or preventative throttling help to increase the overall throughput of the available resources.

  4. Overview of the ATLAS distributed computing system

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration

    2018-01-01

    The CERN ATLAS experiment successfully uses a worldwide computing infrastructure to support the physics program during LHC Run 2. The grid workflow system PanDA routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total more than 300 PB of data is distributed over more than 150 sites in the WLCG and handled by the ATLAS data management system Rucio. To prepare for the ever growing LHC luminosity in future runs new developments are underway to even more efficiently use opportunistic resources such as HPCs and utilize new technologies. This presentation will review and explain the outline and the performance of the ATLAS distributed computing system and give an outlook to new workflow and data management ideas for the beginning of the LHC Run 3.

  5. Decentralized Resource Management in Distributed Computer Systems.

    Science.gov (United States)

    1982-02-01

    directly exchanging user state information. Eventcounts and sequencers correspond to semaphores in the sense that synchronization primitives are used to...and techniques are required to achieve synchronization in distributed computers without reliance on any centralized entity such as a semaphore ...known solutions to the access synchronization problem was Dijkstra’s semaphore [12]. The importance of the semaphore is that it correctly addresses the
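
    To make the snippet above concrete: an eventcount/sequencer pair can provide mutual exclusion without a central semaphore. Below is a small illustrative Python version of the classic take-a-ticket/await/advance pattern (shown with threads for brevity; the thesis is concerned with distributed implementations). It is added here as an aid to the reader, not taken from the source.

        # Illustrative eventcount/sequencer mutual exclusion:
        # take a ticket, wait for the eventcount to reach it, then advance.
        import itertools
        import threading

        class Sequencer:
            """ticket() hands out strictly increasing integers 0, 1, 2, ..."""
            def __init__(self):
                self._counter = itertools.count()
                self._lock = threading.Lock()

            def ticket(self) -> int:
                with self._lock:
                    return next(self._counter)

        class EventCount:
            """await_(v) blocks until the count reaches v; advance() increments."""
            def __init__(self):
                self._value = 0
                self._cond = threading.Condition()

            def advance(self) -> None:
                with self._cond:
                    self._value += 1
                    self._cond.notify_all()

            def await_(self, value: int) -> None:
                with self._cond:
                    self._cond.wait_for(lambda: self._value >= value)

        seq, ec, shared = Sequencer(), EventCount(), []

        def worker(i: int) -> None:
            t = seq.ticket()   # my place in line
            ec.await_(t)       # wait until it is my turn
            shared.append(i)   # critical section
            ec.advance()       # let the next ticket holder in

        threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("entries appended under mutual exclusion:", shared)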

  6. ATLAS Distributed Computing: Its Central Services core

    CERN Document Server

    Lee, Christopher Jon; The ATLAS collaboration

    2018-01-01

    The ATLAS Distributed Computing (ADC) Project is responsible for the off-line processing of data produced by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. It facilitates data and workload management for ATLAS computing on the Worldwide LHC Computing Grid (WLCG). ADC Central Services operations (CSops) is a vital part of ADC, responsible for the deployment and configuration of services needed by ATLAS computing and operation of those services on CERN IT infrastructure, providing knowledge of CERN IT services to ATLAS service managers and developers, and supporting them in case of issues. Currently this entails the management of thirty seven different OpenStack projects, with more than five thousand cores allocated for these virtual machines, as well as overseeing the distribution of twenty nine petabytes of storage space in EOS for ATLAS. As the LHC begins to get ready for the next long shut-down, which will bring in many new upgrades to allow for more data to be captured by the on-line syste...

  7. Data analytics in the ATLAS Distributed Computing

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2015-01-01

    The ATLAS Data analytics effort is focused on creating systems which provide the ATLAS ADC with new capabilities for understanding distributed systems and overall operational performance. These capabilities include: warehousing information from multiple systems (the production and distributed analysis system - PanDA, the distributed data management system - Rucio, the file transfer system, various monitoring services etc.); providing a platform to execute arbitrary data mining and machine learning algorithms over aggregated data; satisfying a variety of use cases for different user roles; and hosting new third-party analytics services on a scalable compute platform. We describe the implemented system where: data sources are existing RDBMS (Oracle) and Flume collectors; a Hadoop cluster is used to store the data; native Hadoop and Apache Pig scripts are used for data aggregation; and R for in-depth analytics. Part of the data is indexed in ElasticSearch so both simpler investigations and complex dashboards can be made ...
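
    The warehousing-plus-aggregation step described in the record boils down to group-by summaries over job and transfer records. A small hypothetical pandas sketch of that step is shown below purely for illustration; the actual system uses Hadoop, Apache Pig and R as stated above.

        # Hypothetical illustration of the aggregation step: per-site job counts,
        # failure rates and walltime from warehoused job records.
        import pandas as pd

        jobs = pd.DataFrame([
            {"site": "SITE_A", "status": "finished", "walltime_s": 3600},
            {"site": "SITE_A", "status": "failed",   "walltime_s": 120},
            {"site": "SITE_B", "status": "finished", "walltime_s": 7200},
            {"site": "SITE_B", "status": "finished", "walltime_s": 5400},
        ])

        summary = jobs.groupby("site").agg(
            jobs=("status", "size"),
            failed=("status", lambda s: int((s == "failed").sum())),
            walltime_h=("walltime_s", lambda s: s.sum() / 3600.0),
        )
        summary["failure_rate"] = summary["failed"] / summary["jobs"]
        print(summary)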

  8. Improving collaborative documentation in CMS

    International Nuclear Information System (INIS)

    Lassila-Perini, Kati; Salmi, Leena

    2010-01-01

    Complete and up-to-date documentation is essential for efficient data analysis in a large and complex collaboration like CMS. Good documentation reduces the time spent in problem solving for users and software developers. The scientists in our research environment do not necessarily have the interests or skills of professional technical writers. This results in inconsistencies in the documentation. To improve the quality, we have started a multidisciplinary project involving CMS user support and expertise in technical communication from the University of Turku, Finland. In this paper, we present possible approaches to study the usability of the documentation, for instance, usability tests conducted recently for the CMS software and computing user documentation.

  9. Distributed Computing for the Pierre Auger Observatory

    International Nuclear Information System (INIS)

    Chudoba, J.

    2015-01-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and the second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years VO auger has been among the top ten EGI users in terms of total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for the bulk production. The new system also has the possibility to use available resources in clouds. The Dirac File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we give a comparison of the old and the new production system and report the experience on migrating to the new system. (paper)

  10. Distributed Computing for the Pierre Auger Observatory

    Science.gov (United States)

    Chudoba, J.

    2015-12-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and the second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years VO auger has been among the top ten EGI users in terms of total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for the bulk production. The new system also has the possibility to use available resources in clouds. The Dirac File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we give a comparison of the old and the new production system and report the experience on migrating to the new system.

  11. Distributed computer control system for reactor optimization

    International Nuclear Information System (INIS)

    Williams, A.H.

    1983-01-01

    At the Oldbury power station a prototype distributed computer control system has been installed. This system is designed to support research and development into improved reactor temperature control methods. This work will lead to the development and demonstration of new optimal control systems for improvement of plant efficiency and increase of generated output. The system can collect plant data from special test instrumentation connected to dedicated scanners and from the station's existing data processing system. The system can also, via distributed microprocessor-based interface units, make adjustments to the desired reactor channel gas exit temperatures. The existing control equipment will then adjust the height of control rods to maintain operation at these temperatures. The design of the distributed system is based on extensive experience with distributed systems for direct digital control, operator display and plant monitoring. The paper describes various aspects of this system, with particular emphasis on: (1) the hierarchical system structure; (2) the modular construction of the system to facilitate installation, commissioning and testing, and to reduce maintenance to module replacement; (3) the integration of the system into the station's existing data processing system; (4) distributed microprocessor-based interfaces to the reactor controls, with extensive security facilities implemented by hardware and software; (5) data transfer using point-to-point and bussed data links; (6) man-machine communication based on VDUs with computer input push-buttons and touch-sensitive screens; and (7) the use of a software system supporting a high-level engineer-orientated programming language, at all levels in the system, together with comprehensive data link management

  12. FF-EMU: a radiation tolerant ASIC for the distribution of timing, trigger and control signals in the CMS End-Cap Muon detector

    International Nuclear Information System (INIS)

    Campagnari, C; Costantino, N; Magazzù, G; Tongiani, Claudio

    2012-01-01

    A radiation tolerant integrated circuit for the distribution of clock, trigger and control signals in the Front-End electronics of the CMS End-Cap Muon detector has been developed in the IBM CMOS 130nm technology. The circuit houses transmitter and receiver interfaces to serial links implementing the FF-LYNX protocol, which allows the integrated transmission of triggers and data frames with different latency constraints. Encoder and decoder modules associate signal transitions to FF-LYNX frames. The system and ASIC architecture and behavior, and the results of testing and characterization of the ASIC prototypes, will be presented.

  13. Pseudo-interactive monitoring in distributed computing

    International Nuclear Information System (INIS)

    Sfiligoi, I.; Bradley, D.; Livny, M.

    2009-01-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.
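
    The diagnostic commands listed in the abstract can be bundled into a single snapshot that is written from inside the job's working directory and later fetched by the user. The following is a generic Python illustration of that idea only; it is not the Condor-based mechanism the paper describes.

        # Generic illustration: collect a pseudo-interactive snapshot of a
        # running job's sandbox and processes into a log file.
        import subprocess

        SNAPSHOT_COMMANDS = [
            ["ls", "-l"],              # sandbox contents
            ["ps", "-ef"],             # running processes
            ["top", "-b", "-n", "1"],  # one-shot resource usage snapshot
        ]

        def snapshot(outfile: str = "monitor_snapshot.txt") -> None:
            """Run each diagnostic command and append its output to a file
            that the user can later retrieve through the batch system."""
            with open(outfile, "w") as out:
                for cmd in SNAPSHOT_COMMANDS:
                    out.write(f"$ {' '.join(cmd)}\n")
                    result = subprocess.run(cmd, capture_output=True, text=True)
                    out.write(result.stdout or result.stderr)
                    out.write("\n")

        if __name__ == "__main__":
            snapshot()
            print("wrote monitor_snapshot.txt")

    In present-day HTCondor pools a comparable capability is also exposed directly by the condor_ssh_to_job command.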

  14. Pseudo-interactive monitoring in distributed computing

    International Nuclear Information System (INIS)

    Sfiligoi, I; Bradley, D; Livny, M

    2010-01-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  15. Pseudo-interactive monitoring in distributed computing

    Energy Technology Data Exchange (ETDEWEB)

    Sfiligoi, I.; /Fermilab; Bradley, D.; Livny, M.; /Wisconsin U., Madison

    2009-05-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  16. The CMS tracker control system

    International Nuclear Information System (INIS)

    Dierlamm, A; Dirkes, G H; Fahrer, M; Frey, M; Hartmann, F; Masetti, L; Militaru, O; Shah, S Y; Stringer, R; Tsirou, A

    2008-01-01

    The Tracker Control System (TCS) is a distributed control software to operate about 2000 power supplies for the silicon modules of the CMS Tracker and monitor its environmental sensors. TCS must thus be able to handle about 10⁴ power supply parameters, about 10³ environmental probes from the Programmable Logic Controllers of the Tracker Safety System (TSS), about 10⁵ parameters read via DAQ from the DCUs in all front end hybrids and from CCUs in all control groups. TCS is built on top of an industrial SCADA program (PVSS) extended with a framework developed at CERN (JCOP) and used by all LHC experiments. The logical partitioning of the detector is reflected in the hierarchical structure of the TCS, where commands move down to the individual hardware devices, while states are reported up to the root which is interfaced to the broader CMS control system. The system computes and continuously monitors the mean and maximum values of critical parameters and updates the percentage of currently operating hardware. Automatic procedures switch off selected parts of the detector using detailed granularity and avoiding widespread TSS intervention
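
    The command-down/state-up hierarchy described above is a common SCADA pattern. The minimal Python sketch below illustrates the idea (commands fan out to leaf devices, states are summarized upwards as the worst child state); it is an invented illustration, not the PVSS/JCOP implementation.

        # Hypothetical sketch of a command-down / state-up control hierarchy.
        class Node:
            SEVERITY = {"ON": 0, "OFF": 1, "ERROR": 2}

            def __init__(self, name, children=None):
                self.name = name
                self.children = children or []
                self.state = "OFF"

            def command(self, cmd: str) -> None:
                """Commands propagate down to the leaves (hardware devices)."""
                if self.children:
                    for child in self.children:
                        child.command(cmd)
                else:
                    self.state = cmd   # a real leaf would drive a power supply
                self.state = self.summary()

            def summary(self) -> str:
                """States propagate up: a node reports its worst child state."""
                if not self.children:
                    return self.state
                return max((c.summary() for c in self.children),
                           key=self.SEVERITY.get)

        tracker = Node("TRACKER", [
            Node("TIB", [Node("TIB_PS_1"), Node("TIB_PS_2")]),
            Node("TOB", [Node("TOB_PS_1")]),
        ])
        tracker.command("ON")
        print(tracker.name, "is", tracker.summary())   # -> TRACKER is ON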

  17. The CMS tracker control system

    Science.gov (United States)

    Dierlamm, A.; Dirkes, G. H.; Fahrer, M.; Frey, M.; Hartmann, F.; Masetti, L.; Militaru, O.; Shah, S. Y.; Stringer, R.; Tsirou, A.

    2008-07-01

    The Tracker Control System (TCS) is a distributed control software to operate about 2000 power supplies for the silicon modules of the CMS Tracker and monitor its environmental sensors. TCS must thus be able to handle about 10⁴ power supply parameters, about 10³ environmental probes from the Programmable Logic Controllers of the Tracker Safety System (TSS), about 10⁵ parameters read via DAQ from the DCUs in all front end hybrids and from CCUs in all control groups. TCS is built on top of an industrial SCADA program (PVSS) extended with a framework developed at CERN (JCOP) and used by all LHC experiments. The logical partitioning of the detector is reflected in the hierarchical structure of the TCS, where commands move down to the individual hardware devices, while states are reported up to the root which is interfaced to the broader CMS control system. The system computes and continuously monitors the mean and maximum values of critical parameters and updates the percentage of currently operating hardware. Automatic procedures switch off selected parts of the detector using detailed granularity and avoiding widespread TSS intervention.

  18. Higher order correlations in computed particle distributions

    International Nuclear Information System (INIS)

    Hanerfeld, H.; Herrmannsfeldt, W.; Miller, R.H.

    1989-03-01

    The rms emittances calculated for beam distributions using computer simulations are frequently dominated by higher order aberrations. Thus there are substantial open areas in the phase space plots. It has long been observed that the rms emittance is not an invariant to beam manipulations. The usual emittance calculation removes the correlation between transverse displacement and transverse momentum. In this paper, we explore the possibility of defining higher order correlations that can be removed from the distribution to result in a lower limit to the realizable emittance. The intent is that by inserting the correct combinations of linear lenses at the proper position, the beam may recombine in a way that cancels the effects of some higher order forces. An example might be the non-linear transverse space charge forces which cause a beam to spread. If the beam is then refocused so that the same non-linear forces reverse the inward velocities, the resulting phase space distribution may reasonably approximate the original distribution. The approach to finding the location and strength of the proper lens to optimize the transported beam is based on work by Bruce Carlsten of Los Alamos National Laboratory. 11 refs., 4 figs
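
    For the reader's convenience (standard definition, not reproduced from the paper): the rms emittance referred to above, with the linear x-x' correlation removed, is

        \varepsilon_{\mathrm{rms}} \;=\; \sqrt{\langle x^{2}\rangle\,\langle x'^{2}\rangle \;-\; \langle x\,x'\rangle^{2}}

    and the question raised in the abstract is whether analogous higher-order correlation terms can likewise be identified and cancelled with suitably placed linear lenses, lowering the realizable emittance.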

  19. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00068610; The ATLAS collaboration; Barberis, Dario; Crepe-Renaudin, Sabine Chrystel; De, Kaushik; Fassi, Farida; Stradling, Alden; Svatos, Michal; Vartapetian, Armen; Wolters, Helmut

    2017-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts’ workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by a person of the expert team called the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMODs former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help with the training of future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing...

  20. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    Adam Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts' workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run1, this task was accomplished by the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run2. The CRC position was proposed to cover some of the AMOD’s former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help train future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates ...

  1. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    CERN Document Server

    Molina-Perez, Jorge Amando

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator on duty at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react timely on any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is explo...
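
    The Lemon-style alarming mentioned here amounts to comparing collected metrics against configured ranges and raising an alarm when a value falls outside them. The toy Python illustration below captures only that logic; the real Lemon/SLS systems are of course far richer.

        # Toy illustration of threshold alarming on collected metrics.
        THRESHOLDS = {                      # metric -> (min, max) allowed range
            "cpu_load":         (0.0, 20.0),
            "free_disk_gb":     (50.0, float("inf")),
            "transfer_quality": (0.8, 1.0),
        }

        def check(metrics: dict) -> list:
            """Return alarm messages for metrics outside their allowed range."""
            alarms = []
            for name, value in metrics.items():
                lo, hi = THRESHOLDS.get(name, (float("-inf"), float("inf")))
                if not lo <= value <= hi:
                    alarms.append(f"ALARM: {name}={value} outside [{lo}, {hi}]")
            return alarms

        if __name__ == "__main__":
            sample = {"cpu_load": 35.2, "free_disk_gb": 12.0,
                      "transfer_quality": 0.95}
            for alarm in check(sample):
                print(alarm)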

  2. CMS Results of Grid-related activities using the early deployed LCG Implementations

    CERN Document Server

    Coviello, Tommaso; De Filippis, Nicola; Donvito, Giacinto; Maggi, Giorgio; Pierro, A; Bonacorsi, Daniele; Capiluppi, Paolo; Fanfani, Alessandra; Grandi, Claudio; Maroney, Owen; Nebrensky, H; Donno, Flavia; Jank, Werner; Sciabà, Andrea; Sinanis, Nick; Colling, David; Tallini, Hugh; MacEvoy, Barry C; Wang, Shaowen; Kaiser, Joseph; Osman, Asif; Charlot, Claude; Semenjouk, I; Biasotto, Massimo; Fantinel, Sergio; Corvo, Marco; Fanzago, Federica; Mazzucato, Mirco; Verlato, Marco; Go, Apollo; Khan Chia Ming; Andreozzi, S; Cavalli, A; Ciaschini, V; Ghiselli, A; Italiano, A; Spataro, F; Vistoli, C; Tortone, G

    2004-01-01

    The CMS Experiment is defining its Computing Model and is experimenting and testing the new distributed features offered by many Grid Projects. This report describes use by CMS of the early-deployed systems of LCG (LCG-0 and LCG-1). Most of the used features here discussed came from the EU implemented middleware, even if some of the tested capabilities were in common with the US developed middleware. This report describes the simulation of about 2 million CMS detector events, which were generated as part of the official CMS Data Challenge 04 (Pre-Challenge-Production). The simulations were done on a CMS-dedicated testbed (CMS-LCG-0), where an ad-hoc modified version of the LCG-0 middleware was deployed and where the CMS Experiment had complete control, and on the official early LCG delivered system (with the LCG-1 version). Modifications to the CMS simulation tools for event production were studied and achieved, together with necessary adaptations of the middleware services. Bilateral feedback (betwee...

  3. A multipurpose computing center with distributed resources

    Science.gov (United States)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs WLCG Tier-2 center for the ALICE and the ATLAS experiments; the same group of services is used by astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). OSG stack is installed for the NOvA experiment. Other groups of users use directly local batch system. Storage capacity is distributed to several locations. DPM servers used by the ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for the ATLAS and the PAO is extended by resources of the CESNET - the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources using the standard ATLAS tools in the same way as the local storage without noticing this geographical distribution. Computing clusters LUNA and EXMAG dedicated to users mostly from the Solid State Physics departments offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum with distributed batch system based on torque with a custom scheduler. Clusters are installed remotely by the MetaCentrum team and a local contact helps only when needed. Users from IoP have exclusive access only to a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any user of the MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic with a capacity of more than 12000 cores in total.

  4. Real time computer system with distributed microprocessors

    International Nuclear Information System (INIS)

    Heger, D.; Steusloff, H.; Syrbe, M.

    1979-01-01

    The usual centralized structure of computer systems, especially of process computer systems, cannot sufficiently use the progress of very large-scale integrated semiconductor technology with respect to increasing the reliability and performance and to decreasing the expenses, especially of the external periphery. This and the increasing demands on process control systems have led the authors to examine the structure of such systems in general and to adapt it to the new surroundings. Computer systems with distributed, optical fibre-coupled microprocessors allow a very favourable problem-solving approach with decentralized controlled buslines and functional redundancy with automatic fault diagnosis and reconfiguration. A suitable programming system supports these hardware properties: PEARL for multicomputer systems, dynamic loader, processor and network operating system. The necessary design principles for this are proved mainly theoretically and by value analysis. An optimal overall system of this new generation of process control systems was established, supported by results of 2 PDV projects (modular operating systems, input/output colour screen system as control panel), for the purpose of testing by applying the system for the control of 28 pit furnaces of a steelworks. (orig.) [de

  5. CMS Wallet Card

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Wallet Card is a quick reference statistical summary on annual CMS program and financial data. The CMS Wallet Card is available for each year from 2004...

  6. CMS Fast Facts

    Data.gov (United States)

    U.S. Department of Health & Human Services — CMS has developed a new quick reference statistical summary on annual CMS program and financial data. CMS Fast Facts includes summary information on total program...

  7. 78 FR 48170 - Privacy Act of 1974; CMS Computer Match No. 2013-12; HHS Computer Match No. 1307; SSA Computer...

    Science.gov (United States)

    2013-08-07

    ....hhs.gov . SUPPLEMENTARY INFORMATION: The Computer Matching and Privacy Protection Act of 1988 (Public... computer matching involving Federal agencies could be performed and adding certain protections for... Affordability Programs under the Patient Protection and Affordable Care Act''. SECURITY CLASSIFICATION...

  8. CMS Centres Worldwide - a New Collaborative Infrastructure

    International Nuclear Information System (INIS)

    Taylor, Lucas

    2011-01-01

    The CMS Experiment at the LHC has established a network of more than fifty inter-connected 'CMS Centres' at CERN and in institutes in the Americas, Asia, Australasia, and Europe. These facilities are used by people doing CMS detector and computing grid operations, remote shifts, data quality monitoring and analysis, as well as education and outreach. We present the computing, software, and collaborative tools and videoconferencing systems. These include permanently running 'telepresence' video links (hardware-based H.323, EVO and Vidyo), Webcasts, and generic Web tools such as CMS-TV for broadcasting live monitoring and outreach information. Being Web-based and experiment-independent, these systems could easily be extended to other organizations. We describe the experiences of using CMS Centres Worldwide in the CMS data-taking operations as well as for major media events with several hundred TV channels, radio stations, and many more press journalists simultaneously around the world.

  9. 10th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Seghrouchni, Amal; Beynier, Aurélie; Camacho, David; Herpson, Cédric; Hindriks, Koen; Novais, Paulo

    2017-01-01

    This book presents the combined peer-reviewed proceedings of the tenth International Symposium on Intelligent Distributed Computing (IDC’2016), which was held in Paris, France from October 10th to 12th, 2016. The 23 contributions address a range of topics related to theory and application of intelligent distributed computing, including: Intelligent Distributed Agent-Based Systems, Ambient Intelligence and Social Networks, Computational Sustainability, Intelligent Distributed Knowledge Representation and Processing, Smart Networks, Networked Intelligence and Intelligent Distributed Applications, amongst others.

  10. An Applet-based Anonymous Distributed Computing System.

    Science.gov (United States)

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  11. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    Energy Technology Data Exchange (ETDEWEB)

    Molina-Perez, J. [UC, San Diego; Bonacorsi, D. [Bologna U.; Gutsche, O. [Fermilab; Sciaba, A. [CERN; Flix, J. [Madrid, CIEMAT; Kreuzer, P. [CERN; Fajardo, E. [Andes U., Bogota; Boccali, T. [INFN, Pisa; Klute, M. [MIT; Gomes, D. [Rio de Janeiro State U.; Kaselis, R. [Vilnius U.; Du, R. [Beijing, Inst. High Energy Phys.; Magini, N. [CERN; Butenas, I. [Vilnius U.; Wang, W. [Beijing, Inst. High Energy Phys.

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react timely on any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  12. Monitoring techniques and alarm procedures for CMS Services and Sites in WLCG

    International Nuclear Information System (INIS)

    Molina-Perez, J; Sciabà, A; Magini, N; Bonacorsi, D; Gutsche, O; Flix, J; Kreuzer, P; Fajardo, E; Boccali, T; Klute, M; Gomes, D; Kaselis, R; Butenas, I; Du, R; Wang, W

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react timely on any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  13. Xcache in the ATLAS Distributed Computing Environment

    CERN Document Server

    Hanushevsky, Andrew; The ATLAS collaboration

    2018-01-01

    Built upon the Xrootd Proxy Cache (Xcache), we developed additional features to adapt it to the ATLAS distributed computing and data environment, especially its data management system RUCIO, to help improve the cache hit rate, as well as features that make the Xcache easy to use, similar to the way the Squid cache is used by the HTTP protocol. We are optimizing Xcache for HPC environments, and adapting the HL-LHC Data Lakes design as its component for data delivery. We packaged the software in CVMFS, and in Docker and Singularity containers, in order to standardize the deployment and reduce the cost to resolve issues at remote sites. We are also integrating it into RUCIO as a volatile storage system, and into various ATLAS workflows such as user analysis,

  14. Distributed computer controls for accelerator systems

    International Nuclear Information System (INIS)

    Moore, T.L.

    1988-09-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multi-user Tandem Facility using an extremely modular approach in hardware and software. The two tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100K. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the efficient implementation of the system. The definition, design, implementation, operation and total system performance will be discussed. 3 refs

  15. Distributed computer controls for accelerator systems

    Science.gov (United States)

    Moore, T. L.

    1989-04-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multiuser Tandem Facility using an extremely modular approach in hardware and software. The two tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100k. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the effective implementation of the system. The definition, design, implementation, operation and total system performance will be discussed.

  16. Distributed computer controls for accelerator systems

    International Nuclear Information System (INIS)

    Moore, T.L.

    1989-01-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multiuser Tandem Facility using an extremely modular approach in hardware and software. The two tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100k. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the effective implementation of the system. The definition, design, implementation, operation and total system performance will be discussed. (orig.)

  17. Measurement of pseudorapidity distributions of charged particles in proton-proton collisions at √s = 8 TeV by the CMS and TOTEM experiments

    CERN Document Server

    Chatrchyan, Serguei; Sirunyan, Albert M; Tumasyan, Armen; Adam, Wolfgang; Bergauer, Thomas; Dragicevic, Marko; Erö, Janos; Fabjan, Christian; Friedl, Markus; Fruehwirth, Rudolf; Ghete, Vasile Mihai; Hartl, Christian; Hörmann, Natascha; Hrubec, Josef; Jeitler, Manfred; Kiesenhofer, Wolfgang; Knünz, Valentin; Krammer, Manfred; Krätschmer, Ilse; Liko, Dietrich; Mikulec, Ivan; Rabady, Dinyar; Rahbaran, Babak; Rohringer, Herbert; Schöfbeck, Robert; Strauss, Josef; Taurok, Anton; Treberer-Treberspurg, Wolfgang; Waltenberger, Wolfgang; Wulz, Claudia-Elisabeth; Mossolov, Vladimir; Shumeiko, Nikolai; Suarez Gonzalez, Juan; Alderweireldt, Sara; Bansal, Monika; Bansal, Sunil; Cornelis, Tom; De Wolf, Eddi A; Janssen, Xavier; Knutsson, Albert; Luyckx, Sten; Mucibello, Luca; Ochesanu, Silvia; Roland, Benoit; Rougny, Romain; Van Haevermaet, Hans; Van Mechelen, Pierre; Van Remortel, Nick; Van Spilbeeck, Alex; Blekman, Freya; Blyweert, Stijn; D'Hondt, Jorgen; Heracleous, Natalie; Kalogeropoulos, Alexis; Keaveney, James; Kim, Tae Jeong; Lowette, Steven; Maes, Michael; Olbrechts, Annik; Strom, Derek; Tavernier, Stefaan; Van Doninck, Walter; Van Mulders, Petra; Van Onsem, Gerrit Patrick; Villella, Ilaria; Caillol, Cécile; Clerbaux, Barbara; De Lentdecker, Gilles; Favart, Laurent; Gay, Arnaud; Léonard, Alexandre; Marage, Pierre Edouard; Mohammadi, Abdollah; Perniè, Luca; Reis, Thomas; Seva, Tomislav; Thomas, Laurent; Vander Velde, Catherine; Vanlaer, Pascal; Wang, Jian; Adler, Volker; Beernaert, Kelly; Benucci, Leonardo; Cimmino, Anna; Costantini, Silvia; Dildick, Sven; Garcia, Guillaume; Klein, Benjamin; Lellouch, Jérémie; Mccartin, Joseph; Ocampo Rios, Alberto Andres; Ryckbosch, Dirk; Salva Diblen, Sinem; Sigamani, Michael; Strobbe, Nadja; Thyssen, Filip; Tytgat, Michael; Walsh, Sinead; Yazgan, Efe; Zaganidis, Nicolas; Basegmez, Suzan; Beluffi, Camille; Bruno, Giacomo; Castello, Roberto; Caudron, Adrien; Ceard, Ludivine; Da Silveira, Gustavo Gil; Delaere, Christophe; Du Pree, Tristan; Favart, Denis; Forthomme, Laurent; Giammanco, Andrea; Hollar, Jonathan; Jez, Pavel; Komm, Matthias; Lemaitre, Vincent; Liao, Junhui; Militaru, Otilia; Nuttens, Claude; Pagano, Davide; Pin, Arnaud; Piotrzkowski, Krzysztof; Popov, Andrey; Quertenmont, Loic; Selvaggi, Michele; Vidal Marono, Miguel; Vizan Garcia, Jesus Manuel; Beliy, Nikita; Caebergs, Thierry; Daubie, Evelyne; Hammad, Gregory Habib; Alves, Gilvan; Correa Martins Junior, Marcos; Dos Reis Martins, Thiago; Pol, Maria Elena; Henrique Gomes E Souza, Moacyr; Aldá Júnior, Walter Luiz; Carvalho, Wagner; Chinellato, Jose; Custódio, Analu; Melo Da Costa, Eliza; De Jesus Damiao, Dilson; De Oliveira Martins, Carley; Fonseca De Souza, Sandro; Malbouisson, Helena; Malek, Magdalena; Matos Figueiredo, Diego; Mundim, Luiz; Nogima, Helio; Prado Da Silva, Wanda Lucia; Santaolalla, Javier; Santoro, Alberto; Sznajder, Andre; Tonelli Manganote, Edmilson José; Vilela Pereira, Antonio; Bernardes, Cesar Augusto; De Almeida Dias, Flavia; Tomei, Thiago; De Moraes Gregores, Eduardo; Mercadante, Pedro G; Novaes, Sergio F; Padula, Sandra; Genchev, Vladimir; Iaydjiev, Plamen; Marinov, Andrey; Piperov, Stefan; Rodozov, Mircho; Sultanov, Georgi; Vutova, Mariana; Dimitrov, Anton; Glushkov, Ivan; Hadjiiska, Roumyana; Kozhuharov, Venelin; Litov, Leander; Pavlov, Borislav; Petkov, Peicho; Bian, Jian-Guo; Chen, Guo-Ming; Chen, He-Sheng; Chen, Mingshui; Du, Ran; Jiang, Chun-Hua; Liang, Dong; Liang, Song; Meng, Xiangwei; Plestina, Roko; Tao, Junquan; Wang, Xianyou; Wang, Zheng; Asawatangtrakuldee, 
Chayanit; Ban, Yong; Guo, Yifei; Li, Qiang; Li, Wenbo; Liu, Shuai; Mao, Yajun; Qian, Si-Jin; Wang, Dayong; Zhang, Linlin; Zou, Wei; Avila, Carlos; Carrillo Montoya, Camilo Andres; Chaparro Sierra, Luisa Fernanda; Florez, Carlos; Gomez, Juan Pablo; Gomez Moreno, Bernardo; Sanabria, Juan Carlos; Godinovic, Nikola; Lelas, Damir; Polic, Dunja; Puljak, Ivica; Antunovic, Zeljko; Kovac, Marko; Brigljevic, Vuko; Kadija, Kreso; Luetic, Jelena; Mekterovic, Darko; Morovic, Srecko; Sudic, Lucija; Attikis, Alexandros; Mavromanolakis, Georgios; Mousa, Jehad; Nicolaou, Charalambos; Ptochos, Fotios; Razis, Panos A; Finger, Miroslav; Finger Jr, Michael; Abdelalim, Ahmed Ali; Assran, Yasser; Elgammal, Sherif; Ellithi Kamel, Ali; Mahmoud, Mohammed; Radi, Amr; Kadastik, Mario; Müntel, Mait; Murumaa, Marion; Raidal, Martti; Rebane, Liis; Tiko, Andres; Eerola, Paula; Fedi, Giacomo; Voutilainen, Mikko; Härkönen, Jaakko; Karimäki, Veikko; Kinnunen, Ritva; Kortelainen, Matti J; Lampén, Tapio; Lassila-Perini, Kati; Lehti, Sami; Lindén, Tomas; Luukka, Panja-Riina; Mäenpää, Teppo; Peltola, Timo; Tuominen, Eija; Tuominiemi, Jorma; Tuovinen, Esa; Wendland, Lauri; Tuuva, Tuure; Besancon, Marc; Couderc, Fabrice; Dejardin, Marc; Denegri, Daniel; Fabbro, Bernard; Faure, Jean-Louis; Ferri, Federico; Ganjour, Serguei; Givernaud, Alain; Gras, Philippe; Hamel de Monchenault, Gautier; Jarry, Patrick; Locci, Elizabeth; Malcles, Julie; Nayak, Aruna; Rander, John; Rosowsky, André; Titov, Maksym; Baffioni, Stephanie; Beaudette, Florian; Busson, Philippe; Charlot, Claude; Daci, Nadir; Dahms, Torsten; Dalchenko, Mykhailo; Dobrzynski, Ludwik; Florent, Alice; Granier de Cassagnac, Raphael; Miné, Philippe; Mironov, Camelia; Naranjo, Ivo Nicolas; Nguyen, Matthew; Ochando, Christophe; Paganini, Pascal; Sabes, David; Salerno, Roberto; Sauvan, Jean-baptiste; Sirois, Yves; Veelken, Christian; Yilmaz, Yetkin; Zabi, Alexandre; Agram, Jean-Laurent; Andrea, Jeremy; Bloch, Daniel; Brom, Jean-Marie; Chabert, Eric Christian; Collard, Caroline; Conte, Eric; Drouhin, Frédéric; Fontaine, Jean-Charles; Gelé, Denis; Goerlach, Ulrich; Goetzmann, Christophe; Juillot, Pierre; Le Bihan, Anne-Catherine; Van Hove, Pierre; Gadrat, Sébastien; Beauceron, Stephanie; Beaupere, Nicolas; Boudoul, Gaelle; Brochet, Sébastien; Chasserat, Julien; Chierici, Roberto; Contardo, Didier; Depasse, Pierre; El Mamouni, Houmani; Fan, Jiawei; Fay, Jean; Gascon, Susan; Gouzevitch, Maxime; Ille, Bernard; Kurca, Tibor; Lethuillier, Morgan; Mirabito, Laurent; Perries, Stephane; Ruiz Alvarez, José David; Sgandurra, Louis; Sordini, Viola; Vander Donckt, Muriel; Verdier, Patrice; Viret, Sébastien; Xiao, Hong; Tsamalaidze, Zviad; Autermann, Christian; Beranek, Sarah; Bontenackels, Michael; Calpas, Betty; Edelhoff, Matthias; Feld, Lutz; Hindrichs, Otto; Klein, Katja; Ostapchuk, Andrey; Perieanu, Adrian; Raupach, Frank; Sammet, Jan; Schael, Stefan; Sprenger, Daniel; Weber, Hendrik; Wittmer, Bruno; Zhukov, Valery; Ata, Metin; Caudron, Julien; Dietz-Laursonn, Erik; Duchardt, Deborah; Erdmann, Martin; Fischer, Robert; Güth, Andreas; Hebbeker, Thomas; Heidemann, Carsten; Hoepfner, Kerstin; Klingebiel, Dennis; Knutzen, Simon; Kreuzer, Peter; Merschmeyer, Markus; Meyer, Arnd; Olschewski, Mark; Padeken, Klaas; Papacz, Paul; Reithler, Hans; Schmitz, Stefan Antonius; Sonnenschein, Lars; Teyssier, Daniel; Thüer, Sebastian; Weber, Martin; Cherepanov, Vladimir; Erdogan, Yusuf; Flügge, Günter; Geenen, Heiko; Geisler, Matthias; Haj Ahmad, Wael; Hoehle, Felix; Kargoll, Bastian; Kress, Thomas; Kuessel, 
Yvonne; Lingemann, Joschka; Nowack, Andreas; Nugent, Ian Michael; Perchalla, Lars; Pooth, Oliver; Stahl, Achim; Asin, Ivan; Bartosik, Nazar; Behr, Joerg; Behrenhoff, Wolf; Behrens, Ulf; Bell, Alan James; Bergholz, Matthias; Bethani, Agni; Borras, Kerstin; Burgmeier, Armin; Cakir, Altan; Calligaris, Luigi; Campbell, Alan; Choudhury, Somnath; Costanza, Francesco; Diez Pardos, Carmen; Dooling, Samantha; Dorland, Tyler; Eckerlin, Guenter; Eckstein, Doris; Eichhorn, Thomas; Flucke, Gero; Geiser, Achim; Grebenyuk, Anastasia; Gunnellini, Paolo; Habib, Shiraz; Hauk, Johannes; Hellwig, Gregor; Hempel, Maria; Horton, Dean; Jung, Hannes; Kasemann, Matthias; Katsas, Panagiotis; Kieseler, Jan; Kleinwort, Claus; Krämer, Mira; Krücker, Dirk; Lange, Wolfgang; Leonard, Jessica; Lipka, Katerina; Lohmann, Wolfgang; Lutz, Benjamin; Mankel, Rainer; Marfin, Ihar; Melzer-Pellmann, Isabell-Alissandra; Meyer, Andreas Bernhard; Mnich, Joachim; Mussgiller, Andreas; Naumann-Emme, Sebastian; Novgorodova, Olga; Nowak, Friederike; Perrey, Hanno; Petrukhin, Alexey; Pitzl, Daniel; Placakyte, Ringaile; Raspereza, Alexei; Ribeiro Cipriano, Pedro M; Riedl, Caroline; Ron, Elias; Sahin, Mehmet Özgür; Salfeld-Nebgen, Jakob; Saxena, Pooja; Schmidt, Ringo; Schoerner-Sadenius, Thomas; Schröder, Matthias; Stein, Matthias; Vargas Trevino, Andrea Del Rocio; Walsh, Roberval; Wissing, Christoph; Aldaya Martin, Maria; Blobel, Volker; Enderle, Holger; Erfle, Joachim; Garutti, Erika; Goebel, Kristin; Görner, Martin; Gosselink, Martijn; Haller, Johannes; Höing, Rebekka Sophie; Kirschenmann, Henning; Klanner, Robert; Kogler, Roman; Lange, Jörn; Lapsien, Tobias; Lenz, Teresa; Marchesini, Ivan; Ott, Jochen; Peiffer, Thomas; Pietsch, Niklas; Rathjens, Denis; Sander, Christian; Schettler, Hannes; Schleper, Peter; Schlieckau, Eike; Schmidt, Alexander; Seidel, Markus; Sibille, Jennifer; Sola, Valentina; Stadie, Hartmut; Steinbrück, Georg; Troendle, Daniel; Usai, Emanuele; Vanelderen, Lukas; Barth, Christian; Baus, Colin; Berger, Joram; Böser, Christian; Butz, Erik; Chwalek, Thorsten; De Boer, Wim; Descroix, Alexis; Dierlamm, Alexander; Feindt, Michael; Guthoff, Moritz; Hartmann, Frank; Hauth, Thomas; Held, Hauke; Hoffmann, Karl-Heinz; Husemann, Ulrich; Katkov, Igor; Kornmayer, Andreas; Kuznetsova, Ekaterina; Lobelle Pardo, Patricia; Martschei, Daniel; Mozer, Matthias Ulrich; Müller, Thomas; Niegel, Martin; Nürnberg, Andreas; Oberst, Oliver; Quast, Gunter; Rabbertz, Klaus; Ratnikov, Fedor; Röcker, Steffen; Schilling, Frank-Peter; Schott, Gregory; Simonis, Hans-Jürgen; Stober, Fred-Markus Helmut; Ulrich, Ralf; Wagner-Kuhr, Jeannine; Wayand, Stefan; Weiler, Thomas; Wolf, Roger; Zeise, Manuel; Anagnostou, Georgios; Daskalakis, Georgios; Geralis, Theodoros; Kesisoglou, Stilianos; Kyriakis, Aristotelis; Loukas, Demetrios; Markou, Athanasios; Markou, Christos; Ntomari, Eleni; Psallidas, Andreas; Topsis-Giotis, Iasonas; Gouskos, Loukas; Panagiotou, Apostolos; Saoulidou, Niki; Stiliaris, Efstathios; Aslanoglou, Xenofon; Evangelou, Ioannis; Flouris, Giannis; Foudas, Costas; Jones, John; Kokkas, Panagiotis; Manthos, Nikolaos; Papadopoulos, Ioannis; Paradas, Evangelos; Bencze, Gyorgy; Hajdu, Csaba; Hidas, Pàl; Horvath, Dezso; Sikler, Ferenc; Veszpremi, Viktor; Vesztergombi, Gyorgy; Zsigmond, Anna Julia; Beni, Noemi; Czellar, Sandor; Molnar, Jozsef; Palinkas, Jozsef; Szillasi, Zoltan; Karancsi, János; Raics, Peter; Trocsanyi, Zoltan Laszlo; Ujvari, Balazs; Swain, Sanjay Kumar; Beri, Suman Bala; Bhatnagar, Vipin; Dhingra, Nitish; Gupta, Ruchi; Kaur, Manjit; 
Mehta, Manuk Zubin; Mittal, Monika; Nishu, Nishu; Sharma, Archana; Singh, Jasbir; Kumar, Ashok; Kumar, Arun; Ahuja, Sudha; Bhardwaj, Ashutosh; Choudhary, Brajesh C; Kumar, Ajay; Malhotra, Shivali; Naimuddin, Md; Ranjan, Kirti; Sharma, Varun; Shivpuri, Ram Krishen; Banerjee, Sunanda; Bhattacharya, Satyaki; Chatterjee, Kalyanmoy; Dutta, Suchandra; Gomber, Bhawna; Jain, Sandhya; Jain, Shilpi; Khurana, Raman; Modak, Atanu; Mukherjee, Swagata; Roy, Debarati; Sarkar, Subir; Sharan, Manoj; Singh, Anil; Abdulsalam, Abdulla; Dutta, Dipanwita; Kailas, Swaminathan; Kumar, Vineet; Mohanty, Ajit Kumar; Pant, Lalit Mohan; Shukla, Prashant; Topkar, Anita; Aziz, Tariq; Chatterjee, Rajdeep Mohan; Ganguly, Sanmay; Ghosh, Saranya; Guchait, Monoranjan; Gurtu, Atul; Kole, Gouranga; Kumar, Sanjeev; Maity, Manas; Majumder, Gobinda; Mazumdar, Kajari; Mohanty, Gagan Bihari; Parida, Bibhuti; Sudhakar, Katta; Wickramage, Nadeesha; Banerjee, Sudeshna; Dugad, Shashikant; Arfaei, Hessamaddin; Bakhshiansohi, Hamed; Behnamian, Hadi; Etesami, Seyed Mohsen; Fahim, Ali; Jafari, Abideh; Khakzad, Mohsen; Mohammadi Najafabadi, Mojtaba; Naseri, Mohsen; Paktinat Mehdiabadi, Saeid; Safarzadeh, Batool; Zeinali, Maryam; Grunewald, Martin; Abbrescia, Marcello; Barbone, Lucia; Calabria, Cesare; Chhibra, Simranjit Singh; Colaleo, Anna; Creanza, Donato; De Filippis, Nicola; De Palma, Mauro; Fiore, Luigi; Iaselli, Giuseppe; Maggi, Giorgio; Maggi, Marcello; Marangelli, Bartolomeo; My, Salvatore; Nuzzo, Salvatore; Pacifico, Nicola; Pompili, Alexis; Pugliese, Gabriella; Radogna, Raffaella; Selvaggi, Giovanna; Silvestris, Lucia; Singh, Gurpreet; Venditti, Rosamaria; Verwilligen, Piet; Zito, Giuseppe; Abbiendi, Giovanni; Benvenuti, Alberto; Bonacorsi, Daniele; Braibant-Giacomelli, Sylvie; Brigliadori, Luca; Campanini, Renato; Capiluppi, Paolo; Castro, Andrea; Cavallo, Francesca Romana; Codispoti, Giuseppe; Cuffiani, Marco; Dallavalle, Gaetano-Marco; Fabbri, Fabrizio; Fanfani, Alessandra; Fasanella, Daniele; Giacomelli, Paolo; Grandi, Claudio; Guiducci, Luigi; Marcellini, Stefano; Masetti, Gianni; Meneghelli, Marco; Montanari, Alessandro; Navarria, Francesco; Odorici, Fabrizio; Perrotta, Andrea; Primavera, Federica; Rossi, Antonio; Rovelli, Tiziano; Siroli, Gian Piero; Tosi, Nicolò; Travaglini, Riccardo; Albergo, Sebastiano; Cappello, Gigi; Chiorboli, Massimiliano; Costa, Salvatore; Giordano, Ferdinando; Potenza, Renato; Tricomi, Alessia; Tuve, Cristina; Barbagli, Giuseppe; Ciulli, Vitaliano; Civinini, Carlo; D'Alessandro, Raffaello; Focardi, Ettore; Gallo, Elisabetta; Gonzi, Sandro; Gori, Valentina; Lenzi, Piergiulio; Meschini, Marco; Paoletti, Simone; Sguazzoni, Giacomo; Tropiano, Antonio; Benussi, Luigi; Bianco, Stefano; Fabbri, Franco; Piccolo, Davide; Fabbricatore, Pasquale; Ferretti, Roberta; Ferro, Fabrizio; Lo Vetere, Maurizio; Musenich, Riccardo; Robutti, Enrico; Tosi, Silvano; Benaglia, Andrea; Dinardo, Mauro Emanuele; Fiorendi, Sara; Gennai, Simone; Gerosa, Raffaele; Ghezzi, Alessio; Govoni, Pietro; Lucchini, Marco Toliman; Malvezzi, Sandra; Manzoni, Riccardo Andrea; Martelli, Arabella; Marzocchi, Badder; Menasce, Dario; Moroni, Luigi; Paganoni, Marco; Pedrini, Daniele; Ragazzi, Stefano; Redaelli, Nicola; Tabarelli de Fatis, Tommaso; Buontempo, Salvatore; Cavallo, Nicola; Fabozzi, Francesco; Iorio, Alberto Orso Maria; Lista, Luca; Meola, Sabino; Merola, Mario; Paolucci, Pierluigi; Azzi, Patrizia; Bacchetta, Nicola; Branca, Antonio; Carlin, Roberto; Checchia, Paolo; Dorigo, Tommaso; Dosselli, Umberto; Galanti, Mario; Gasparini, 
Fabrizio; Gasparini, Ugo; Giubilato, Piero; Gozzelino, Andrea; Kanishchev, Konstantin; Lacaprara, Stefano; Lazzizzera, Ignazio; Margoni, Martino; Meneguzzo, Anna Teresa; Pazzini, Jacopo; Pegoraro, Matteo; Pozzobon, Nicola; Ronchese, Paolo; Simonetto, Franco; Torassa, Ezio; Tosi, Mia; Triossi, Andrea; Ventura, Sandro; Zotto, Pierluigi; Zucchetta, Alberto; Zumerle, Gianni; Gabusi, Michele; Ratti, Sergio P; Riccardi, Cristina; Vitulo, Paolo; Biasini, Maurizio; Bilei, Gian Mario; Fanò, Livio; Lariccia, Paolo; Mantovani, Giancarlo; Menichelli, Mauro; Romeo, Francesco; Saha, Anirban; Santocchia, Attilio; Spiezia, Aniello; Androsov, Konstantin; Azzurri, Paolo; Bagliesi, Giuseppe; Bernardini, Jacopo; Boccali, Tommaso; Broccolo, Giuseppe; Castaldi, Rino; Ciocci, Maria Agnese; Dell'Orso, Roberto; Fiori, Francesco; Foà, Lorenzo; Giassi, Alessandro; Grippo, Maria Teresa; Kraan, Aafke; Ligabue, Franco; Lomtadze, Teimuraz; Martini, Luca; Messineo, Alberto; Moon, Chang-Seong; Palla, Fabrizio; Rizzi, Andrea; Savoy-Navarro, Aurore; Serban, Alin Titus; Spagnolo, Paolo; Squillacioti, Paola; Tenchini, Roberto; Tonelli, Guido; Venturi, Andrea; Verdini, Piero Giorgio; Vernieri, Caterina; Barone, Luciano; Cavallari, Francesca; Del Re, Daniele; Diemoz, Marcella; Grassi, Marco; Jorda, Clara; Longo, Egidio; Margaroli, Fabrizio; Meridiani, Paolo; Micheli, Francesco; Nourbakhsh, Shervin; Organtini, Giovanni; Paramatti, Riccardo; Rahatlou, Shahram; Rovelli, Chiara; Soffi, Livia; Traczyk, Piotr; Amapane, Nicola; Arcidiacono, Roberta; Argiro, Stefano; Arneodo, Michele; Bellan, Riccardo; Biino, Cristina; Cartiglia, Nicolo; Casasso, Stefano; Costa, Marco; Degano, Alessandro; Demaria, Natale; Mariotti, Chiara; Maselli, Silvia; Migliore, Ernesto; Monaco, Vincenzo; Musich, Marco; Obertino, Maria Margherita; Ortona, Giacomo; Pacher, Luca; Pastrone, Nadia; Pelliccioni, Mario; Potenza, Alberto; Romero, Alessandra; Ruspa, Marta; Sacchi, Roberto; Solano, Ada; Staiano, Amedeo; Tamponi, Umberto; Belforte, Stefano; Candelise, Vieri; Casarsa, Massimo; Cossutti, Fabio; Della Ricca, Giuseppe; Gobbo, Benigno; La Licata, Chiara; Marone, Matteo; Montanino, Damiana; Penzo, Aldo; Schizzi, Andrea; Umer, Tomo; Zanetti, Anna; Chang, Sunghyun; Kim, Tae Yeon; Nam, Soon-Kwon; Kim, Dong Hee; Kim, Gui Nyun; Kim, Ji Eun; Kim, Min Suk; Kong, Dae Jung; Lee, Sangeun; Oh, Young Do; Park, Hyangkyu; Son, Dong-Chul; Kim, Jae Yool; Kim, Zero Jaeho; Song, Sanghyeon; Choi, Suyong; Gyun, Dooyeon; Hong, Byung-Sik; Jo, Mihee; Kim, Hyunchul; Kim, Yongsun; Lee, Kyong Sei; Park, Sung Keun; Roh, Youn; Choi, Minkyoo; Kim, Ji Hyun; Park, Chawon; Park, Inkyu; Park, Sangnam; Ryu, Geonmo; Choi, Young-Il; Choi, Young Kyu; Goh, Junghwan; Kwon, Eunhyang; Lee, Byounghoon; Lee, Jongseok; Seo, Hyunkwan; Yu, Intae; Juodagalvis, Andrius; Komaragiri, Jyothsna Rani; Castilla-Valdez, Heriberto; De La Cruz-Burelo, Eduard; Heredia-de La Cruz, Ivan; Lopez-Fernandez, Ricardo; Martínez-Ortega, Jorge; Sánchez Hernández, Alberto; Villasenor-Cendejas, Luis Manuel; Carrillo Moreno, Salvador; Vazquez Valencia, Fabiola; Salazar Ibarguen, Humberto Antonio; Casimiro Linares, Edgar; Morelos Pineda, Antonio; Krofcheck, David; Butler, Philip H; Doesburg, Robert; Reucroft, Steve; Ahmad, Muhammad; Asghar, Muhammad Irfan; Butt, Jamila; Hoorani, Hafeez R; Khan, Wajid Ali; Khurshid, Taimoor; Qazi, Shamona; Shah, Mehar Ali; Shoaib, Muhammad; Bialkowska, Helena; Bluj, Michal; Boimska, Bożena; Frueboes, Tomasz; Górski, Maciej; Kazana, Malgorzata; Nawrocki, Krzysztof; Romanowska-Rybinska, Katarzyna; 
Szleper, Michal; Wrochna, Grzegorz; Zalewski, Piotr; Brona, Grzegorz; Bunkowski, Karol; Cwiok, Mikolaj; Dominik, Wojciech; Doroba, Krzysztof; Kalinowski, Artur; Konecki, Marcin; Krolikowski, Jan; Misiura, Maciej; Wolszczak, Weronika; Bargassa, Pedrame; Beirão Da Cruz E Silva, Cristóvão; Faccioli, Pietro; Ferreira Parracho, Pedro Guilherme; Gallinaro, Michele; Nguyen, Federico; Rodrigues Antunes, Joao; Seixas, Joao; Varela, Joao; Vischia, Pietro; Afanasiev, Serguei; Bunin, Pavel; Golutvin, Igor; Gorbunov, Ilya; Kamenev, Alexey; Karjavin, Vladimir; Konoplyanikov, Viktor; Kozlov, Guennady; Lanev, Alexander; Malakhov, Alexander; Matveev, Viktor; Moisenz, Petr; Palichik, Vladimir; Perelygin, Victor; Shmatov, Sergey; Skatchkov, Nikolai; Smirnov, Vitaly; Zarubin, Anatoli; Golovtsov, Victor; Ivanov, Yury; Kim, Victor; Levchenko, Petr; Murzin, Victor; Oreshkin, Vadim; Smirnov, Igor; Sulimov, Valentin; Uvarov, Lev; Vavilov, Sergey; Vorobyev, Alexey; Vorobyev, Andrey; Andreev, Yuri; Dermenev, Alexander; Gninenko, Sergei; Golubev, Nikolai; Kirsanov, Mikhail; Krasnikov, Nikolai; Pashenkov, Anatoli; Tlisov, Danila; Toropin, Alexander; Epshteyn, Vladimir; Gavrilov, Vladimir; Lychkovskaya, Natalia; Popov, Vladimir; Safronov, Grigory; Semenov, Sergey; Spiridonov, Alexander; Stolin, Viatcheslav; Vlasov, Evgueni; Zhokin, Alexander; Andreev, Vladimir; Azarkin, Maksim; Dremin, Igor; Kirakosyan, Martin; Leonidov, Andrey; Mesyats, Gennady; Rusakov, Sergey V; Vinogradov, Alexey; Belyaev, Andrey; Bogdanova, Galina; Boos, Edouard; Khein, Lev; Klyukhin, Vyacheslav; Kodolova, Olga; Lokhtin, Igor; Lukina, Olga; Obraztsov, Stepan; Petrushanko, Sergey; Proskuryakov, Alexander; Savrin, Viktor; Volkov, Vladimir; Azhgirey, Igor; Bayshev, Igor; Bitioukov, Sergei; Kachanov, Vassili; Kalinin, Alexey; Konstantinov, Dmitri; Krychkine, Victor; Petrov, Vladimir; Ryutin, Roman; Sobol, Andrei; Tourtchanovitch, Leonid; Troshin, Sergey; Tyurin, Nikolay; Uzunian, Andrey; Volkov, Alexey; Adzic, Petar; Dordevic, Milos; Ekmedzic, Marko; Milosevic, Jovan; Aguilar-Benitez, Manuel; Alcaraz Maestre, Juan; Battilana, Carlo; Calvo, Enrique; Cerrada, Marcos; Chamizo Llatas, Maria; Colino, Nicanor; De La Cruz, Begona; Delgado Peris, Antonio; Domínguez Vázquez, Daniel; Fernandez Bedoya, Cristina; Fernández Ramos, Juan Pablo; Ferrando, Antonio; Flix, Jose; Fouz, Maria Cruz; Garcia-Abia, Pablo; Gonzalez Lopez, Oscar; Goy Lopez, Silvia; Hernandez, Jose M; Josa, Maria Isabel; Merino, Gonzalo; Navarro De Martino, Eduardo; Puerta Pelayo, Jesus; Quintario Olmeda, Adrián; Redondo, Ignacio; Romero, Luciano; Senghi Soares, Mara; Willmott, Carlos; Albajar, Carmen; de Trocóniz, Jorge F; Missiroli, Marino; Brun, Hugues; Cuevas, Javier; Fernandez Menendez, Javier; Folgueras, Santiago; Gonzalez Caballero, Isidro; Lloret Iglesias, Lara; Brochero Cifuentes, Javier Andres; Cabrillo, Iban Jose; Calderon, Alicia; Duarte Campderros, Jordi; Fernandez, Marcos; Gomez, Gervasio; Gonzalez Sanchez, Javier; Graziano, Alberto; Lopez Virto, Amparo; Marco, Jesus; Marco, Rafael; Martinez Rivero, Celso; Matorras, Francisco; Munoz Sanchez, Francisca Javiela; Piedra Gomez, Jonatan; Rodrigo, Teresa; Rodríguez-Marrero, Ana Yaiza; Ruiz-Jimeno, Alberto; Scodellaro, Luca; Vila, Ivan; Vilar Cortabitarte, Rocio; Abbaneo, Duccio; Auffray, Etiennette; Auzinger, Georg; Bachtis, Michail; Baillon, Paul; Ball, Austin; Barney, David; Bendavid, Joshua; Benhabib, Lamia; Benitez, Jose F; Bernet, Colin; Bianchi, Giovanni; Bloch, Philippe; Bocci, Andrea; Bonato, Alessio; Bondu, Olivier; Botta, 
Cristina; Breuker, Horst; Camporesi, Tiziano; Cerminara, Gianluca; Christiansen, Tim; Coarasa Perez, Jose Antonio; Colafranceschi, Stefano; D'Alfonso, Mariarosaria; D'Enterria, David; Dabrowski, Anne; David Tinoco Mendes, Andre; De Guio, Federico; De Roeck, Albert; De Visscher, Simon; Di Guida, Salvatore; Dobson, Marc; Dupont-Sagorin, Niels; Elliott-Peisert, Anna; Eugster, Jürg; Franzoni, Giovanni; Funk, Wolfgang; Giffels, Manuel; Gigi, Dominique; Gill, Karl; Giordano, Domenico; Girone, Maria; Giunta, Marina; Glege, Frank; Gomez-Reino Garrido, Robert; Gowdy, Stephen; Guida, Roberto; Hammer, Josef; Hansen, Magnus; Harris, Philip; Innocente, Vincenzo; Janot, Patrick; Karavakis, Edward; Kousouris, Konstantinos; Krajczar, Krisztian; Lecoq, Paul; Lourenco, Carlos; Magini, Nicolo; Malgeri, Luca; Mannelli, Marcello; Masetti, Lorenzo; Meijers, Frans; Mersi, Stefano; Meschi, Emilio; Moortgat, Filip; Mulders, Martijn; Musella, Pasquale; Orsini, Luciano; Palencia Cortezon, Enrique; Perez, Emmanuelle; Perrozzi, Luca; Petrilli, Achille; Petrucciani, Giovanni; Pfeiffer, Andreas; Pierini, Maurizio; Pimiä, Martti; Piparo, Danilo; Plagge, Michael; Racz, Attila; Reece, William; Rolandi, Gigi; Rovere, Marco; Sakulin, Hannes; Santanastasio, Francesco; Schäfer, Christoph; Schwick, Christoph; Sekmen, Sezen; Sharma, Archana; Siegrist, Patrice; Silva, Pedro; Simon, Michal; Sphicas, Paraskevas; Spiga, Daniele; Steggemann, Jan; Stieger, Benjamin; Stoye, Markus; Tsirou, Andromachi; Veres, Gabor Istvan; Vlimant, Jean-Roch; Wöhri, Hermine Katharina; Zeuner, Wolfram Dietrich; Bertl, Willi; Deiters, Konrad; Erdmann, Wolfram; Horisberger, Roland; Ingram, Quentin; Kaestli, Hans-Christian; König, Stefan; Kotlinski, Danek; Langenegger, Urs; Renker, Dieter; Rohe, Tilman; Bachmair, Felix; Bäni, Lukas; Bianchini, Lorenzo; Bortignon, Pierluigi; Buchmann, Marco-Andrea; Casal, Bruno; Chanon, Nicolas; Deisher, Amanda; Dissertori, Günther; Dittmar, Michael; Donegà, Mauro; Dünser, Marc; Eller, Philipp; Grab, Christoph; Hits, Dmitry; Lustermann, Werner; Mangano, Boris; Marini, Andrea Carlo; Martinez Ruiz del Arbol, Pablo; Meister, Daniel; Mohr, Niklas; Nägeli, Christoph; Nef, Pascal; Nessi-Tedaldi, Francesca; Pandolfi, Francesco; Pape, Luc; Pauss, Felicitas; Peruzzi, Marco; Quittnat, Milena; Ronga, Frederic Jean; Rossini, Marco; Starodumov, Andrei; Takahashi, Maiko; Tauscher, Ludwig; Theofilatos, Konstantinos; Treille, Daniel; Wallny, Rainer; Weber, Hannsjoerg Artur; Amsler, Claude; Chiochia, Vincenzo; De Cosa, Annapaola; Favaro, Carlotta; Hinzmann, Andreas; Hreus, Tomas; Ivova Rikova, Mirena; Kilminster, Benjamin; Millan Mejias, Barbara; Ngadiuba, Jennifer; Robmann, Peter; Snoek, Hella; Taroni, Silvia; Verzetti, Mauro; Yang, Yong; Cardaci, Marco; Chen, Kuan-Hsin; Ferro, Cristina; Kuo, Chia-Ming; Li, Syue-Wei; Lin, Willis; Lu, Yun-Ju; Volpe, Roberta; Yu, Shin-Shan; Bartalini, Paolo; Chang, Paoti; Chang, You-Hao; Chang, Yu-Wei; Chao, Yuan; Chen, Kai-Feng; Chen, Po-Hsun; Dietz, Charles; Grundler, Ulysses; Hou, George Wei-Shu; Hsiung, Yee; Kao, Kai-Yi; Lei, Yeong-Jyi; Liu, Yueh-Feng; Lu, Rong-Shyang; Majumder, Devdatta; Petrakou, Eleni; Shi, Xin; Shiu, Jing-Ge; Tzeng, Yeng-Ming; Wang, Minzu; Wilken, Rachel; Asavapibhop, Burin; Suwonjandee, Narumon; Adiguzel, Aytul; Bakirci, Mustafa Numan; Cerci, Salim; Dozen, Candan; Dumanoglu, Isa; Eskut, Eda; Girgis, Semiray; Gokbulut, Gul; Gurpinar, Emine; Hos, Ilknur; Kangal, Evrim Ersin; Kayis Topaksu, Aysel; Onengut, Gulsen; Ozdemir, Kadri; Ozturk, Sertac; Polatoz, Ayse; Sogut, Kenan; Sunar Cerci, 
Deniz; Tali, Bayram; Topakli, Huseyin; Vergili, Mehmet; Akin, Ilina Vasileva; Aliev, Takhmasib; Bilin, Bugra; Bilmis, Selcuk; Deniz, Muhammed; Gamsizkan, Halil; Guler, Ali Murat; Karapinar, Guler; Ocalan, Kadir; Ozpineci, Altug; Serin, Meltem; Sever, Ramazan; Surat, Ugur Emrah; Yalvac, Metin; Zeyrek, Mehmet; Gülmez, Erhan; Isildak, Bora; Kaya, Mithat; Kaya, Ozlem; Ozkorucuklu, Suat; Bahtiyar, Hüseyin; Barlas, Esra; Cankocak, Kerem; Günaydin, Yusuf Oguzhan; Vardarlı, Fuat Ilkehan; Yücel, Mete; Levchuk, Leonid; Sorokin, Pavel; Brooke, James John; Clement, Emyr; Cussans, David; Flacher, Henning; Frazier, Robert; Goldstein, Joel; Grimes, Mark; Heath, Greg P; Heath, Helen F; Jacob, Jeson; Kreczko, Lukasz; Lucas, Chris; Meng, Zhaoxia; Newbold, Dave M; Paramesvaran, Sudarshan; Poll, Anthony; Senkin, Sergey; Smith, Vincent J; Williams, Thomas; Bell, Ken W; Belyaev, Alexander; Brew, Christopher; Brown, Robert M; Cockerill, David JA; Coughlan, John A; Harder, Kristian; Harper, Sam; Ilic, Jelena; Olaiya, Emmanuel; Petyt, David; Shepherd-Themistocleous, Claire; Thea, Alessandro; Tomalin, Ian R; Womersley, William John; Worm, Steven; Baber, Mark; Bainbridge, Robert; Buchmuller, Oliver; Burton, Darren; Colling, David; Cripps, Nicholas; Cutajar, Michael; Dauncey, Paul; Davies, Gavin; Della Negra, Michel; Ferguson, William; Fulcher, Jonathan; Futyan, David; Gilbert, Andrew; Guneratne Bryer, Arlo; Hall, Geoffrey; Hatherell, Zoe; Hays, Jonathan; Iles, Gregory; Jarvis, Martyn; Karapostoli, Georgia; Kenzie, Matthew; Lane, Rebecca; Lucas, Robyn; Lyons, Louis; Magnan, Anne-Marie; Marrouche, Jad; Mathias, Bryn; Nandi, Robin; Nash, Jordan; Nikitenko, Alexander; Pela, Joao; Pesaresi, Mark; Petridis, Konstantinos; Pioppi, Michele; Raymond, David Mark; Rogerson, Samuel; Rose, Andrew; Seez, Christopher; Sharp, Peter; Sparrow, Alex; Tapper, Alexander; Vazquez Acosta, Monica; Virdee, Tejinder; Wakefield, Stuart; Wardle, Nicholas; Cole, Joanne; Hobson, Peter R; Khan, Akram; Kyberd, Paul; Leggat, Duncan; Leslie, Dawn; Martin, William; Reid, Ivan; Symonds, Philip; Teodorescu, Liliana; Turner, Mark; Dittmann, Jay; Hatakeyama, Kenichi; Kasmi, Azeddine; Liu, Hongxuan; Scarborough, Tara; Charaf, Otman; Cooper, Seth; Henderson, Conor; Rumerio, Paolo; Avetisyan, Aram; Bose, Tulika; Fantasia, Cory; Heister, Arno; Lawson, Philip; Lazic, Dragoslav; Rohlf, James; Sperka, David; St John, Jason; Sulak, Lawrence; Alimena, Juliette; Bhattacharya, Saptaparna; Christopher, Grant; Cutts, David; Demiragli, Zeynep; Ferapontov, Alexey; Garabedian, Alex; Heintz, Ulrich; Jabeen, Shabnam; Kukartsev, Gennadiy; Laird, Edward; Landsberg, Greg; Luk, Michael; Narain, Meenakshi; Segala, Michael; Sinthuprasith, Tutanon; Speer, Thomas; Swanson, Joshua; Breedon, Richard; Breto, Guillermo; Calderon De La Barca Sanchez, Manuel; Chauhan, Sushil; Chertok, Maxwell; Conway, John; Conway, Rylan; Cox, Peter Timothy; Erbacher, Robin; Gardner, Michael; Ko, Winston; Kopecky, Alexandra; Lander, Richard; Miceli, Tia; Pellett, Dave; Pilot, Justin; Ricci-Tam, Francesca; Rutherford, Britney; Searle, Matthew; Shalhout, Shalhout; Smith, John; Squires, Michael; Tripathi, Mani; Wilbur, Scott; Yohay, Rachel; Andreev, Valeri; Cline, David; Cousins, Robert; Erhan, Samim; Everaerts, Pieter; Farrell, Chris; Felcini, Marta; Hauser, Jay; Ignatenko, Mikhail; Jarvis, Chad; Rakness, Gregory; Schlein, Peter; Takasugi, Eric; Valuev, Vyacheslav; Weber, Matthias; Babb, John; Clare, Robert; Ellison, John Anthony; Gary, J William; Hanson, Gail; Heilman, Jesse; Jandir, Pawandeep; Lacroix, 
Florent; Liu, Hongliang; Long, Owen Rosser; Luthra, Arun; Malberti, Martina; Nguyen, Harold; Shrinivas, Amithabh; Sturdy, Jared; Sumowidagdo, Suharyo; Wimpenny, Stephen; Andrews, Warren; Branson, James G; Cerati, Giuseppe Benedetto; Cittolin, Sergio; D'Agnolo, Raffaele Tito; Evans, David; Holzner, André; Kelley, Ryan; Kovalskyi, Dmytro; Lebourgeois, Matthew; Letts, James; Macneill, Ian; Padhi, Sanjay; Palmer, Christopher; Pieri, Marco; Sani, Matteo; Sharma, Vivek; Simon, Sean; Sudano, Elizabeth; Tadel, Matevz; Tu, Yanjun; Vartak, Adish; Wasserbaech, Steven; Würthwein, Frank; Yagil, Avraham; Yoo, Jaehyeok; Barge, Derek; Campagnari, Claudio; Danielson, Thomas; Flowers, Kristen; Geffert, Paul; George, Christopher; Golf, Frank; Incandela, Joe; Justus, Christopher; Magaña Villalba, Ricardo; Mccoll, Nickolas; Pavlunin, Viktor; Richman, Jeffrey; Rossin, Roberto; Stuart, David; To, Wing; West, Christopher; Apresyan, Artur; Bornheim, Adolf; Bunn, Julian; Chen, Yi; Di Marco, Emanuele; Duarte, Javier; Kcira, Dorian; Mott, Alexander; Newman, Harvey B; Pena, Cristian; Rogan, Christopher; Spiropulu, Maria; Timciuc, Vladlen; Wilkinson, Richard; Xie, Si; Zhu, Ren-Yuan; Azzolini, Virginia; Calamba, Aristotle; Carroll, Ryan; Ferguson, Thomas; Iiyama, Yutaro; Jang, Dong Wook; Paulini, Manfred; Russ, James; Vogel, Helmut; Vorobiev, Igor; Cumalat, John Perry; Drell, Brian Robert; Ford, William T; Gaz, Alessandro; Luiggi Lopez, Eduardo; Nauenberg, Uriel; Smith, James; Stenson, Kevin; Ulmer, Keith; Wagner, Stephen Robert; Alexander, James; Chatterjee, Avishek; Eggert, Nicholas; Gibbons, Lawrence Kent; Hopkins, Walter; Khukhunaishvili, Aleko; Kreis, Benjamin; Mirman, Nathan; Nicolas Kaufman, Gala; Patterson, Juliet Ritchie; Ryd, Anders; Salvati, Emmanuele; Sun, Werner; Teo, Wee Don; Thom, Julia; Thompson, Joshua; Tucker, Jordan; Weng, Yao; Winstrom, Lucas; Wittich, Peter; Winn, Dave; Abdullin, Salavat; Albrow, Michael; Anderson, Jacob; Apollinari, Giorgio; Bauerdick, Lothar AT; Beretvas, Andrew; Berryhill, Jeffrey; Bhat, Pushpalatha C; Burkett, Kevin; Butler, Joel Nathan; Chetluru, Vasundhara; Cheung, Harry; Chlebana, Frank; Cihangir, Selcuk; Elvira, Victor Daniel; Fisk, Ian; Freeman, Jim; Gao, Yanyan; Gottschalk, Erik; Gray, Lindsey; Green, Dan; Grünendahl, Stefan; Gutsche, Oliver; Hare, Daryl; Harris, Robert M; Hirschauer, James; Hooberman, Benjamin; Jindariani, Sergo; Johnson, Marvin; Joshi, Umesh; Kaadze, Ketino; Klima, Boaz; Kwan, Simon; Linacre, Jacob; Lincoln, Don; Lipton, Ron; Lykken, Joseph; Maeshima, Kaori; Marraffino, John Michael; Martinez Outschoorn, Verena Ingrid; Maruyama, Sho; Mason, David; McBride, Patricia; Mishra, Kalanand; Mrenna, Stephen; Musienko, Yuri; Nahn, Steve; Newman-Holmes, Catherine; O'Dell, Vivian; Prokofyev, Oleg; Ratnikova, Natalia; Sexton-Kennedy, Elizabeth; Sharma, Seema; Spalding, William J; Spiegel, Leonard; Taylor, Lucas; Tkaczyk, Slawek; Tran, Nhan Viet; Uplegger, Lorenzo; Vaandering, Eric Wayne; Vidal, Richard; Whitbeck, Andrew; Whitmore, Juliana; Wu, Weimin; Yang, Fan; Yun, Jae Chul; Acosta, Darin; Avery, Paul; Bourilkov, Dimitri; Cheng, Tongguang; Das, Souvik; De Gruttola, Michele; Di Giovanni, Gian Piero; Dobur, Didar; Field, Richard D; Fisher, Matthew; Fu, Yu; Furic, Ivan-Kresimir; Hugon, Justin; Kim, Bockjoo; Konigsberg, Jacobo; Korytov, Andrey; Kropivnitskaya, Anna; Kypreos, Theodore; Low, Jia Fu; Matchev, Konstantin; Milenovic, Predrag; Mitselmakher, Guenakh; Muniz, Lana; Rinkevicius, Aurelijus; Shchutska, Lesya; Skhirtladze, Nikoloz; Snowball, Matthew; Yelton, John; 
Zakaria, Mohammed; Gaultney, Vanessa; Hewamanage, Samantha; Linn, Stephan; Markowitz, Pete; Martinez, German; Rodriguez, Jorge Luis; Adams, Todd; Askew, Andrew; Bochenek, Joseph; Chen, Jie; Diamond, Brendan; Haas, Jeff; Hagopian, Sharon; Hagopian, Vasken; Johnson, Kurtis F; Prosper, Harrison; Veeraraghavan, Venkatesh; Weinberg, Marc; Baarmand, Marc M; Dorney, Brian; Hohlmann, Marcus; Kalakhety, Himali; Yumiceva, Francisco; Adams, Mark Raymond; Apanasevich, Leonard; Bazterra, Victor Eduardo; Betts, Russell Richard; Bucinskaite, Inga; Cavanaugh, Richard; Evdokimov, Olga; Gauthier, Lucie; Gerber, Cecilia Elena; Hofman, David Jonathan; Khalatyan, Samvel; Kurt, Pelin; Moon, Dong Ho; O'Brien, Christine; Silkworth, Christopher; Turner, Paul; Varelas, Nikos; Akgun, Ugur; Albayrak, Elif Asli; Bilki, Burak; Clarida, Warren; Dilsiz, Kamuran; Duru, Firdevs; Haytmyradov, Maksat; Merlo, Jean-Pierre; Mermerkaya, Hamit; Mestvirishvili, Alexi; Moeller, Anthony; Nachtman, Jane; Ogul, Hasan; Onel, Yasar; Ozok, Ferhat; Sen, Sercan; Tan, Ping; Tiras, Emrah; Wetzel, James; Yetkin, Taylan; Yi, Kai; Barnett, Bruce Arnold; Blumenfeld, Barry; Bolognesi, Sara; Fehling, David; Gritsan, Andrei; Maksimovic, Petar; Martin, Christopher; Swartz, Morris; Baringer, Philip; Bean, Alice; Benelli, Gabriele; Kenny III, Raymond Patrick; Murray, Michael; Noonan, Daniel; Sanders, Stephen; Sekaric, Jadranka; Stringer, Robert; Wang, Quan; Wood, Jeffrey Scott; Barfuss, Anne-Fleur; Chakaberia, Irakli; Ivanov, Andrew; Khalil, Sadia; Makouski, Mikhail; Maravin, Yurii; Saini, Lovedeep Kaur; Shrestha, Shruti; Svintradze, Irakli; Gronberg, Jeffrey; Lange, David; Rebassoo, Finn; Wright, Douglas; Baden, Drew; Calvert, Brian; Eno, Sarah Catherine; Gomez, Jaime; Hadley, Nicholas John; Kellogg, Richard G; Kolberg, Ted; Lu, Ying; Marionneau, Matthieu; Mignerey, Alice; Pedro, Kevin; Skuja, Andris; Temple, Jeffrey; Tonjes, Marguerite; Tonwar, Suresh C; Apyan, Aram; Barbieri, Richard; Bauer, Gerry; Busza, Wit; Cali, Ivan Amos; Chan, Matthew; Di Matteo, Leonardo; Dutta, Valentina; Gomez Ceballos, Guillelmo; Goncharov, Maxim; Gulhan, Doga; Klute, Markus; Lai, Yue Shi; Lee, Yen-Jie; Levin, Andrew; Luckey, Paul David; Ma, Teng; Paus, Christoph; Ralph, Duncan; Roland, Christof; Roland, Gunther; Stephans, George; Stöckli, Fabian; Sumorok, Konstanty; Velicanu, Dragos; Veverka, Jan; Wyslouch, Bolek; Yang, Mingming; Yoon, Sungho; Zanetti, Marco; Zhukova, Victoria; Dahmes, Bryan; De Benedetti, Abraham; Gude, Alexander; Kao, Shih-Chuan; Klapoetke, Kevin; Kubota, Yuichi; Mans, Jeremy; Pastika, Nathaniel; Rusack, Roger; Singovsky, Alexander; Tambe, Norbert; Turkewitz, Jared; Acosta, John Gabriel; Cremaldi, Lucien Marcus; Kroeger, Rob; Oliveros, Sandra; Perera, Lalith; Rahmat, Rahmat; Sanders, David A; Summers, Don; Avdeeva, Ekaterina; Bloom, Kenneth; Bose, Suvadeep; Claes, Daniel R; Dominguez, Aaron; Gonzalez Suarez, Rebeca; Keller, Jason; Knowlton, Dan; Kravchenko, Ilya; Lazo-Flores, Jose; Malik, Sudhir; Meier, Frank; Snow, Gregory R; Dolen, James; Godshalk, Andrew; Iashvili, Ia; Jain, Supriya; Kharchilava, Avto; Kumar, Ashish; Rappoccio, Salvatore; Alverson, George; Barberis, Emanuela; Baumgartel, Darin; Chasco, Matthew; Haley, Joseph; Massironi, Andrea; Nash, David; Orimoto, Toyoko; Trocino, Daniele; Wood, Darien; Zhang, Jinzhong; Anastassov, Anton; Hahn, Kristan Allan; Kubik, Andrew; Lusito, Letizia; Mucia, Nicholas; Odell, Nathaniel; Pollack, Brian; Pozdnyakov, Andrey; Schmitt, Michael Henry; Stoynev, Stoyan; Sung, Kevin; Velasco, Mayda; Won, Steven; 
Berry, Douglas; Brinkerhoff, Andrew; Chan, Kwok Ming; Drozdetskiy, Alexey; Hildreth, Michael; Jessop, Colin; Karmgard, Daniel John; Kellams, Nathan; Kolb, Jeff; Lannon, Kevin; Luo, Wuming; Lynch, Sean; Marinelli, Nancy; Morse, David Michael; Pearson, Tessa; Planer, Michael; Ruchti, Randy; Slaunwhite, Jason; Valls, Nil; Wayne, Mitchell; Wolf, Matthias; Woodard, Anna; Antonelli, Louis; Bylsma, Ben; Durkin, Lloyd Stanley; Flowers, Sean; Hill, Christopher; Hughes, Richard; Kotov, Khristian; Ling, Ta-Yung; Puigh, Darren; Rodenburg, Marissa; Smith, Geoffrey; Vuosalo, Carl; Winer, Brian L; Wolfe, Homer; Wulsin, Howard Wells; Berry, Edmund; Elmer, Peter; Halyo, Valerie; Hebda, Philip; Hegeman, Jeroen; Hunt, Adam; Jindal, Pratima; Koay, Sue Ann; Lujan, Paul; Marlow, Daniel; Medvedeva, Tatiana; Mooney, Michael; Olsen, James; Piroué, Pierre; Quan, Xiaohang; Raval, Amita; Saka, Halil; Stickland, David; Tully, Christopher; Werner, Jeremy Scott; Zenz, Seth Conrad; Zuranski, Andrzej; Brownson, Eric; Lopez, Angel; Mendez, Hector; Ramirez Vargas, Juan Eduardo; Alagoz, Enver; Benedetti, Daniele; Bolla, Gino; Bortoletto, Daniela; De Mattia, Marco; Everett, Adam; Hu, Zhen; Jha, Manoj; Jones, Matthew; Jung, Kurt; Kress, Matthew; Leonardo, Nuno; Lopes Pegna, David; Maroussov, Vassili; Merkel, Petra; Miller, David Harry; Neumeister, Norbert; Radburn-Smith, Benjamin Charles; Shipsey, Ian; Silvers, David; Svyatkovskiy, Alexey; Wang, Fuqiang; Xie, Wei; Xu, Lingshan; Yoo, Hwi Dong; Zablocki, Jakub; Zheng, Yu; Parashar, Neeti; Adair, Antony; Akgun, Bora; Ecklund, Karl Matthew; Geurts, Frank JM; Li, Wei; Michlin, Benjamin; Padley, Brian Paul; Redjimi, Radia; Roberts, Jay; Zabel, James; Betchart, Burton; Bodek, Arie; Covarelli, Roberto; de Barbaro, Pawel; Demina, Regina; Eshaq, Yossof; Ferbel, Thomas; Garcia-Bellido, Aran; Goldenzweig, Pablo; Han, Jiyeon; Harel, Amnon; Miner, Daniel Carl; Petrillo, Gianluca; Vishnevskiy, Dmitry; Zielinski, Marek; Bhatti, Anwar; Ciesielski, Robert; Demortier, Luc; Goulianos, Konstantin; Lungu, Gheorghe; Malik, Sarah; Mesropian, Christina; Arora, Sanjay; Barker, Anthony; Chou, John Paul; Contreras-Campana, Christian; Contreras-Campana, Emmanuel; Duggan, Daniel; Ferencek, Dinko; Gershtein, Yuri; Gray, Richard; Halkiadakis, Eva; Hidas, Dean; Lath, Amitabh; Panwalkar, Shruti; Park, Michael; Patel, Rishi; Rekovic, Vladimir; Robles, Jorge; Salur, Sevil; Schnetzer, Steve; Seitz, Claudia; Somalwar, Sunil; Stone, Robert; Thomas, Scott; Thomassen, Peter; Walker, Matthew; Rose, Keith; Spanier, Stefan; Yang, Zong-Chang; York, Andrew; Bouhali, Othmane; Eusebi, Ricardo; Flanagan, Will; Gilmore, Jason; Kamon, Teruki; Khotilovich, Vadim; Krutelyov, Vyacheslav; Montalvo, Roy; Osipenkov, Ilya; Pakhotin, Yuriy; Perloff, Alexx; Roe, Jeffrey; Safonov, Alexei; Sakuma, Tai; Suarez, Indara; Tatarinov, Aysen; Toback, David; Akchurin, Nural; Cowden, Christopher; Damgov, Jordan; Dragoiu, Cosmin; Dudero, Phillip Russell; Faulkner, James; Kovitanggoon, Kittikul; Kunori, Shuichi; Lee, Sung Won; Libeiro, Terence; Volobouev, Igor; Appelt, Eric; Delannoy, Andrés G; Greene, Senta; Gurrola, Alfredo; Johns, Willard; Maguire, Charles; Mao, Yaxian; Melo, Andrew; Sharma, Monika; Sheldon, Paul; Snook, Benjamin; Tuo, Shengquan; Velkovska, Julia; Arenton, Michael Wayne; Boutle, Sarah; Cox, Bradley; Francis, Brian; Goodell, Joseph; Hirosky, Robert; Ledovskoy, Alexander; Lin, Chuanzhe; Neu, Christopher; Wood, John; Gollapinni, Sowjanya; Harr, Robert; Karchin, Paul Edmund; Kottachchi Kankanamge Don, Chamath; Lamichhane, Pramod; 
Belknap, Donald; Borrello, Laura; Carlsmith, Duncan; Cepeda, Maria; Dasu, Sridhara; Duric, Senka; Friis, Evan; Grothe, Monika; Hall-Wilton, Richard; Herndon, Matthew; Hervé, Alain; Klabbers, Pamela; Klukas, Jeffrey; Lanaro, Armando; Levine, Aaron; Loveless, Richard; Mohapatra, Ajit; Ojalvo, Isabel; Perry, Thomas; Pierro, Giuseppe Antonio; Polese, Giovanni; Ross, Ian; Sakharov, Alexandre; Sarangi, Tapas; Savin, Alexander; Smith, Wesley H; Antchev, G.; Aspell, P.; Atanassov, I.; Avati, V.; Baechler, J.; Berardi, V.; Berretti, M.; Bossini, E.; Bottigli, U.; Bozzo, M.; Brucken, E.; Buzzo, A.; Cafagna, F.S.; Catanesi, M.G.; Covault, C.; Csanad, M.; Csorgo, T.; Deile, M.; Doubek, M.; Eggert, K.; Eremin, V.; Fiergolski, A.; Garcia, F.; Georgiev, V.; Giani, S.; Grzanka, L.; Hammerbauer, J.; Heino, J.; Hilden, T.; Karev, A.; Kaspar, J.; Kopal, J.; Kosinski, J.; Kundrat, V.; Lami, S.; Latino, G.; Lauhakangas, R.; Leszko, T.; Lippmaa, E.; Lippmaa, J.; Lokajicek, M.V.; Losurdo, L.; Lucas Rodriguez, F.; Macri, M.; Maki, T.; Mercadante, A.; Minafra, N.; Minutoli, S.; Nemes, F.; Niewiadomski, H.; Oliveri, E.; Oljemark, F.; Orava, R.; Oriunnof, M.; Osterberg, K.; Palazzi, P.; Peroutka, Z.; Prochazka, J.; Quinto, M.; Radermacher, E.; Radicioni, E.; Ravotti, F.; Ropelewski, L.; Ruggiero, G.; Saarikko, H.; Scribano, A.; Smajek, J.; Snoeys, W.; Sziklai, J.; Taylor, C.; Turini, N.; Vacek, V.; Welti, J.; Whitmoreh, J.; Wyszkowski, P.; Zielinski, K.

    2014-10-29

    Pseudorapidity ($\eta$) distributions of charged particles produced in proton-proton collisions at a centre-of-mass energy of 8 TeV are measured in the ranges $|\eta|$ < 2.2 and 5.3 < $|\eta|$ < 6.4, covered by the CMS and TOTEM detectors, respectively. The data correspond to an integrated luminosity of 45 inverse microbarns. Measurements are presented for three event categories. The most inclusive category is sensitive to 91-96% of the total inelastic proton-proton cross section. The other two categories are disjoint subsets of the inclusive sample that are either enhanced or depleted in single diffractive dissociation events. The data are compared to models used to describe high-energy hadronic interactions. None of the models considered provide a consistent description of the measured distributions.
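
    For reference (not part of the record), pseudorapidity is defined from the polar angle $\theta$ of a particle with respect to the beam axis:

        $\eta = -\ln\left[\tan\left(\theta/2\right)\right]$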

  18. Proceedings of workshop on distributed computing and network

    International Nuclear Information System (INIS)

    Abe, F.; Yuasa, F.

    1993-02-01

    'Distributed Computing and Network' is one of the hot topics in the field of computing. Recent progress in computer technology is providing a new paradigm for computing, even in High Energy Physics. In particular, workstation-based computer systems are opening a new, active field of computer application to the sciences. The major topics discussed in this symposium are distributed computing and wide-area research networks for domestic and international links. The two-day symposium provided enough topics to foresee the next direction of our computing environment. 70 people got together to discuss these interesting themes as well as to exchange information on computer technologies. (J.P.N.)

  19. DISTRIBUTED COMPUTING SUPPORT CONTRACT USER SURVEY

    CERN Multimedia

    2001-01-01

    IT Division operates a Distributed Computing Support Service, which offers support to owners and users of all varieties of desktops throughout CERN, as well as more dedicated services for certain groups, divisions and experiments. It also provides the staff who operate the central and satellite Computing Helpdesks, supports printers throughout the site and carries out the installation activities of the IT Division PC Service. We have published a questionnaire which seeks to gather your feedback on how the services are seen, how they are progressing and how they can be improved. Please take a few minutes to fill in this questionnaire. Replies will be treated in confidence if desired, although you may also request an opportunity to be contacted by CERN's service management directly. Please tell us if you encountered problems, but also if you had a successful conclusion to your request for assistance. You will find the questionnaire at the web site http://wwwinfo/support/survey/desktop-contract There will also be a link ...

  20. DISTRIBUTED COMPUTING SUPPORT SERVICE USER SURVEY

    CERN Multimedia

    2001-01-01

    IT Division operates a Distributed Computing Support Service, which offers support to owners and users of all varieties of desktops throughout CERN, as well as more dedicated services for certain groups, divisions and experiments. It also provides the staff who operate the central and satellite Computing Helpdesks, supports printers throughout the site and carries out the installation activities of the IT Division PC Service. We have published a questionnaire which seeks to gather your feedback on how the services are seen, how they are progressing and how they can be improved. Please take a few minutes to fill in this questionnaire. Replies will be treated in confidence if desired, although you may also request an opportunity to be contacted by CERN's service management directly. Please tell us if you encountered problems, but also if you had a successful conclusion to your request for assistance. You will find the questionnaire at the web site http://wwwinfo/support/survey/desktop-contract There will also be a link...

  1. Distributed computing for FTU data handling

    Energy Technology Data Exchange (ETDEWEB)

    Bertocchi, A. E-mail: bertocchi@frascati.enea.it; Bracco, G.; Buceti, G.; Centioli, C.; Giovannozzi, E.; Iannone, F.; Panella, M.; Vitale, V

    2002-06-01

    The growth of data warehouses in tokamak experiments is leading fusion laboratories to provide new IT solutions for data handling. In the last three years, the Frascati Tokamak Upgrade (FTU) experimental database was migrated from an IBM mainframe to a Unix distributed computing environment. The migration effort has taken into account the following items: (1) a new data storage solution based on a storage area network over fibre channel; (2) the Andrew File System (AFS) for wide-area-network file sharing; (3) a 'one measure/one file' philosophy replacing 'one shot/one file' to provide faster read/write data access; (4) more powerful services, such as AFS, CORBA and MDSplus, to allow users to access the FTU database from different clients, regardless of their operating system; (5) wide availability of data analysis tools, from the locally developed utility SHOW to the multi-platform Matlab, Interactive Data Language and jScope (all these tools are now also able to access the Joint European Torus data, in the framework of the remote data access activity); (6) a batch-computing cluster of Alpha/Compaq Tru64 CPUs based on CODINE/GRD to optimize the utilization of software and hardware resources.
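
    As a rough illustration of the 'one measure/one file' layout mentioned in item (3), the sketch below maps each (shot, signal) pair to its own file; the base directory and naming scheme are hypothetical and not the actual FTU conventions.

        from pathlib import Path

        # Hypothetical base directory for the migrated experimental database.
        BASE = Path("/afs/ftu/data")

        def measure_path(shot: int, signal: str) -> Path:
            """One measurement of one shot lives in its own file
            ('one measure/one file'), instead of a whole shot being
            packed into a single file ('one shot/one file')."""
            return BASE / f"shot_{shot:06d}" / f"{signal}.dat"

        # Readers open only the signal they need, which is what makes
        # read/write access faster in this scheme.
        print(measure_path(12345, "ne_density"))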

  2. Automating usability of ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Tupputi, S A; Girolamo, A Di; Kouba, T; Schovancová, J

    2014-01-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of the storage resources of ATLAS computing sites, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board, with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time granularity and automatic actions to be taken in foreseen cases, such as automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show the SAAB working principles and features. We also present the decrease in human interventions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
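
    The record does not describe the SAAB inference algorithm itself; the toy rule below only illustrates the general idea of deciding a blacklisting action from the recent history of storage-test outcomes.

        from collections import deque

        def should_blacklist(history, window=10, max_failure_rate=0.5):
            """Toy decision rule: look at the most recent `window` storage
            test outcomes (True = test passed) and blacklist the storage
            area when the failure rate exceeds a threshold."""
            recent = deque(history, maxlen=window)
            if not recent:
                return False
            failures = sum(1 for ok in recent if not ok)
            return failures / len(recent) > max_failure_rate

        # A storage area whose recent tests mostly failed gets blacklisted.
        print(should_blacklist([True, False, False, False, True, False]))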

  3. Distributing CMS Data between the Florida T2 and T3 Centers using Lustre and Xrootd-fs

    International Nuclear Information System (INIS)

    Kaganas, G; Rodriguez, J L; Cheng, M; Avery, P; Bourilkov, D; Fu, Y; Palencia, J

    2014-01-01

    We have developed remote data access for large volumes of data over the Wide Area Network, based on the Lustre filesystem and Kerberos authentication for security. In this paper we explore a prototype for two-step data access from worker nodes at Florida Tier3 centers, located behind a firewall and using a private network, to data hosted on the Lustre filesystem at the University of Florida CMS Tier2 center. At the Tier3 center we use a client which securely mounts the Lustre filesystem and hosts an XrootD server. The worker nodes access the data from the Tier3 client using POSIX-compliant tools via the XrootD-fs filesystem. We perform scalability tests with up to 200 jobs running in parallel on the Tier3 worker nodes.
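
    A minimal sketch of what the last step looks like from a worker-node job, assuming a hypothetical xrootd-fs mount point (real paths are site specific): the two-step access chain is invisible to the application, which uses ordinary POSIX I/O.

        import os

        # Hypothetical mount point where xrootd-fs exposes the Tier2 Lustre
        # filesystem to the Tier3 worker nodes.
        MOUNT = "/xrootdfs/cms/store/user/example"

        def read_first_bytes(filename, nbytes=1024):
            """Plain POSIX open/read; the Lustre -> XrootD -> xrootd-fs
            chain behind the mount point is transparent to the job."""
            path = os.path.join(MOUNT, filename)
            with open(path, "rb") as f:
                return f.read(nbytes)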

  4. An Overview of Cloud Computing in Distributed Systems

    Science.gov (United States)

    Divakarla, Usha; Kumari, Geetha

    2010-11-01

    Cloud computing is the emerging trend in the field of distributed computing; it evolved from grid computing and distributed computing. The cloud plays an important role in large organizations by allowing huge amounts of data to be maintained with limited resources. The cloud also helps with resource sharing through specific virtual machines provided by the cloud service provider. This paper gives an overview of cloud organization and some of the basic security issues pertaining to the cloud.

  5. Distributed Memory Parallel Computing with SEAWAT

    Science.gov (United States)

    Verkaik, J.; Huizer, S.; van Engelen, J.; Oude Essink, G.; Ram, R.; Vuik, K.

    2017-12-01

    Fresh groundwater reserves in coastal aquifers are threatened by sea-level rise, extreme weather conditions, increasing urbanization and the associated groundwater extraction rates. To counteract these threats, accurate high-resolution numerical models are required to optimize the management of these precious reserves. The major model drawbacks are long run times and large memory requirements, which limit the predictive power of these models. Distributed-memory parallel computing is an efficient technique for reducing run times and memory requirements, in which the problem is divided over multiple processor cores. A new Parallel Krylov Solver (PKS) for SEAWAT is presented. PKS has recently been applied to MODFLOW and includes Conjugate Gradient (CG) and Biconjugate Gradient Stabilized (BiCGSTAB) linear accelerators. Both accelerators are preconditioned by an overlapping additive Schwarz preconditioner such that: a) subdomains are partitioned using Recursive Coordinate Bisection (RCB) load balancing, b) each subdomain uses local memory only and communicates with other subdomains by the Message Passing Interface (MPI) within the linear accelerator, c) it is fully integrated in SEAWAT. Within SEAWAT, the PKS-CG solver replaces the Preconditioned Conjugate Gradient (PCG) solver for solving the variable-density groundwater flow equation and the PKS-BiCGSTAB solver replaces the Generalized Conjugate Gradient (GCG) solver for solving the advection-diffusion equation. PKS supports the third-order Total Variation Diminishing (TVD) scheme for computing advection. Benchmarks were performed on the Dutch national supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 128 cores, for a synthetic 3D Henry model (100 million cells) and the real-life Sand Engine model (~10 million cells). The Sand Engine model was used to investigate the potential effect of the long-term morphological evolution of a large sand replenishment and climate change on fresh groundwater resources.
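
    As a point of reference for the linear accelerators named above, here is a minimal single-process Conjugate Gradient iteration in Python/NumPy; it is not the PKS code, and the Schwarz preconditioning, RCB partitioning and MPI halo exchanges that PKS adds are deliberately omitted.

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
            """Plain CG for a symmetric positive-definite matrix A.  PKS
            distributes the vectors over subdomains and exchanges data via
            MPI inside each iteration; here everything is on one process."""
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        # Tiny example: solve a 3x3 SPD system.
        A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
        b = np.array([1.0, 2.0, 3.0])
        print(conjugate_gradient(A, b))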

  6. Quantum Internet: from Communication to Distributed Computing!

    OpenAIRE

    Caleffi, Marcello; Cacciapuoti, Angela Sara; Bianchi, Giuseppe

    2018-01-01

    In this invited paper, the authors discuss the exponential computing speed-up achievable by interconnecting quantum computers through a quantum internet. They also identify key future research challenges and open problems for quantum internet design and deployment.

  7. Using ssh as portal - The CMS CRAB over glideinWMS experience

    CERN Document Server

    Belforte, Stefano; Letts, James; Fanzago, Federica; Saiz Santos, Maria Dolores; Martin, Terrence

    2013-01-01

    The user analysis of the CMS experiment is performed in a distributed way, using both Grid and dedicated resources. In order to insulate the users from the details of the computing fabric, CMS relies on the CRAB (CMS Remote Analysis Builder) package as an abstraction layer. CMS has recently switched from a client-server version of CRAB to a purely client-based solution, with ssh being used to interface with the HTCondor-based glideinWMS batch system. This switch has resulted in a significant improvement in user satisfaction, as well as in a significant simplification of the CRAB code base and of the operational support. This paper presents the architecture of the ssh-based CRAB package and the rationale behind it, as well as the operational experience of running both the client-server and the ssh-based versions in parallel for several months.
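
    The CRAB code itself is not reproduced in the record; the sketch below only illustrates the general 'ssh as portal' idea, with a hypothetical submit host running an HTCondor schedd attached to the glideinWMS pool.

        import subprocess

        # Hypothetical submit host; CRAB's actual interface is more elaborate.
        SUBMIT_HOST = "submit.example.org"

        def remote_condor_submit(jdl_path):
            """Ship a job description to the submit node over ssh and run
            condor_submit there, so the user never talks to the batch
            system directly."""
            subprocess.run(["scp", jdl_path, f"{SUBMIT_HOST}:job.jdl"], check=True)
            result = subprocess.run(
                ["ssh", SUBMIT_HOST, "condor_submit", "job.jdl"],
                check=True, capture_output=True, text=True)
            return result.stdout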

  8. Earth observation scientific workflows in a distributed computing environment

    CSIR Research Space (South Africa)

    Van Zyl, TL

    2011-09-01

    ... capabilities has focused on the web services approach, as exemplified by the OGC's Web Processing Service and by GRID computing. The approach to leveraging distributed computing resources described in this paper instead uses remote objects via RPy...
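
    The record is truncated, so the exact remote-objects library is not named; assuming something along the lines of RPyC, a minimal sketch of the remote-objects style (as opposed to a web-service wrapper) might look like this, with the service, method and port all hypothetical.

        import rpyc
        from rpyc.utils.server import ThreadedServer

        class ProcessingService(rpyc.Service):
            """Toy remote-object service: clients call exposed methods on a
            remote worker as if they were local objects."""
            def exposed_ndvi(self, red, nir):
                # Hypothetical earth-observation style computation.
                return (nir - red) / (nir + red)

        if __name__ == "__main__":
            # Start the worker.  A client would then do:
            #   conn = rpyc.connect("worker.example.org", 18861)
            #   conn.root.ndvi(0.2, 0.6)
            ThreadedServer(ProcessingService, port=18861).start()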

  9. Prototyping and Simulating Parallel, Distributed Computations with VISA

    National Research Council Canada - National Science Library

    Demeure, Isabelle M; Nutt, Gary J

    1989-01-01

    ... to support the design, prototyping, and simulation of parallel, distributed computations. In particular, VISA is meant to guide the choice of partitioning and communication strategies for such computations, based on their performance...

  10. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    OpenAIRE

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and s...

  11. Distributed Computations Environment Protection Using Artificial Immune Systems

    Directory of Open Access Journals (Sweden)

    A. V. Moiseev

    2011-12-01

    In this article the authors describe the possibility of applying artificial immune systems to protect distributed computing environments from certain types of malicious impacts.

  12. A Distributed Computational Infrastructure for Science and Education

    Directory of Open Access Journals (Sweden)

    Rustam K. Bazarov

    2014-06-01

    Researchers have lately been paying increasing attention to parallel and distributed algorithms for solving high-dimensional problems. In this regard, the issue of acquiring or renting computational resources becomes a topical one for employees of scientific and educational institutions. This article examines technology and methods for organizing a distributed computational infrastructure. The author addresses the experience of creating a high-performance system powered by existing clusterization and grid computing technology. The approach examined in the article helps minimize financial costs, aggregate territorially distributed computational resources and ensure a more rational use of the available computer equipment, eliminating its downtime.

  13. Modeling Workflow Management in a Distributed Computing System

    African Journals Online (AJOL)

    Dr Obe

    ... communication system, which allows for computerized support. Keywords: Distributed computing system; Petri nets; Workflow management.

  14. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like the central BDII, GOCDB, MyOSG), AGIS defines the relations between the experiment-specific resources in use and the physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computin...
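
    How a client consumes such a central information system is not spelled out in the record; a heavily hedged sketch of the general pattern, with a purely hypothetical endpoint and JSON schema, could look like this.

        import json
        import urllib.request

        # Hypothetical REST endpoint and schema of a central grid information
        # system; the actual AGIS URLs and payload layout are not given here.
        INFO_URL = "https://agis.example.org/api/sites?state=ACTIVE&format=json"

        def list_active_sites():
            """Clients read topology and configuration from one central
            service instead of querying BDII, GOCDB, MyOSG, ... directly."""
            with urllib.request.urlopen(INFO_URL) as resp:
                sites = json.load(resp)
            return [site["name"] for site in sites]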

  15. Distributed computing environments for future space control systems

    Science.gov (United States)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.
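
    The 'virtual computer' idea of hiding the hardware layout from the application can be loosely illustrated with a standard executor abstraction; this is only an analogy in Python, not the CNES prototype described in the record.

        from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

        def simulate(step):
            # Stand-in for an application task; its code never changes when
            # the execution backend does.
            return step * step

        def run(tasks, executor_cls):
            # The application only talks to the executor interface; swapping
            # threads for processes (or, in principle, remote machines) needs
            # no change to the task code -- loosely the 'virtual computer' idea.
            with executor_cls() as pool:
                return list(pool.map(simulate, tasks))

        if __name__ == "__main__":
            print(run(range(5), ThreadPoolExecutor))
            print(run(range(5), ProcessPoolExecutor))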

  16. Predicting dataset popularity for the CMS experiment

    CERN Document Server

    Kuznetsov, V.; Li, Ting; Giommi, Luca; Bonacorsi, Daniele; Wildish, Tony

    2016-01-01

    The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provides the foundation of a data-driven approach for the CMS computing infrastructure.

  17. Predicting dataset popularity for the CMS experiment

    International Nuclear Information System (INIS)

    Kuznetsov, V.; Li, T.; Giommi, L.; Bonacorsi, D.; Wildish, T.

    2016-01-01

    The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provides the foundation of a data-driven approach for the CMS computing infrastructure. (paper)
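
    A toy sketch of the kind of popularity bookkeeping such a study builds on; the dataset names and numbers below are invented for illustration, while the real analysis uses CMS monitoring meta-data.

        from collections import Counter
        from datetime import date

        # Toy access records (dataset name, access date, number of reads).
        accesses = [
            ("/ZeroBias/Run2012A/AOD", date(2016, 1, 3), 120),
            ("/DYJets/Summer12/AODSIM", date(2016, 1, 4), 300),
            ("/ZeroBias/Run2012A/AOD", date(2016, 1, 5), 80),
        ]

        def popularity(records):
            """Aggregate reads per dataset; a placement policy could then
            replicate the top-ranked datasets and retire the least-used."""
            counts = Counter()
            for dataset, _, reads in records:
                counts[dataset] += reads
            return counts.most_common()

        print(popularity(accesses))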

  18. Distributed storage and cloud computing: a test case

    International Nuclear Information System (INIS)

    Piano, S; Della Ricca, G

    2014-01-01

    Since 2003 the computing farm hosted by the INFN Tier3 facility in Trieste supports the activities of many scientific communities. Hundreds of jobs from 45 different VOs, including those of the LHC experiments, are processed simultaneously. Given that normally the requirements of the different computational communities are not synchronized, the probability that at any given time the resources owned by one of the participants are not fully utilized is quite high. A balanced compensation should in principle allocate the free resources to other users, but there are limits to this mechanism. In fact, the Trieste site may not hold the amount of data needed to attract enough analysis jobs, and even in that case there could be a lack of bandwidth for their access. The Trieste ALICE and CMS computing groups, in collaboration with other Italian groups, aim to overcome the limitations of existing solutions using two approaches: sharing the data among all the participants taking full advantage of GARR-X wide area networks (10 Gb/s) and integrating the resources dedicated to batch analysis with the ones reserved for dynamic interactive analysis, through modern solutions such as cloud computing.

  19. Modeling Workflow Management in a Distributed Computing System ...

    African Journals Online (AJOL)

    Distributed computing is becoming increasingly important in our daily life. This is because it enables the people who use it to share information more rapidly and increases their productivity. A major characteristic feature of distributed computing is the explicit representation of process logic within a communication system, ...

  20. Programming Languages for Distributed Computing Systems

    NARCIS (Netherlands)

    Bal, H.E.; Steiner, J.G.; Tanenbaum, A.S.

    1989-01-01

    When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less

  1. Building mail server on distributed computing system

    International Nuclear Information System (INIS)

    Akihiro Shibata; Osamu Hamada; Tomoko Oshikubo; Takashi Sasaki

    2001-01-01

    Electronic mail has become an indispensable function in daily work, and server stability and performance are required. Using DCE and DFS we have built a distributed electronic mail server; that is, servers such as SMTP and IMAP are distributed symmetrically and provide seamless access

  2. CMS brochure (English version)

    CERN Document Server

    Marcastel, Fabienne

    2014-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which started up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet.

  3. CMS Program Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Office of Enterprise Data and Analytics has developed CMS Program Statistics, which includes detailed summary statistics on national health care, Medicare...

  4. CMS Drug Spending

    Data.gov (United States)

    U.S. Department of Health & Human Services — CMS has released several information products that provide spending information for prescription drugs in the Medicare and Medicaid programs. The CMS Drug Spending...

  5. CMS Brochure (german version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet.

  6. CMS brochure (Spanish version)

    CERN Multimedia

    Lefevre, C

    2008-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet.

  7. CMS Records Schedule

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Records Schedule provides disposition authorizations approved by the National Archives and Records Administration (NARA) for CMS program-related records...

  8. LHCb Distributed Data Analysis on the Computing Grid

    CERN Document Server

    Paterson, S; Parkes, C

    2006-01-01

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

  9. The Principles and Practice of Distributed High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker Miron Livny received a B.Sc. degree in Physics and Mat...

  10. CMS-Wave

    Science.gov (United States)

    2015-10-30

    Coastal Inlets Research Program. CMS-Wave is a two-dimensional spectral wind-wave generation and transformation model that employs a forward-marching, finite-difference method to solve the wave action conservation equation. Capabilities of CMS-Wave include wave shoaling, refraction ... CMS-Wave can be used in either a half- or full-plane mode, with primary waves propagating from the seaward boundary toward shore. It can

  11. From parallel to distributed computing for reactive scattering calculations

    International Nuclear Information System (INIS)

    Lagana, A.; Gervasi, O.; Baraglia, R.

    1994-01-01

    Some reactive scattering codes have been ported to different innovative computer architectures ranging from massively parallel machines to clustered workstations. The porting has required a drastic restructuring of the codes to single out computationally decoupled CPU-intensive subsections. The suitability of different theoretical approaches for parallel and distributed computing restructuring is discussed and the efficiency of the related algorithms is evaluated

  12. A Weibull distribution accrual failure detector for cloud computing.

    Science.gov (United States)

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component used to build high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
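
    The accrual idea can be illustrated in a few lines of code: model heartbeat inter-arrival times with a Weibull distribution and report a suspicion level that grows with the time elapsed since the last heartbeat. The sketch below is a generic illustration under simplifying assumptions (fixed shape parameter, scale estimated from the sample mean), not the estimator proposed in the paper.

      # Minimal sketch of an accrual failure detector with a Weibull model of
      # heartbeat inter-arrival times. Illustration of the general idea only,
      # not the detector proposed in the paper.
      import math

      class WeibullAccrualDetector:
          def __init__(self, shape=1.5, window=100):
              self.k = shape            # assumed fixed shape parameter (simplification)
              self.window = window
              self.intervals = []       # recent inter-arrival times
              self.last_arrival = None

          def heartbeat(self, t):
              if self.last_arrival is not None:
                  self.intervals.append(t - self.last_arrival)
                  self.intervals = self.intervals[-self.window:]
              self.last_arrival = t

          def phi(self, now):
              """Suspicion level: -log10 of the probability that the next
              heartbeat is still to come, given the elapsed time."""
              if not self.intervals or self.last_arrival is None:
                  return 0.0
              mean = sum(self.intervals) / len(self.intervals)
              lam = mean / math.gamma(1.0 + 1.0 / self.k)   # scale from the sample mean
              elapsed = now - self.last_arrival
              survival = math.exp(-((elapsed / lam) ** self.k))  # Weibull survival function
              return -math.log10(max(survival, 1e-12))

      # Usage: report the monitored process as suspected once phi exceeds a threshold.
      det = WeibullAccrualDetector()
      for t in [0.0, 1.0, 2.1, 3.0, 4.2]:
          det.heartbeat(t)
      print(det.phi(now=4.5), det.phi(now=9.0))   # small vs. large suspicion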

  13. CMS Central Hadron Calorimeter

    OpenAIRE

    Budd, Howard S.

    2001-01-01

    We present a description of the CMS central hadron calorimeter. We describe the production of the 1996 CMS hadron testbeam module. We show the results of the quality control tests of the testbeam module. We present some results of the 1995 CMS hadron testbeam.

  14. CMS Comic Book Brochure

    CERN Document Server

    2006-01-01

    Titled "CMS Particle Hunter," this colorful comic-book-style brochure aims to raise students' awareness of what the CMS detector is, how it was constructed and what it hopes to find. It explains to young budding scientists and science enthusiasts how the CMS detector was made, its main parts, and what scientists hope to find using this complex tool.

  15. Distributed computing environment monitoring and user expectations

    International Nuclear Information System (INIS)

    Cottrell, R.L.A.; Logg, C.A.

    1996-01-01

    This paper discusses the growing needs for distributed system monitoring and compares it to current practices. It then goes on to identify the components of distributed system monitoring and shows how they are implemented and successfully used at one site today to address the Local Area Network (LAN), the Wide Area Network (WAN), and host monitoring. It shows how this monitoring can be used to develop realistic service level expectations and also identifies the costs. Finally, the paper briefly discusses the future challenges in network monitoring. (author)

  16. Rivet usage at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Radziej, Markus; Hebbeker, Thomas; Sonnenschein, Lars [III. Phys. Inst. A, RWTH Aachen (Germany)

    2015-07-01

    In this talk an overview of Rivet and its usage at the CMS experiment is presented. Rivet stands for "Robust Independent Validation of Experiment and Theory" and is used for optimizing and validating Monte Carlo event generators. By using the results of published analyses, distributions from the simulation can be compared to experimental measurements (corrected for detector effects). This gives insight into the agreement at the particle level. Starting off with an introduction to the Rivet environment, the purpose of this tool in modern particle physics is explained. Before taking a closer look at the analysis structure, the software necessary to obtain comparisons is outlined. Analysis implementations are discussed using code examples, showcasing the powerful framework that Rivet provides. A few selected final distributions displaying both Monte Carlo generated events and recorded data are presented, showing the potential to perform particle-level comparisons.

  17. CMS tracker observes muons

    CERN Multimedia

    2006-01-01

    A computer image of a cosmic ray traversing the many layers of the TEC+ silicon sensors. The first cosmic muon tracks have been observed in one of the CMS tracker endcaps. On 14 March, a sector on one of the two large tracker endcaps underwent a cosmic muon run. Since then, thousands of tracks have been recorded. These data will be used not only to study the tracking, but also to exercise various track alignment algorithms The endcap tested, called the TEC+, is under construction at RWTH Aachen in Germany. The endcaps have a modular design, with silicon strip modules mounted onto wedge-shaped carbon fibre support plates, so-called petals. Up to 28 modules are arranged in radial rings on both sides of these plates. One eighth of an endcap is populated with 18 petals and called a sector. The next major step is a test of the first sector at CMS operating conditions, with the silicon modules at a temperature below -10°C. Afterwards, the remaining seven sectors have to be integrated. In autumn 2006, TEC+ wil...

  18. Computer program for source distribution process in radiation facility

    International Nuclear Information System (INIS)

    Al-Kassiri, H.; Abdul Ghani, B.

    2007-08-01

    A computer simulation of dose distribution has been carried out in Visual Basic according to the arrangement and activities of the Co-60 sources. This program provides the dose distribution in treated products depending on the product density and desired dose. The program is useful for optimizing the source distribution during the loading process. There is good agreement between the data calculated by the program and experimental data. (Author)

  19. A lightweight communication library for distributed computing

    NARCIS (Netherlands)

    Groen, D.; Rieder, S.; Grosso, P.; de Laat, C.; Portegies Zwart, S.

    2010-01-01

    We present MPWide, a platform-independent communication library for performing message passing between computers. Our library allows coupling of several local message passing interface (MPI) applications through a long-distance network and is specifically optimized for such communications. The

  20. AGIS: Evolution of Distributed Computing Information system for ATLAS

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria; Karavakis, Edward

    2015-01-01

    The variety of the ATLAS Computing Infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by the various ATLAS software components. The ATLAS Grid Information System is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  1. Distributed computing environment monitoring and user expectations

    International Nuclear Information System (INIS)

    Cottrell, R.L.A.; Logg, C.A.

    1995-11-01

    This paper discusses the growing needs for distributed system monitoring and compares it to current practices. It then goes on to identify the components of distributed system monitoring and shows how they are implemented and successfully used at one site today to address the Local Area Network (LAN), network services and applications, the Wide Area Network (WAN), and host monitoring. It shows how this monitoring can be used to develop realistic service level expectations and also identifies the costs. Finally, the paper briefly discusses the future challenges in network monitoring

  2. Computation of the efficiency distribution of a multichannel focusing collimator

    International Nuclear Information System (INIS)

    Balasubramanian, A.; Venkateswaran, T.V.

    1977-01-01

    This article describes two computer methods of calculating the point source efficiency distribution functions of a focusing collimator with round tapered holes. The first method, which computes only the geometric efficiency distribution, is adequate for low-energy collimators, while the second method, which computes both geometric and penetration efficiencies, can be used for medium- and high-energy collimators. The scatter contribution to the efficiency is not taken into account. In the first method the efficiency distribution of a single cone of the collimator is obtained and the data are used for computing the distribution of the whole collimator. For high-energy collimators the entire detector region is imagined to be divided into elemental areas. The efficiency of an elemental area is computed after suitably weighting for the penetration within the collimator septa, which is determined by three-dimensional geometric techniques. The method of computing the line source efficiency distribution from the point source distribution is also explained. The formulations have been tested by computing the efficiency distribution of several commercial collimators and collimators fabricated by us. (Auth.)
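
    The elemental-area idea can be illustrated with a short numerical sketch: sum, over small elements of the detector face, the solid angle subtended at the point source, keeping only rays that pass through the hole aperture. The straight cylindrical hole below is a simplifying assumption for illustration; the article treats tapered holes and additionally weights each element for septal penetration.

      # Simplified sketch: geometric point-source efficiency of a single straight
      # cylindrical collimator hole, approximated as a sum over elemental areas of
      # the exit face. Illustrative geometry only; septal penetration is ignored.
      import math

      def geometric_efficiency(src, radius, length, n=200):
          """src = (x, y, z) of the point source with z <= 0; the hole entrance is
          the disc z = 0 and the exit (detector side) is the disc z = length,
          both of the given radius. Returns the fraction of isotropically emitted
          photons that pass through both apertures."""
          eff, step = 0.0, 2.0 * radius / n
          for i in range(n):
              for j in range(n):
                  x = -radius + (i + 0.5) * step        # centre of the elemental area
                  y = -radius + (j + 0.5) * step
                  if x * x + y * y > radius * radius:
                      continue                          # outside the exit disc
                  dx, dy, dz = x - src[0], y - src[1], length - src[2]
                  t = (0.0 - src[2]) / dz               # ray crosses the entrance plane z = 0
                  ex, ey = src[0] + t * dx, src[1] + t * dy
                  if ex * ex + ey * ey > radius * radius:
                      continue                          # blocked by the septum at the entrance
                  r2 = dx * dx + dy * dy + dz * dz
                  cos_theta = dz / math.sqrt(r2)
                  eff += step * step * cos_theta / (4.0 * math.pi * r2)  # elemental solid angle / 4*pi
          return eff

      # Example: on-axis source 5 cm below a 3 cm long hole of 0.2 cm radius
      print(geometric_efficiency((0.0, 0.0, -5.0), 0.2, 3.0))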

  3. Distributed metadata in a high performance computing environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
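
    The claim describes determining which burst buffer holds the metadata for a given key; the placement function is not disclosed here, so the sketch below uses a simple hash-based mapping of keys to burst buffers purely as an illustration of the lookup flow, not the patented mechanism.

      # Illustrative sketch of a key-to-burst-buffer lookup for a distributed
      # key-value metadata store. Hash-based placement is an assumption for
      # illustration, not the mechanism described in the patent.
      import hashlib

      BURST_BUFFERS = ["bb-node-0", "bb-node-1", "bb-node-2"]   # hypothetical node names

      def owning_buffer(key: str) -> str:
          """Map a metadata key deterministically onto one burst buffer."""
          digest = hashlib.sha1(key.encode()).hexdigest()
          return BURST_BUFFERS[int(digest, 16) % len(BURST_BUFFERS)]

      # A local dict per burst buffer stands in for the distributed key-value store.
      stores = {name: {} for name in BURST_BUFFERS}

      def put_metadata(key, value):
          stores[owning_buffer(key)][key] = value

      def get_metadata(key):
          # 1) determine which burst buffer stores the requested metadata,
          # 2) locate the key-value in the portion of the store on that buffer.
          return stores[owning_buffer(key)].get(key)

      put_metadata("block:/scratch/run42/file.dat:0", {"offset": 0, "length": 1 << 20})
      print(get_metadata("block:/scratch/run42/file.dat:0"))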

  4. Deployment of the CMS software on the WLCG Grid

    International Nuclear Information System (INIS)

    Behrenhoff, W; Wissing, C; Kim, B; Blyweert, S; D'Hondt, J; Maes, J; Maes, M; Mulders, P Van; Villella, I; Vanelderen, L

    2011-01-01

    The CMS Experiment is taking high energy collision data at CERN. The computing infrastructure used to analyse the data is distributed round the world in a tiered structure. In order to use the 7 Tier-1 sites, the 50 Tier-2 sites and a still growing number of about 30 Tier-3 sites, the CMS software has to be available at those sites. Except for a very few sites the deployment and the removal of CMS software is managed centrally. Since the deployment team has no local accounts at the remote sites all installation jobs have to be sent via Grid jobs. Via a VOMS role the job has a high priority in the batch system and gains write privileges to the software area. Due to the lack of interactive access the installation jobs must be very robust against possible failures, in order not to leave a broken software installation. The CMS software is packaged in RPMs that are installed in the software area independent of the host OS. The apt-get tool is used to resolve package dependencies. This paper reports about the recent deployment experiences and the achieved performance.

  5. A lightweight communication library for distributed computing

    International Nuclear Information System (INIS)

    Groen, Derek; Rieder, Steven; Zwart, Simon Portegies; Grosso, Paola; Laat, Cees de

    2010-01-01

    We present MPWide, a platform-independent communication library for performing message passing between computers. Our library allows coupling of several local message passing interface (MPI) applications through a long-distance network and is specifically optimized for such communications. The implementation is deliberately kept lightweight and platform independent, and the library can be installed and used without administrative privileges. The only requirements are a C++ compiler and at least one open port to a wide-area network on each site. In this paper we present the library, describe the user interface, present performance tests and apply MPWide in a large-scale cosmological N-body simulation on a network of two computers, one in Amsterdam and the other in Tokyo.

  6. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    Science.gov (United States)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific resources and physical distributed computing capabilities. Being in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities, related to the integration of new technologies that have recently become widely used in ATLAS Computing, like flexible computing utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and unified storage protocols declaration required for the PanDA Pilot site movers, among others. The improvements of the information model and general updates are also shown; in particular, we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.
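
    As a rough illustration of how a client might consume such a computing-resources catalogue, the sketch below fetches queue descriptions from a JSON endpoint and groups them by site for one cloud. The URL and the field names are hypothetical placeholders, not the actual AGIS API or schema.

      # Illustrative sketch of a client consuming a computing-resources information
      # catalogue over HTTP. The endpoint and JSON schema are hypothetical
      # placeholders, not the real AGIS interface.
      import json
      from urllib.request import urlopen

      INFO_SYSTEM_ENDPOINT = "https://info-system.example.org/request/queues/?json"  # hypothetical

      def load_queues(url=INFO_SYSTEM_ENDPOINT):
          with urlopen(url) as resp:
              return json.load(resp)          # expected: list of queue description dicts

      def queues_for_cloud(queues, cloud):
          """Group queue names by site for one cloud, assuming 'cloud', 'site'
          and 'name' keys exist in each record (an assumption)."""
          by_site = {}
          for q in queues:
              if q.get("cloud") == cloud:
                  by_site.setdefault(q.get("site"), []).append(q.get("name"))
          return by_site

      # Example usage (requires the hypothetical endpoint to exist):
      # print(queues_for_cloud(load_queues(), "DE"))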

  7. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria

    2016-01-01

    AGIS is the information system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. In this note, we describe the evolution and the recent developments of AGIS functionalities, related to the integration of new technologies that have recently become widely used in ATLAS Computing, like flexible computing utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, unified storage protocols declaration required for the PanDA Pilot site movers, and others.

  8. Predictive access control for distributed computation

    DEFF Research Database (Denmark)

    Yang, Fan; Hankin, Chris; Nielson, Flemming

    2013-01-01

    We show how to use aspect-oriented programming to separate security and trust issues from the logical design of mobile, distributed systems. The main challenge is how to enforce various types of security policies, in particular predictive access control policies, that is, policies based on the future behavior of a program. A novel feature of our approach is that we can define policies concerning secondary use of data.

  9. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network and the widespread use of software for design and pre-production in mechanical engineering have led to the fact that, at present, large industrial enterprises and small engineering companies implement complex computer systems for efficient solutions of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficient distribution (balancing) of the computational load and accommodation of input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node for the transfer of the user's request in accordance with a predetermined algorithm. Load balancing is one of the most used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system, dynamically changing its infrastructure, is an important task.
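
    The node-selection step described above can be illustrated with a minimal sketch: track the load of each compute node and dispatch every incoming request to the node with the lowest load relative to its capacity. The node model and the weighting are assumptions for illustration, not the algorithm studied in the paper.

      # Minimal sketch of least-loaded dispatching in a heterogeneous cluster.
      # Node capacities and the load metric are illustrative assumptions.
      from dataclasses import dataclass

      @dataclass
      class Node:
          name: str
          capacity: float            # relative compute capacity
          running: int = 0           # tasks currently assigned

          @property
          def relative_load(self) -> float:
              return self.running / self.capacity

      def dispatch(nodes, task_id):
          """Pick the node with the lowest relative load and assign the task."""
          target = min(nodes, key=lambda n: n.relative_load)
          target.running += 1
          return task_id, target.name

      nodes = [Node("n1", capacity=1.0), Node("n2", capacity=2.0), Node("n3", capacity=1.5)]
      for t in range(9):
          print(dispatch(nodes, t))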

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  11. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
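
    A minimal sketch of the hybrid idea (central fan-out of telemetry to subscribed clients, plus direct forwarding between peers) is shown below; it is a conceptual illustration only, not the information sharing protocol described in the paper, and the class and parameter names are invented.

      # Toy illustration of a hybrid information-sharing scheme: a central hub
      # (client-server part) fans processed telemetry out to subscribers, while
      # subscribers may also forward data to registered peers (peer-to-peer part).
      # Conceptual sketch only, not the protocol described in the paper.
      from collections import defaultdict

      class TelemetryHub:
          def __init__(self):
              self.subscribers = defaultdict(list)   # parameter name -> callbacks

          def subscribe(self, parameter, callback):
              self.subscribers[parameter].append(callback)

          def publish(self, parameter, value):
              for cb in self.subscribers[parameter]:
                  cb(parameter, value)

      class FlightControllerClient:
          def __init__(self, name):
              self.name = name
              self.peers = []                        # direct peer-to-peer links

          def add_peer(self, peer):
              self.peers.append(peer)

          def receive(self, parameter, value):
              print(f"{self.name}: {parameter} = {value}")
              for peer in self.peers:                # forward data to peers
                  peer.receive(parameter, value)

      hub = TelemetryHub()
      prime, backup = FlightControllerClient("PRIME"), FlightControllerClient("BACKUP")
      prime.add_peer(backup)                          # peer-to-peer forwarding
      hub.subscribe("cabin_pressure", prime.receive)  # client-server subscription
      hub.publish("cabin_pressure", 101.3)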

  12. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  13. CMS experience of running glideinWMS in High Availability mode

    CERN Document Server

    Sfiligoi, Igor; Belforte, Stefano; Mc Crea, Alison Jean; Larson, Krista Elaine; Zvada, Marian; Holzman, Burt; P Mhashilkar; Bradley, Daniel Charles; Saiz Santos, Maria Dolores; Fanzago, Federica; Gutsche, Oliver; Martin, Terrence; Wuerthwein, Frank Karl

    2013-01-01

    The CMS experiment at the Large Hadron Collider is relying on the HTCondor-based glideinWMS batch system to handle most of its distributed computing needs. In order to minimize the risk of disruptions due to software and hardware problems, and also to simplify the maintenance procedures, CMS has set up its glideinWMS instance to use most of the attainable High Availability (HA) features. The setup involves running services distributed over multiple nodes, which in turn are located in several physical locations, including Geneva, Switzerland, Chicago, Illinois and San Diego, California. This paper describes the setup used by CMS, the HA limits of this setup, as well as a description of the actual operational experience spanning many months.

  14. 7th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Jung, Jason; Badica, Costin

    2014-01-01

    This book represents the combined peer-reviewed proceedings of the Seventh International Symposium on Intelligent Distributed Computing - IDC-2013, of the Second Workshop on Agents for Clouds - A4C-2013, of the Fifth International Workshop on Multi-Agent Systems Technology and Semantics - MASTS-2013, and of the International Workshop on Intelligent Robots - iR-2013. All the events were held in Prague, Czech Republic during September 4-6, 2013. The 41 contributions published in this book address many topics related to theory and applications of intelligent distributed computing and multi-agent systems, including: agent-based data processing, ambient intelligence, bio-informatics, collaborative systems, cryptography and security, distributed algorithms, grid and cloud computing, information extraction, intelligent robotics, knowledge management, linked data, mobile agents, ontologies, pervasive computing, self-organizing systems, peer-to-peer computing, social networks and trust, and swarm intelligence.

  15. Clock distribution system for digital computers

    International Nuclear Information System (INIS)

    Loomis, H.H.; Wyman, R.H.

    1981-01-01

    An apparatus is disclosed for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse "overtaking" a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal whose fundamental frequency has a fundamental frequency component v'01(T); an array of N signal characteristic detector means, with detector means no. 1 receiving the timing means signal and producing a change-of-state signal v1(T) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal vn(T) and producing a modified change-of-state signal v'n(T) (n = 1, . . . , N) having a fundamental frequency component that is substantially proportional to v'01(T - theta_n(T)), with a cumulative phase shift theta_n(T) whose time derivative may be made uniformly and arbitrarily small; and with detector means no. n+1 (1 <= n < N) receiving the modified change-of-state signal v'n(T) from filter means no. n and, in response to receipt of such a signal above a predetermined threshold, producing a change-of-state signal vn+1(T)

  16. Actors: A Model of Concurrent Computation in Distributed Systems.

    Science.gov (United States)

    1985-06-01

    Actors: A Model of Concurrent Computation in Distributed Systems, by Gul A. Agha, Artificial Intelligence Laboratory of the Massachusetts Institute of Technology, Cambridge (report AD-A157 917). Support for the laboratory's artificial intelligence research is ... This document has been approved for public release and sale; its ...

  17. Temperature control of CMS Barrel ECAL (EB) : computational thermo-hydraulic model for dynamic behaviour, control aspects

    CERN Document Server

    Wertelaers, P

    2010-01-01

    The current design foresees a central heat exchanger followed by a controlled post heater, for all ECAL. We discuss the scheme and try to assess its performance, from a Barrel viewpoint. This is based on computational work. The coolant transfer pipes play an essential role in building a dynamical model. After some studies on the behaviour of the cooling circuit itself, a strong yet simple controller is proposed. Then, the system with feedback control is scrutinized, with emphasis on disturbance rejection. The most relevant disturbances are cooling ripple, pipe heat attack, and electronics’ switching.
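
    As a generic illustration of the feedback scheme discussed (a controlled post-heater rejecting a step heat disturbance), the sketch below simulates a first-order thermal model under a discrete PI controller. The plant parameters, gains and disturbance size are invented for illustration and are not values from the note.

      # Generic illustration of feedback temperature control with a PI controller
      # on a first-order thermal model. Plant parameters, gains and the disturbance
      # are illustrative assumptions, not values from the CMS ECAL note.
      def simulate(setpoint=18.0, dt=1.0, steps=600):
          tau, gain = 120.0, 0.05        # plant time constant [s], K per unit of heater power
          kp, ki = 40.0, 0.5             # PI gains (assumed)
          temp, integral = 17.0, 0.0
          history = []
          for k in range(steps):
              disturbance = 0.3 if k > 300 else 0.0         # step heat attack on the pipes [K]
              error = setpoint - temp
              integral += error * dt
              power = max(0.0, kp * error + ki * integral)  # post-heater can only heat
              # first-order plant: relax toward ambient + heater effect + disturbance
              target = 17.0 + gain * power + disturbance
              temp += dt / tau * (target - temp)
              history.append(temp)
          return history

      trace = simulate()
      print(f"final temperature: {trace[-1]:.2f} C (setpoint 18.00 C)")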

  18. 9th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Camacho, David; Analide, Cesar; Seghrouchni, Amal; Badica, Costin

    2016-01-01

    This book represents the combined peer-reviewed proceedings of the ninth International Symposium on Intelligent Distributed Computing – IDC’2015, of the Workshop on Cyber Security and Resilience of Large-Scale Systems – WSRL’2015, and of the International Workshop on Future Internet and Smart Networks – FI&SN’2015. All the events were held in Guimarães, Portugal during October 7th-9th, 2015. The 46 contributions published in this book address many topics related to theory and applications of intelligent distributed computing, including: Intelligent Distributed Agent-Based Systems, Ambient Intelligence and Social Networks, Computational Sustainability, Intelligent Distributed Knowledge Representation and Processing, Smart Networks, Networked Intelligence and Intelligent Distributed Applications, amongst others.

  19. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  20. The CMS Muon System Alignment

    CERN Document Server

    Martinez Ruiz-Del-Arbol, P

    2009-01-01

    The alignment of the muon system of CMS is performed using different techniques: photogrammetry measurements, optical alignment and alignment with tracks. For track-based alignment, several methods are employed, ranging from a hit and impact point (HIP) algorithm and a procedure exploiting chamber overlaps to a global fit method based on the Millepede approach. For start-up alignment as long as available integrated luminosity is still significantly limiting the size of the muon sample from collisions, cosmic muon and beam halo signatures play a very strong role. During the last commissioning runs in 2008 the first aligned geometries have been produced and validated with data. The CMS offline computing infrastructure has been used in order to perform improved reconstructions. We present the computational aspects related to the calculation of alignment constants at the CERN Analysis Facility (CAF), the production and population of databases and the validation and performance in the official reconstruction. Also...

  1. CMS software deployment on OSG

    International Nuclear Information System (INIS)

    Kim, B; Avery, P; Thomas, M; Wuerthwein, F

    2008-01-01

    A set of software deployment tools has been developed for the installation, verification, and removal of a CMS software release. The tools, which are mainly targeted at deployment on the OSG, have the features of instant release deployment, corrective resubmission of the initial installation job, and an independent web-based deployment portal with a Grid Security Infrastructure login mechanism. We have performed over 500 installations and found the tools to be reliable and adaptable in coping with changes in the Grid computing environment and the software releases. We present the design of the tools, statistics that we gathered during the operation of the tools, and our experience with the CMS software deployment on the OSG Grid computing environment

  2. CMS software deployment on OSG

    Energy Technology Data Exchange (ETDEWEB)

    Kim, B; Avery, P [University of Florida, Gainesville, FL 32611 (United States); Thomas, M [California Institute of Technology, Pasadena, CA 91125 (United States); Wuerthwein, F [University of California at San Diego, La Jolla, CA 92093 (United States)], E-mail: bockjoo@phys.ufl.edu, E-mail: thomas@hep.caltech.edu, E-mail: avery@phys.ufl.edu, E-mail: fkw@fnal.gov

    2008-07-15

    A set of software deployment tools has been developed for the installation, verification, and removal of a CMS software release. The tools, which are mainly targeted at deployment on the OSG, have the features of instant release deployment, corrective resubmission of the initial installation job, and an independent web-based deployment portal with a Grid Security Infrastructure login mechanism. We have performed over 500 installations and found the tools to be reliable and adaptable in coping with changes in the Grid computing environment and the software releases. We present the design of the tools, statistics that we gathered during the operation of the tools, and our experience with the CMS software deployment on the OSG Grid computing environment.

  3. Developing a Distributed Computing Architecture at Arizona State University.

    Science.gov (United States)

    Armann, Neil; And Others

    1994-01-01

    Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…

  4. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of the millions of computers on the Internet and use them to run large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational chunks. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
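
    The queue-management step mentioned above (a relational database tracking small work units handed to volunteer clients) can be sketched with SQLite: pending units are leased to clients and the returned results recorded. The table layout, lease policy and result format are assumptions, not the platform's actual schema.

      # Minimal sketch of queue management for distributed volunteer computing:
      # small work units are leased to volunteer clients and results are stored.
      # Schema and lease policy are illustrative assumptions only.
      import sqlite3, time

      db = sqlite3.connect(":memory:")
      db.execute("""CREATE TABLE work_units (
          id INTEGER PRIMARY KEY, payload TEXT, status TEXT DEFAULT 'pending',
          leased_at REAL, result TEXT)""")
      db.executemany("INSERT INTO work_units (payload) VALUES (?)",
                     [(f"subcatchment-{i}",) for i in range(5)])

      def lease_unit():
          """Hand the next pending unit to a volunteer client."""
          row = db.execute("SELECT id, payload FROM work_units "
                           "WHERE status='pending' ORDER BY id LIMIT 1").fetchone()
          if row:
              db.execute("UPDATE work_units SET status='leased', leased_at=? WHERE id=?",
                         (time.time(), row[0]))
          return row

      def complete_unit(unit_id, result):
          db.execute("UPDATE work_units SET status='done', result=? WHERE id=?",
                     (result, unit_id))

      unit = lease_unit()
      complete_unit(unit[0], "runoff=12.3mm")      # result reported by the volunteer node
      print(db.execute("SELECT id, status, result FROM work_units").fetchall())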

  5. The Architecture of the CMS Level-1 Trigger Control and Monitoring System

    CERN Document Server

    Magrans de Abril, Marc; Hammer, Josef; Hartl, Christian; Xie, Zhen

    2011-01-01

    The architecture of the Level-1 Trigger Control and Monitoring system for the CMS experiment is presented. This system has been installed and commissioned on the trigger online computers and is currently used for data taking at the LHC. This is a medium-size distributed system that runs over 40 PCs and 200 processes that control about 4000 electronic boards. It has been designed to handle the trigger configuration and monitoring during data taking as well as all communications with the main run control of CMS. Furthermore its design has foreseen the provision of the software infrastructure for detailed testing of the trigger system during beam down time.

  6. Perspectives on distributed computing : thirty people, four user types, and the distributed computing user experience.

    Energy Technology Data Exchange (ETDEWEB)

    Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science; Univ. of Chicago

    2008-10-15

    This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software

  7. Understanding and Improving the Performance Consistency of Distributed Computing Systems

    NARCIS (Netherlands)

    Yigitbasi, M.N.

    2012-01-01

    With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency and throughput

  8. International Masterclass at CMS

    CERN Multimedia

    Lapka, M

    2012-01-01

    The CMS collaboration welcomed a class of French high school students to the CERN facility in Meyrin, Switzerland on 12 March 2012. Students spent the day meeting with physicists, hearing talks, asking questions, and participating in a hands-on exercise using real data collected by the CMS experiment at the Large Hadron Collider. Talks and other resources are available here: http://ippog-dev.web.cern.ch/resources/2012/ippog-international-masterclass-2012-cms

  9. Arcade: A Web-Java Based Framework for Distributed Computing

    Science.gov (United States)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  10. Experiment Dashboard for Monitoring of the LHC Distributed Computing Systems

    International Nuclear Information System (INIS)

    Andreeva, J; Campos, M Devesas; Cros, J Tarragon; Gaidioz, B; Karavakis, E; Kokoszkiewicz, L; Lanciotti, E; Maier, G; Ollivier, W; Nowotka, M; Rocha, R; Sadykov, T; Saiz, P; Sargsyan, L; Sidorova, I; Tuckett, D

    2011-01-01

    LHC experiments are currently taking collisions data. A distributed computing model chosen by the four main LHC experiments allows physicists to benefit from resources spread all over the world. The distributed model and the scale of LHC computing activities increase the level of complexity of middleware, and also the chances of possible failures or inefficiencies in involved components. In order to ensure the required performance and functionality of the LHC computing system, monitoring the status of the distributed sites and services as well as monitoring LHC computing activities are among the key factors. Over the last years, the Experiment Dashboard team has been working on a number of applications that facilitate the monitoring of different activities: including following up jobs, transfers, and also site and service availabilities. This presentation describes Experiment Dashboard applications used by the LHC experiments and experience gained during the first months of data taking.

  11. 9th International conference on distributed computing and artificial intelligence

    CERN Document Server

    Santana, Juan; González, Sara; Molina, Jose; Bernardos, Ana; Rodríguez, Juan; DCAI 2012; International Symposium on Distributed Computing and Artificial Intelligence 2012

    2012-01-01

    The International Symposium on Distributed Computing and Artificial Intelligence 2012 (DCAI 2012) is a stimulating and productive forum where the scientific community can work towards future cooperation in Distributed Computing and Artificial Intelligence areas. This conference is a forum in which  applications of innovative techniques for solving complex problems will be presented. Artificial intelligence is changing our society. Its application in distributed environments, such as the internet, electronic commerce, environment monitoring, mobile communications, wireless devices, distributed computing, to mention only a few, is continuously increasing, becoming an element of high added value with social and economic potential, in industry, quality of life, and research. These technologies are changing constantly as a result of the large research and technical effort being undertaken in both universities and businesses. The exchange of ideas between scientists and technicians from both the academic and indus...

  12. Distributed computing and artificial intelligence : 10th International Conference

    CERN Document Server

    Neves, José; Rodriguez, Juan; Santana, Juan; Gonzalez, Sara

    2013-01-01

    The International Symposium on Distributed Computing and Artificial Intelligence 2013 (DCAI 2013) is a forum in which applications of innovative techniques for solving complex problems are presented. Artificial intelligence is changing our society. Its application in distributed environments, such as the internet, electronic commerce, environment monitoring, mobile communications, wireless devices, distributed computing, to mention only a few, is continuously increasing, becoming an element of high added value with social and economic potential, in industry, quality of life, and research. This conference is a stimulating and productive forum where the scientific community can work towards future cooperation in Distributed Computing and Artificial Intelligence areas. These technologies are changing constantly as a result of the large research and technical effort being undertaken in both universities and businesses. The exchange of ideas between scientists and technicians from both the academic and industry se...

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  14. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  15. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  16. Monte Carlo in radiotherapy: experience in a distributed computational environment

    Science.gov (United States)

    Caccia, B.; Mattia, M.; Amati, G.; Andenna, C.; Benassi, M.; D'Angelo, A.; Frustagli, G.; Iaccarino, G.; Occhigrossi, A.; Valentini, S.

    2007-06-01

    New technologies in cancer radiotherapy need a more accurate computation of the dose delivered in the radiotherapy treatment plan, and it is important to integrate sophisticated mathematical models and advanced computing knowledge into the treatment planning (TP) process. We present some results on the use of Monte Carlo (MC) codes in dose calculation for treatment planning. A distributed computing resource located in the Technologies and Health Department of the Italian National Institute of Health (ISS), along with other computer facilities (CASPUR - Inter-University Consortium for the Application of Super-Computing for Universities and Research), has been used to perform a complete MC simulation to compute the dose distribution on phantoms irradiated with a radiotherapy accelerator. Using the BEAMnrc and GEANT4 MC-based codes we calculated dose distributions on a plain water phantom and an air/water phantom. Experimental and calculated dose values agreed to within ±2% (for depths between 5 mm and 130 mm), both in PDD (Percentage Depth Dose) and in transversal sections of the phantom. We consider these results a first step towards a system suitable for medical physics departments to simulate a complete treatment plan using remote computing facilities for MC simulations.
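
    A quoted agreement of ±2% between measured and calculated depth-dose curves can be verified with a short script that normalizes both curves to their maxima and reports the largest deviation over the stated depth range; the numbers below are invented sample values, not data from the paper.

      # Illustrative check of PDD agreement between measured and Monte Carlo curves.
      # The depth-dose values below are invented sample numbers, not data from the paper.
      depths   = [5, 15, 50, 100, 130]                    # mm
      measured = [0.95, 1.00, 0.92, 0.78, 0.70]           # relative dose
      computed = [0.94, 1.00, 0.925, 0.785, 0.695]

      def pdd(values):
          m = max(values)
          return [100.0 * v / m for v in values]          # percentage depth dose

      def max_deviation(a, b, depths, lo=5, hi=130):
          return max(abs(x - y) for d, x, y in zip(depths, pdd(a), pdd(b)) if lo <= d <= hi)

      dev = max_deviation(measured, computed, depths)
      print(f"largest PDD deviation: {dev:.2f}%  (within tolerance: {dev <= 2.0})")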

  17. CMS Trigger Performance

    CERN Document Server

    Donato, Silvio

    2017-01-01

    During its second run of operation (Run 2), which started in 2015, the LHC will deliver a peak instantaneous luminosity that may reach 2 × 10^34 cm^-2 s^-1 with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realized by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has been through a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT go through big improvements; in particular, new appr...

  18. Auger Physicists visit CMS

    CERN Multimedia

    Hoch, Michael

    2012-01-01

    Visit at CERN P5 CMS in the experimental cavern Alan Watson, Auger Spokesperson Emeritus, University of Leeds; Jim Cronin, Nobel Laureate, Auger Spokesperson Emeritus, University of Chicago; Jim Virdee, CMS Former Spokesperson, Imperial College; Jim Matthews, Auger Co-Spokesperson, Louisiana State University

  19. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    2010-01-01

    The Agendas and Minutes of the Management Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 The Agendas and Minutes of the Collaboration Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174

  20. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    The Agendas and Minutes of the Management Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223  The Agendas and Minutes of the Collaboration Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 

  1. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    Science.gov (United States)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. Data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for data storage, computing and analysis. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark - a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is constructed on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build the multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools that deal with other domains related to spatial properties. We
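
    The Spark style of parallel processing on which the platform is built can be illustrated with a trivial spatial query using the standard PySpark API; this is a generic sketch and does not use the GISpark or SuperMap GIScript interfaces:

        from pyspark import SparkContext

        sc = SparkContext(master="local[*]", appName="bbox-count-sketch")

        # Illustrative point data: (longitude, latitude) pairs.
        points = sc.parallelize([(116.4, 39.9), (121.5, 31.2), (2.35, 48.85), (-0.13, 51.5)])

        # Count points falling inside a bounding box (a stand-in for a spatial query).
        lon_min, lon_max, lat_min, lat_max = 100.0, 125.0, 30.0, 45.0
        inside = points.filter(
            lambda p: lon_min <= p[0] <= lon_max and lat_min <= p[1] <= lat_max
        )
        print("points in box:", inside.count())

        sc.stop()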

  2. Grid Interoperation with ARC middleware for the CMS experiment

    International Nuclear Information System (INIS)

    Edelmann, Erik; Groenager, Michael; Johansson, Daniel; Kleist, Josva; Field, Laurence; Qing, Di; Frey, Jaime; Happonen, Kalle; Klem, Jukka; Koivumaeki, Jesper; Linden, Tomas; Pirinen, Antti

    2010-01-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both the ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developing specific ARC plugins in CMS software.

  3. Grid Interoperation with ARC middleware for the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Edelmann, Erik; Groenager, Michael; Johansson, Daniel; Kleist, Josva [Nordic DataGrid Facility, Kastruplundgade 22, 1., DK-2770 Kastrup (Denmark); Field, Laurence; Qing, Di [CERN, CH-1211 Geneve 23 (Switzerland); Frey, Jaime [University of Wisconsin-Madison, 1210 W. Dayton St., Madison, WI (United States); Happonen, Kalle; Klem, Jukka; Koivumaeki, Jesper; Linden, Tomas; Pirinen, Antti, E-mail: Jukka.Klem@cern.c [Helsinki Institute of Physics, PO Box 64, FIN-00014 University of Helsinki (Finland)

    2010-04-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both the ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developing specific ARC plugins in CMS software.

  4. Cloud manufacturing distributed computing technologies for global and sustainable manufacturing

    CERN Document Server

    Mehnen, Jörn

    2013-01-01

    Global networks, which are the primary pillars of the modern manufacturing industry and supply chains, can only cope with the new challenges, requirements and demands when supported by new computing and Internet-based technologies. Cloud Manufacturing: Distributed Computing Technologies for Global and Sustainable Manufacturing introduces a new paradigm for scalable service-oriented sustainable and globally distributed manufacturing systems.   The eleven chapters in this book provide an updated overview of the latest technological development and applications in relevant research areas.  Following an introduction to the essential features of Cloud Computing, chapters cover a range of methods and applications such as the factors that actually affect adoption of the Cloud Computing technology in manufacturing companies and new geometrical simplification method to stream 3-Dimensional design and manufacturing data via the Internet. This is further supported case studies and real life data for Waste Electrical ...

  5. Running CMS remote analysis builder jobs on advanced resource connector middleware

    International Nuclear Information System (INIS)

    Edelmann, E; Happonen, K; Koivumäki, J; Lindén, T; Välimaa, J

    2011-01-01

    CMS user analysis jobs are distributed over the grid with the CMS Remote Analysis Builder application (CRAB). According to the CMS computing model the applications should run transparently on the different grid flavours in use. In CRAB this is handled with different plugins that are able to submit to different grids. Recently a CRAB plugin for submitting to the Advanced Resource Connector (ARC) middleware has been developed. The CRAB ARC plugin enables simple and fast job submission with full job status information available. CRAB can be used with a server which manages and monitors the grid jobs on behalf of the user. In the presentation we will report on the CRAB ARC plugin and on the status of integrating it with the CRAB server and compare this with using the gLite ARC interoperability method for job submission.
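
    The plugin mechanism described above can be illustrated with a generic submission-backend registry; the class and function names below are hypothetical and do not reflect the actual CRAB plugin interface:

        class Submitter:
            """Common interface that each grid-flavour plugin implements."""
            def submit(self, job):
                raise NotImplementedError

        class GliteSubmitter(Submitter):
            def submit(self, job):
                return "submitted %s via gLite WMS" % job

        class ArcSubmitter(Submitter):
            def submit(self, job):
                return "submitted %s via ARC" % job

        # A registry lets the framework pick the backend from configuration,
        # keeping the analysis application itself unchanged.
        PLUGINS = {"glite": GliteSubmitter, "arc": ArcSubmitter}

        def submit_job(job, flavour):
            return PLUGINS[flavour]().submit(job)

        print(submit_job("analysis_job_001", "arc"))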

  6. The US-CMS Tier-1 Center Network Evolving toward 100Gbps

    International Nuclear Information System (INIS)

    Bobyshev, A; DeMar, P

    2011-01-01

    Fermilab hosts the US Tier-1 Center for the LHC's Compact Muon Solenoid (CMS) experiment. The Tier-1s are the central points for the processing and movement of LHC data. They sink raw data from the Tier-0 at CERN, process and store it locally, and then distribute the processed data to Tier-2s for simulation studies and analysis. The Fermilab Tier-1 Center is the largest of the CMS Tier-1s, accounting for roughly 35% of the experiment's Tier-1 computing and storage capacity. Providing capacious, resilient network services, both in terms of local network infrastructure and off-site data movement capabilities, presents significant challenges. This article will describe the current architecture, status, and near term plans for network support of the US-CMS Tier-1 facility.

  7. The CMS integration grid testbed

    Energy Technology Data Exchange (ETDEWEB)

    Graham, Gregory E.

    2004-08-26

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent based Mona Lisa. Domain specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two month span in Fall of 2002, over 1 million official CMS GEANT based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids.

  8. Distributed computing for real-time petroleum reservoir monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ayodele, O. R. [University of Alberta, Edmonton, AB (Canada)

    2004-05-01

    Computer software architecture is presented to illustrate how the concept of distributed computing can be applied to real-time reservoir monitoring processes, permitting the continuous monitoring of the dynamic behaviour of petroleum reservoirs at much shorter intervals. The paper describes the fundamental technologies driving distributed computing, namely Java 2 Platform Enterprise edition (J2EE) by Sun Microsystems, and the Microsoft Dot-Net (Microsoft.Net) initiative, and explains the challenges involved in distributed computing. These are: (1) availability of permanently placed downhole equipment to acquire and transmit seismic data; (2) availability of high bandwidth to transmit the data; (3) security considerations; (4) adaptation of existing legacy codes to run on networks as downloads on demand; and (5) credibility issues concerning data security over the Internet. Other applications of distributed computing in the petroleum industry are also considered, specifically MWD, LWD and SWD (measurement-while-drilling, logging-while-drilling, and simulation-while-drilling), and drill-string vibration monitoring. 23 refs., 1 fig.

  9. First Experiences with LHC Grid Computing and Distributed Analysis

    CERN Document Server

    Fisk, Ian

    2010-01-01

    In this presentation the experiences of the LHC experiments using grid computing were presented with a focus on experience with distributed analysis. After many years of development, preparation, exercises, and validation the LHC (Large Hadron Collider) experiments are in operations. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes employing the infrastructure for distributed analysis. At the end the expected evolution and future plans are outlined.

  10. Distributed Computing and Artificial Intelligence, 12th International Conference

    CERN Document Server

    Malluhi, Qutaibah; Gonzalez, Sara; Bocewicz, Grzegorz; Bucciarelli, Edgardo; Giulioni, Gianfranco; Iqba, Farkhund

    2015-01-01

    The 12th International Symposium on Distributed Computing and Artificial Intelligence 2015 (DCAI 2015) is a forum to present applications of innovative techniques for studying and solving complex problems. The exchange of ideas between scientists and technicians from both the academic and industrial sector is essential to facilitate the development of systems that can meet the ever-increasing demands of today’s society. The present edition brings together past experience, current work and promising future trends associated with distributed computing, artificial intelligence and their application in order to provide efficient solutions to real problems. This symposium is organized by the Osaka Institute of Technology, Qatar University and the University of Salamanca.

  11. A Distributed Computing Network for Real-Time Systems.

    Science.gov (United States)

    1980-11-03

    Report metadata only (the scanned cover and documentation page did not yield a readable abstract): Naval Underwater Systems Center, Newport, RI; technical document TD 5932, "A Distributed Computing Network for Real-Time Systems".

  12. Distributed MRI reconstruction using Gadgetron-based cloud computing.

    Science.gov (United States)

    Xue, Hui; Inati, Souheil; Sørensen, Thomas Sangild; Kellman, Peter; Hansen, Michael S

    2015-03-01

    To expand the open source Gadgetron reconstruction framework to support distributed computing and to demonstrate that a multinode version of the Gadgetron can be used to provide nonlinear reconstruction with clinically acceptable latency. The Gadgetron framework was extended with new software components that enable an arbitrary number of Gadgetron instances to collaborate on a reconstruction task. This cloud-enabled version of the Gadgetron was deployed on three different distributed computing platforms ranging from a heterogeneous collection of commodity computers to the commercial Amazon Elastic Compute Cloud. The Gadgetron cloud was used to provide nonlinear, compressed sensing reconstruction on a clinical scanner with low reconstruction latency (e.g., cardiac and neuroimaging applications). The proposed setup was able to handle acquisition and ℓ1-SPIRiT reconstruction of nine high temporal resolution real-time, cardiac short axis cine acquisitions, covering the ventricles for functional evaluation, in under 1 min. A three-dimensional high-resolution brain acquisition with 1 mm³ isotropic pixel size was acquired and reconstructed with nonlinear reconstruction in less than 5 min. A distributed computing enabled Gadgetron provides a scalable way to improve reconstruction performance using commodity cluster computing. Nonlinear, compressed sensing reconstruction can be deployed clinically with low image reconstruction latency. © 2014 Wiley Periodicals, Inc.
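
    A generic sketch of the farm-out pattern, splitting a reconstruction workload over several worker processes; this is illustrative only and is not the Gadgetron API, which has its own gadget/pipeline configuration:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def reconstruct_slice(kspace_slice):
            """Toy 'reconstruction': inverse FFT of one k-space slice."""
            return np.abs(np.fft.ifft2(kspace_slice))

        if __name__ == "__main__":
            # Illustrative k-space data: 32 slices of 128x128 samples.
            kspace = np.random.randn(32, 128, 128) + 1j * np.random.randn(32, 128, 128)

            # Distribute slices over local worker processes; a cloud deployment
            # would farm them out to remote nodes instead.
            with ProcessPoolExecutor() as pool:
                images = list(pool.map(reconstruct_slice, kspace))

            print(len(images), images[0].shape)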

  13. Proceedings: Distributed digital systems, plant process computers, and networks

    International Nuclear Information System (INIS)

    1995-03-01

    These are the proceedings of a workshop on Distributed Digital Systems, Plant Process Computers, and Networks held in Charlotte, North Carolina on August 16--18, 1994. The purpose of the workshop was to provide a forum for technology transfer, technical information exchange, and education. The workshop was attended by more than 100 representatives of electric utilities, equipment manufacturers, engineering service organizations, and government agencies. The workshop consisted of three days of presentations, exhibitions, a panel discussion and attendee interactions. Original plant process computers at nuclear power plants are becoming obsolete, resulting in increasing difficulty in supporting plant operations and maintenance effectively. Some utilities have already replaced their plant process computers with more powerful modern computers, while many other utilities intend to replace their aging plant process computers in the future. Information on recent and planned implementations is presented. Choosing an appropriate communications and computing network architecture facilitates integrating new systems and provides functional modularity for both hardware and software. Control room improvements such as CRT-based distributed monitoring and control, as well as digital decision and diagnostic aids, can improve plant operations. Commercially available digital products connected to the plant communications system are now readily available to provide distributed processing where needed. Plant operations, maintenance activities, and engineering analyses can be supported in a cost-effective manner. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database

  14. STADIC: a computer code for combining probability distributions

    International Nuclear Information System (INIS)

    Cairns, J.J.; Fleming, K.N.

    1977-03-01

    The STADIC computer code uses a Monte Carlo simulation technique for combining probability distributions. The specific function for combining the input distributions is defined by the user by introducing the appropriate FORTRAN statements into the appropriate subroutine. The code generates a Monte Carlo sampling from each of the input distributions and combines these according to the user-supplied function to provide, in essence, a random sampling of the combined distribution. When the desired number of samples is obtained, the output routine calculates the mean, standard deviation, and confidence limits for the resultant distribution. This method of combining probability distributions is particularly useful in cases where analytical approaches are either too difficult or undefined.
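
    The sampling scheme described here is easy to sketch in a few lines; the following is an illustration of the general method (not the STADIC FORTRAN code), with placeholder input distributions and a placeholder combining function:

        import numpy as np

        def combine(samples):
            """User-supplied combining function; here simply the product of the inputs."""
            return samples[0] * samples[1]

        def combine_distributions(distributions, n_samples=100_000, seed=0):
            rng = np.random.default_rng(seed)
            draws = [d(rng, n_samples) for d in distributions]   # sample each input
            combined = combine(draws)                            # combine sample-wise
            mean = combined.mean()
            std = combined.std(ddof=1)
            lo, hi = np.percentile(combined, [5.0, 95.0])        # 90% confidence interval
            return mean, std, (lo, hi)

        inputs = [
            lambda rng, n: rng.lognormal(mean=0.0, sigma=0.5, size=n),
            lambda rng, n: rng.uniform(0.8, 1.2, size=n),
        ]
        print(combine_distributions(inputs))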

  15. A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    JongBeom Lim

    2018-01-01

    Many artificial intelligence applications often require a huge amount of computing resources. As a result, cloud computing adoption rates are increasing in the artificial intelligence field. To support the demand for artificial intelligence applications and guarantee the service level agreement, cloud computing should provide not only computing resources but also fundamental mechanisms for efficient computing. In this regard, a snapshot protocol has been used to create a consistent snapshot of the global state in cloud computing environments. However, the existing snapshot protocols are not optimized in the context of artificial intelligence applications, where large-scale iterative computation is the norm. In this paper, we present a distributed snapshot protocol for efficient artificial intelligence computation in cloud computing environments. The proposed snapshot protocol is based on a distributed algorithm to run interconnected multiple nodes in a scalable fashion. Our snapshot protocol is able to deal with artificial intelligence applications, in which a large number of computing nodes are running. We reveal that our distributed snapshot protocol guarantees the correctness, safety, and liveness conditions.
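
    For orientation, a minimal sketch of the classic marker-based snapshot idea that such protocols build on is given below, assuming reliable FIFO channels; the paper's protocol is tailored to iterative workloads and is not reproduced here:

        class SnapshotNode:
            """Marker-based snapshot logic for one node (simplified Chandy-Lamport style)."""

            def __init__(self, name, channels_in, send):
                self.name = name
                self.send = send                                   # callable(dest_channel, message)
                self.recording = False
                self.local_state = None
                self.channel_state = {c: [] for c in channels_in}  # in-flight messages per channel
                self.marker_seen = {c: False for c in channels_in}

            def start_snapshot(self, state, channels_out):
                self.local_state = state                           # record own state first
                self.recording = True
                for c in channels_out:
                    self.send(c, "MARKER")                         # then flood markers downstream

            def on_message(self, channel, message, state, channels_out):
                if message == "MARKER":
                    if not self.recording:
                        self.start_snapshot(state, channels_out)
                    self.marker_seen[channel] = True               # channel recording ends here
                elif self.recording and not self.marker_seen[channel]:
                    self.channel_state[channel].append(message)    # in-flight message joins the snapshot

            def complete(self):
                return self.recording and all(self.marker_seen.values())

        # Tiny usage example with one incoming and one outgoing channel.
        sent = []
        node = SnapshotNode("A", channels_in=["B->A"], send=lambda c, m: sent.append((c, m)))
        node.start_snapshot(state={"iteration": 7}, channels_out=["A->B"])
        node.on_message("B->A", {"partial_sum": 0.25}, state=None, channels_out=["A->B"])
        node.on_message("B->A", "MARKER", state=None, channels_out=["A->B"])
        print(node.channel_state, node.complete())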

  16. Secure Computation, I/O-Efficient Algorithms and Distributed Signatures

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Kölker, Jonas; Toft, Tomas

    2012-01-01

    values of the form (r, g^r) for random secret-shared r ∈ ℤq and g^r in a group of order q. This costs a constant number of exponentiations per player per value generated, even if less than n/3 players are malicious. This can be used for efficient distributed computing of Schnorr signatures. We further develop...... the technique so we can sign secret data in a distributed fashion at essentially the same cost....
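
    For reference, the textbook Schnorr signature that such shared (r, g^r) pairs feed into has the following form (standard notation; the paper's distributed signing protocol itself is not reproduced here):

        % Key pair: secret x, public y = g^x, in a group <g> of prime order q.
        \[
          R = g^{r}, \qquad c = H(R, m), \qquad s = r + c\,x \bmod q,
          \qquad \text{verify: } g^{s} \stackrel{?}{=} R\, y^{c} .
        \]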

  17. Organization of the secure distributed computing based on multi-agent system

    Science.gov (United States)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Methods for distributed computing currently receive much attention. One such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers can be exposed to security threats from the computational processes themselves. The authors have developed a unified agent algorithm for a control system that manages the operation of computing network nodes, with ordinary networked PCs used as computing nodes. The proposed multi-agent control system makes it possible, in a short time, to harness the processing power of the computers in any existing network for solving large tasks by setting up a distributed computation. Agents on a computer network can configure the distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the machines on the network. The number of computers connected to the network can be increased by attaching computers to the new system, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of the computing processes in a changing environment (a dynamically changing number of computers on the network). The developed multi-agent system detects cases of falsification of results in the distributed system, which could otherwise lead to wrong decisions; in addition, the system checks and corrects wrong results.

  18. Distributed computing system with dual independent communications paths between computers and employing split tokens

    Science.gov (United States)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic balance of the loads. The system comprises a plurality of computers each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces for providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces providing each computer with the ability to establish a communications link with another of the computers bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communications between respective ones of the computers are by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers and wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message whereby collisions between messages are detected and avoided.
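
    The split-token idea can be summarized with a small data structure; the sketch below is based only on the description above and is not the patented implementation (all names are illustrative):

        from dataclasses import dataclass, field

        @dataclass
        class ResidentPortion:
            """Second portion of a split token: stays in one computer's memory."""
            host: str                       # computer holding this portion
            data: dict = field(default_factory=dict)

        @dataclass
        class MovingPortion:
            """First portion of a split token: passed from computer to computer."""
            function: str                   # function the receiving computer should execute
            resident_host: str              # where the resident second portion lives
            resident_key: str               # how to locate it there

        # A token representing "run calibrate() on the data stored at node-3".
        resident = ResidentPortion(host="node-3", data={"samples": [1.0, 2.5, 3.1]})
        moving = MovingPortion(function="calibrate", resident_host="node-3", resident_key="samples")
        print(moving)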

  19. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    Science.gov (United States)

    1991-06-01

    Report metadata only (the scanned documentation page did not yield a readable abstract): interim report, "Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent Executive Computer Communication", John Lyman and Carla J. Conaway, University of California at Los Angeles; a related paper appears in Proceedings of the National Conference on Artificial Intelligence, pages 181-184, American Association for Artificial Intelligence, Pittsburgh.

  20. Integration of cloud resources in the LHCb distributed computing

    International Nuclear Information System (INIS)

    García, Mario Úbeda; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel; Muñoz, Víctor Méndez

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  1. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) - it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  2. The future of PanDA in ATLAS distributed computing

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.

  3. CMS analysis school model

    International Nuclear Information System (INIS)

    Malik, S; Bloom, K; Shipsey, I; Cavanaugh, R; Klima, B; Chan, Kai-Feng; D'Hondt, J; Narain, M; Palla, F; Rolandi, G; Schörner-Sadenius, T

    2014-01-01

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data taking, the nature of the earlier training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized as CMSDAS, has proven to be key for new and young physicists to jump-start and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase engagement of the myriad talents in the development of physics, service, upgrade, education of those new to CMS and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.

  4. CMS Analysis School Model

    Energy Technology Data Exchange (ETDEWEB)

    Malik, S. [Nebraska U.; Shipsey, I. [Purdue U.; Cavanaugh, R. [Illinois U., Chicago; Bloom, K. [Nebraska U.; Chan, Kai-Feng [Taiwan, Natl. Taiwan U.; D' Hondt, J. [Vrije U., Brussels; Klima, B. [Fermilab; Narain, M. [Brown U.; Palla, F. [INFN, Pisa; Rolandi, G. [CERN; Schörner-Sadenius, T. [DESY

    2014-01-01

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data taking, the nature of the earlier training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized as CMSDAS, has proven to be key for new and young physicists to jump-start and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase engagement of the myriad talents in the development of physics, service, upgrade, education of those new to CMS and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.

  5. Pharmacokinetics of Colistin Methansulphonate (CMS) and Colistin after CMS Nebulisation in Baboon Monkeys.

    Science.gov (United States)

    Marchand, Sandrine; Bouchene, Salim; de Monte, Michèle; Guilleminault, Laurent; Montharu, Jérôme; Cabrera, Maria; Grégoire, Nicolas; Gobin, Patrice; Diot, Patrice; Couet, William; Vecellio, Laurent

    2015-10-01

    The objective of this study was to compare two different nebulizers, the Eflow Rapid® and the Pari LC Star®, by scintigraphy and PK modeling to simulate epithelial lining fluid concentrations from measured plasma concentrations after nebulization of CMS in baboons. Three baboons received CMS by IV infusion and by two types of aerosol generators, and colistin by subcutaneous infusion. Gamma imaging was performed after nebulisation to determine colistin distribution in the lungs. Blood samples were collected during 9 h and colistin and CMS plasma concentrations were measured by LC-MS/MS. A population pharmacokinetic analysis was conducted and simulations were performed to predict lung concentrations after nebulization. Higher aerosol distribution into the lungs was observed by scintigraphy when CMS was nebulized with the Pari LC Star® than with the Eflow Rapid® nebulizer. This observation was confirmed by the fraction of CMS deposited into the lung (3.5% versus 1.3%, respectively). CMS and colistin simulated concentrations in epithelial lining fluid were higher with the Pari LC Star® than with the Eflow Rapid® system. A limited fraction of CMS reaches the lungs after nebulization, but higher colistin plasma concentrations were measured and higher intrapulmonary colistin concentrations were simulated with the Pari LC Star® than with the Eflow Rapid® system.

  6. Protect Heterogeneous Environment Distributed Computing from Malicious Code Assignment

    Directory of Open Access Journals (Sweden)

    V. S. Gorbatov

    2011-09-01

    The paper describes the practical implementation of a system that protects distributed computing in a heterogeneous environment from malicious code in assigned tasks. The choice of technologies, the development of the data structures, and a performance evaluation of the implemented security system are presented.

  7. Computed tomography of surface related radionuclide distributions ('BONN'-tomography)

    International Nuclear Information System (INIS)

    Bockisch, A.; Koenig, R.

    1989-01-01

    A method called the 'BONN' tomography is described to produce planar projections of circular activity distributions using standard single photon emission computed tomography. The clinical value of the method is demonstrated for bone scans of the jaw, thorax, and pelvis. Numerical or projection-related problems are discussed. (orig.)

  8. Distributed Computing with Centralized Support Works at Brigham Young.

    Science.gov (United States)

    McDonald, Kelly; Stone, Brad

    1992-01-01

    Brigham Young University (Utah) has addressed the need for maintenance and support of distributed computing systems on campus by implementing a program patterned after a national business franchise, providing the support and training of a centralized administration but allowing each unit to operate much as an independent small business.…

  9. The Future of PanDA in ATLAS Distributed Computing

    CERN Document Server

    De, Kaushik; The ATLAS collaboration; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyze the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favor of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addi...

  10. CMS tracker visualization tools

    CERN Document Server

    Zito, G; Osborne, I; Regano, A

    2005-01-01

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each of which is a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  11. CMS tracker visualization tools

    Energy Technology Data Exchange (ETDEWEB)

    Mennea, M.S. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Osborne, I. [Northeastern University, 360 Huntington Avenue, Boston, MA 02115 (United States); Regano, A. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Zito, G. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy)]. E-mail: giuseppe.zito@ba.infn.it

    2005-08-21

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each of which is a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  12. CMS tracker visualization tools

    International Nuclear Information System (INIS)

    Mennea, M.S.; Osborne, I.; Regano, A.; Zito, G.

    2005-01-01

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each of which is a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  13. CMS brochure (English version)

    CERN Document Server

    2017-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which started up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet. CMS est la plus lourde des expériences du LHC, l'accélérateur de particules le plus puissant au monde qui a été mis en service en 2008. Les détecteurs de cette expérience polyvalente sont placés autour d'un puissant aimant supraconducteur.

  14. CMS brochure (French version)

    CERN Document Server

    2017-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which started up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet. CMS est la plus lourde des expériences du LHC, l'accélérateur de particules le plus puissant au monde qui a été mis en service en 2008. Les détecteurs de cette expérience polyvalente sont placés autour d'un puissant aimant supraconducteur.

  15. CMS Higgs boson results

    CERN Document Server

    Bluj, Michal Jacek

    2018-01-01

    In this report we review recent Higgs boson results obtained with pp collisions at $\\sqrt{s}=\\,$13 TeV recorded by the CMS detector in 2016, corresponding to an integrated luminosity of 35.9 fb$^{\\text{-1}}$. The 2016 data allowed the observation of the $H \\to \\tau\\tau$ and $H \\to WW$ decays with high significance. We also present a combined measurement based on a full set of CMS analyses performed with the 2016 data. These results are compatible with the standard model predictions, with the precision of several measurements exceeding that of the combination of ATLAS and CMS data collected in 2011 and 2012.

  16. Data Scouting in CMS

    CERN Document Server

    Anderson, Dustin James

    2016-01-01

    In 2011, the CMS collaboration introduced Data Scouting as a way to produce physics results with events that cannot be stored on disk, due to resource limits in the data acquisition and offline infrastructure. The viability of this technique was demonstrated in 2012, when 18 fb$^{-1}$ of collision data at $\\sqrt{s}$ = 8 TeV were collected. The technique is now a standard ingredient of CMS and ATLAS data-taking strategy. In this talk, we present the status of data scouting in CMS and the improvements introduced in 2015 and 2016, which promoted data scouting to a full-fledged, flexible discovery tool for the LHC Run II.

  17. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2013-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort has been invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visua...

  18. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2014-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort has been invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visua...

  19. Vertical Load Distribution for Cloud Computing via Multiple Implementation Options

    Science.gov (United States)

    Phan, Thomas; Li, Wen-Syan

    Cloud computing looks to deliver software as a provisioned service to end users, but the underlying infrastructure must be sufficiently scalable and robust. In our work, we focus on large-scale enterprise cloud systems and examine how enterprises may use a service-oriented architecture (SOA) to provide a streamlined interface to their business processes. To scale up the business processes, each SOA tier usually deploys multiple servers for load distribution and fault tolerance, a scenario which we term horizontal load distribution. One limitation of this approach is that load cannot be distributed further when all servers in the same tier are loaded. In complex multi-tiered SOA systems, a single business process may actually be implemented by multiple different computation pathways among the tiers, each with different components, in order to provide resilience and scalability. Such multiple implementation options give opportunities for vertical load distribution across tiers. In this chapter, we look at a novel request routing framework for SOA-based enterprise computing with multiple implementation options that takes into account the options of both horizontal and vertical load distribution.
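
    A toy request router can illustrate the horizontal/vertical distinction; the pathway and server names below are hypothetical and this is not the framework described in the chapter:

        # Each implementation option is a pathway through different tiers;
        # each tier is served by several replicated servers.
        PATHWAYS = {
            "option_a": {"web": ["w1", "w2"], "app": ["a1", "a2"], "db": ["d1"]},
            "option_b": {"web": ["w3"], "cache": ["c1", "c2"], "db": ["d2"]},
        }
        LOAD = {s: 0 for opt in PATHWAYS.values() for tier in opt.values() for s in tier}

        def route(request_id):
            # Vertical distribution: choose the pathway whose servers are least loaded overall.
            option = min(
                PATHWAYS,
                key=lambda o: sum(LOAD[s] for tier in PATHWAYS[o].values() for s in tier),
            )
            # Horizontal distribution: within each tier of that pathway, pick the least-loaded server.
            chosen = {tier: min(servers, key=LOAD.get) for tier, servers in PATHWAYS[option].items()}
            for server in chosen.values():
                LOAD[server] += 1
            return option, chosen

        for i in range(3):
            print(route(i))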

  20. Use of the gLite-WMS in CMS for production and analysis

    International Nuclear Information System (INIS)

    Codispoti, G; Grandi, C; Fanfani, A; Bonacorsi, D; Spiga, D; Sciaba', A; Lemaitre, S; Litmaath, M; Calas, Y; Cinquilli, M; Farina, F; Miccio, V; Sartirana, A; Dongiovanni, D; Cesini, D; Fanzago, F; Lacaprara, S; Belforte, S; Wakefield, S; Hernandez, J

    2010-01-01

    The CMS experiment at the LHC started using the Resource Broker (by the EDG and LCG projects) to submit Monte Carlo production and analysis jobs to distributed computing resources of the WLCG infrastructure over 6 years ago. Since 2006 the gLite Workload Management System (WMS) and Logging and Bookkeeping (LB) are used. The interaction with the gLite-WMS/LB happens through the CMS production and analysis frameworks, respectively ProdAgent and CRAB, through a common component, BOSSLite. The important improvements recently made in the gLite-WMS/LB, as well as in the CMS tools, and the intrinsic independence of different WMS/LB instances allow CMS to reach the stability and scalability needed for LHC operations. In particular, the use of a multi-threaded approach in BOSSLite made it possible to increase the scalability of the system significantly. In this work we present the operational set-up of CMS production and analysis based on the gLite-WMS and the performance obtained in the past data challenges and in the daily Monte Carlo production and user analysis activity in the experiment.
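
    The multi-threaded submission pattern mentioned above can be illustrated generically; the sketch below uses a dummy submission function and is not BOSSLite code:

        import time
        from concurrent.futures import ThreadPoolExecutor

        def submit_to_wms(job_id):
            """Stand-in for a call to a WMS submission endpoint (network-bound)."""
            time.sleep(0.1)                 # simulated round-trip latency
            return "%s: submitted" % job_id

        jobs = ["job_%03d" % i for i in range(20)]

        # Submitting in parallel threads hides the per-job latency,
        # which is what raises the overall submission rate.
        with ThreadPoolExecutor(max_workers=5) as pool:
            for result in pool.map(submit_to_wms, jobs):
                print(result)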

  1. Distributed interactive graphics applications in computational fluid dynamics

    International Nuclear Information System (INIS)

    Rogers, S.E.; Buning, P.G.; Merritt, F.J.

    1987-01-01

    Implementation of two distributed graphics programs used in computational fluid dynamics is discussed. Both programs are interactive in nature. They run on a CRAY-2 supercomputer and use a Silicon Graphics Iris workstation as the front-end machine. The hardware and supporting software are from the Numerical Aerodynamic Simulation project. The supercomputer does all numerically intensive work and the workstation, as the front-end machine, allows the user to perform real-time interactive transformations on the displayed data. The first program was written as a distributed program that computes particle traces for fluid flow solutions existing on the supercomputer. The second is an older post-processing and plotting program modified to run in a distributed mode. Both programs have realized a large increase in speed over that obtained using a single machine. By using these programs, one can learn quickly about complex features of a three-dimensional flow field. Some color results are presented

  2. submitter Studies of CMS data access patterns with machine learning techniques

    CERN Document Server

    De Luca, Silvia

    This thesis presents a study of the Grid data access patterns in distributed analysis in the CMS experiment at the LHC accelerator. This study ranges from a deep analysis of the historical patterns of access to the most relevant data types in CMS, to the exploitation of a supervised Machine Learning classification system to set up machinery able to eventually predict future data access patterns - i.e. the so-called dataset “popularity” of the CMS datasets on the Grid - with a focus on specific data types. All the CMS workflows run on the Worldwide LHC Computing Grid (WLCG) computing centers (Tiers), and in particular the distributed analysis system sustains hundreds of users and the applications they submit every day. These applications (or “jobs”) access different data types hosted on disk storage systems at a large set of WLCG Tiers. The detailed study of how this data is accessed, in terms of data types, hosting Tiers, and different time periods, allows one to gain precious insight into storage occupancy ove...
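
    As a schematic illustration of such a supervised set-up (synthetic features and labels, scikit-learn classifier; not the thesis code or data):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Synthetic per-dataset features: past accesses, distinct users, size (GB), age (weeks).
        X = rng.random((2000, 4)) * [1000, 50, 5000, 100]
        # Synthetic "popular next period" label loosely tied to past accesses and users.
        y = ((X[:, 0] / 1000 + X[:, 1] / 50) / 2 + 0.1 * rng.standard_normal(2000) > 0.5).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
        print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))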

  3. Configuration monitoring tool for large-scale distributed computing

    International Nuclear Information System (INIS)

    Wu, Y.; Graham, G.; Lu, X.; Afaq, A.; Kim, B.J.; Fisk, I.

    2004-01-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN will likely use a grid system to achieve much of its offline processing need. Given the heterogeneous and dynamic nature of grid systems, it is desirable to have in place a configuration monitor. The configuration monitoring tool is built using the Globus toolkit and web services. It consists of an information provider for the Globus MDS, a relational database for keeping track of the current and old configurations, and client interfaces to query and administer the configuration system. The Grid Security Infrastructure (GSI), together with EDG Java Security packages, are used for secure authentication and transparent access to the configuration information across the CMS grid. This work has been prototyped and tested using US-CMS grid resources
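
    As a generic illustration of keeping current and old configurations side by side in a relational database (SQLite here; the tool described above is built on the Globus toolkit and web services, so this is a sketch of the idea rather than its implementation):

        import json
        import sqlite3
        import time

        conn = sqlite3.connect(":memory:")
        conn.execute(
            """CREATE TABLE config_history (
                   resource  TEXT,
                   recorded  REAL,
                   config    TEXT      -- JSON blob describing the configuration
               )"""
        )

        def record_config(resource, config):
            conn.execute(
                "INSERT INTO config_history VALUES (?, ?, ?)",
                (resource, time.time(), json.dumps(config, sort_keys=True)),
            )

        def current_config(resource):
            row = conn.execute(
                "SELECT config FROM config_history WHERE resource = ? "
                "ORDER BY recorded DESC LIMIT 1",
                (resource,),
            ).fetchone()
            return json.loads(row[0]) if row else None

        record_config("site-A", {"worker_nodes": 120, "storage_tb": 300})
        record_config("site-A", {"worker_nodes": 150, "storage_tb": 300})
        print(current_config("site-A"))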

  4. Configuration monitoring tool for large-scale distributed computing

    CERN Document Server

    Wu, Y; Fisk, I; Graham, G; Kim, B J; Lü, X

    2004-01-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN will likely use a grid system to achieve much of its offline processing need. Given the heterogeneous and dynamic nature of grid systems, it is desirable to have in place a configuration monitor. The configuration monitoring tool is built using the Globus toolkit and web services. It consists of an information provider for the Globus MDS, a relational database for keeping track of the current and old configurations, and client interfaces to query and administer the configuration system. The Grid Security Infrastructure (GSI), together with EDG Java Security packages, are used for secure authentication and transparent access to the configuration information across the CMS grid. This work has been prototyped and tested using US-CMS grid resources.

  5. The CMS Integration Grid Testbed

    CERN Document Server

    Graham, G E; Aziz, Shafqat; Bauerdick, L.A.T.; Ernst, Michael; Kaiser, Joseph; Ratnikova, Natalia; Wenzel, Hans; Wu, Yu-jun; Aslakson, Erik; Bunn, Julian; Iqbal, Saima; Legrand, Iosif; Newman, Harvey; Singh, Suresh; Steenberg, Conrad; Branson, James; Fisk, Ian; Letts, James; Arbree, Adam; Avery, Paul; Bourilkov, Dimitri; Cavanaugh, Richard; Rodriguez, Jorge Luis; Kategari, Suchindra; Couvares, Peter; DeSmet, Alan; Livny, Miron; Roy, Alain; Tannenbaum, Todd; Graham, Gregory E.; Aziz, Shafqat; Ernst, Michael; Kaiser, Joseph; Ratnikova, Natalia; Wenzel, Hans; Wu, Yujun; Aslakson, Erik; Bunn, Julian; Iqbal, Saima; Legrand, Iosif; Newman, Harvey; Singh, Suresh; Steenberg, Conrad; Branson, James; Fisk, Ian; Letts, James; Arbree, Adam; Avery, Paul; Bourilkov, Dimitri; Cavanaugh, Richard; Rodriguez, Jorge; Kategari, Suchindra; Couvares, Peter; Smet, Alan De; Livny, Miron; Roy, Alain; Tannenbaum, Todd

    2003-01-01

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent based Mona Lisa. Domain specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two month span in Fall of 2002, over 1 million official CMS GEANT based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. ...

  6. CMS results in Electroweak Physics

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    We present the results of electroweak studies performed using data collected in 2010 at a center-of-mass energy of 7 TeV by the CMS experiment at the LHC. Besides their intrinsic interest as unique samples to calibrate and understand the CMS detector response to leptons, jets and missing energy, events containing W and Z bosons appear as dominant components in many Higgs searches and in most of the searches beyond the Standard Model, either as signal or as background. In addition, the excellent level of theoretical and experimental understanding of these processes allows electroweak tests at the LHC at an unprecedented level of precision. CMS uses a wide range of final states to measure cross sections, asymmetries, polarizations and differential distributions in general. The current integrated luminosity is already sufficient to perform not just inclusive measurements using W and Z decays into muons and electrons, but also precise studies of associated jet production and final states containing taus, as well...

  7. The CMS link system

    International Nuclear Information System (INIS)

    Vila, I.

    1999-01-01

    The Compact Muon Solenoid (CMS) is a multi-purpose detector that is going to be installed in the future Large Hadron Collider (LHC) at CERN. Muons are one of the main physical signatures of the expected new physics. The muons are going to be detected by the Central Tracker (CT) and the Muon Spectrometer (MS). Both the CT and the MS can provide an independent muon momentum measurement, but for all η and momentum values the highest precision for muon momentum measurement is achieved when the muon tracks are reconstructed using both tracking detectors. The calorimeters and the solenoid volumes separate the CT and the MS by about three meters. It has been shown that the alignment of the CT with respect to the MS cannot be guaranteed by a software alignment in a reasonable time scale. Therefore, an opto-mechanical system (the multipoint link system) has been designed to monitor, on-line, the relative position of both sub-detectors, providing a common reference frame for both of them. The local alignment of the muon barrel spectrometer determines the relative position of the muon chambers with respect to one another and also with respect to a rigid carbon-fiber structure called MAB (Module for the Alignment of the Barrel). There are a total of 36 MABs distributed in the boundary planes of each muon spectrometer sector. This paper describes all the equipment and presents the principle of measurement. (author)

  8. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride. Edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  9. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    Science.gov (United States)

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
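
    A small Monte Carlo sketch of the relationship the abstract relies on, namely that the distribution of minimum resolution is obtained from the distribution of the ratio of adjacent peak heights, which in turn follows from the peak-height distribution. The log-normal parameters below are arbitrary illustration values; the actual change of variables to minimum resolution follows the equations of the paper rather than anything shown here.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.0, 0.8          # illustrative log-scale parameters of the peak-height distribution
n = 200_000

# heights of two adjacent peaks, drawn independently from the same log-normal
h1 = rng.lognormal(mu, sigma, size=n)
h2 = rng.lognormal(mu, sigma, size=n)
ratio = h2 / h1

# the ratio of two independent log-normals is itself log-normal,
# with log-scale mean 0 and standard deviation sigma * sqrt(2)
print(np.std(np.log(ratio)), sigma * np.sqrt(2.0))
```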

  10. AGIS: Evolution of Distributed Computing information system for ATLAS

    Science.gov (United States)

    Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.

    2015-12-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  11. Guide to cloud computing for business and technology managers from distributed computing to cloudware applications

    CERN Document Server

    Kale, Vivek

    2014-01-01

    Guide to Cloud Computing for Business and Technology Managers: From Distributed Computing to Cloudware Applications unravels the mystery of cloud computing and explains how it can transform the operating contexts of business enterprises. It provides a clear understanding of what cloud computing really means, what it can do, and when it is practical to use. Addressing the primary management and operation concerns of cloudware, including performance, measurement, monitoring, and security, this pragmatic book:Introduces the enterprise applications integration (EAI) solutions that were a first ste

  12. CMS Space Monitoring

    Science.gov (United States)

    Ratnikova, N.; Huang, C.-H.; Sanchez-Hernandez, A.; Wildish, T.; Zhang, X.

    2014-06-01

    During the first LHC run, CMS stored about one hundred petabytes of data. Storage accounting and monitoring help to meet the challenges of storage management, such as efficient space utilization, fair share between users and groups and resource planning. We present a newly developed CMS space monitoring system based on the storage metadata dumps produced at the sites. The information extracted from the storage dumps is aggregated and uploaded to a central database. A web based data service is provided to retrieve the information for a given time interval and a range of sites, so it can be further aggregated and presented in the desired format. The system has been designed based on the analysis of CMS monitoring requirements and experiences of the other LHC experiments. In this paper, we demonstrate how the existing software components of the CMS data placement system, PhEDEx, have been re-used, dramatically reducing the development effort.
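
    A rough sketch of the aggregation step described above, assuming a simplified dump format with one "path size" record per line and an SQLite table in place of the central database; the paths, site name and schema are invented for the example.

```python
import sqlite3
import time

# hypothetical dump format: one "<path> <size-in-bytes>" record per line
DUMP = """/store/data/RunA/file1.root 2000000000
/store/data/RunA/file2.root 3000000000
/store/mc/SampleX/file3.root 1500000000"""

def aggregate(dump, depth=3):
    """Sum file sizes up to a fixed directory depth."""
    totals = {}
    for line in dump.splitlines():
        path, size = line.rsplit(" ", 1)
        top = "/".join(path.split("/")[:depth + 1])
        totals[top] = totals.get(top, 0) + int(size)
    return totals

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage (site TEXT, dir TEXT, bytes INTEGER, ts INTEGER)")
now = int(time.time())
for directory, nbytes in aggregate(DUMP).items():
    conn.execute("INSERT INTO usage VALUES (?, ?, ?, ?)", ("T2_Example", directory, nbytes, now))

# retrieve the information for a given time interval and a range of sites,
# as the web-based data service would
rows = conn.execute("SELECT site, dir, bytes FROM usage WHERE ts BETWEEN ? AND ? AND site LIKE ?",
                    (now - 3600, now + 3600, "T2%")).fetchall()
print(rows)
```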

  13. CMS Financial Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — This section contains the annual CMS financial statements as required under the Chief Financial Officers (CFO) Act of 1990 (P.L. 101-576). The CFO Act marked a major...

  14. CMS Statistics Reference Booklet

    Data.gov (United States)

    U.S. Department of Health & Human Services — The annual CMS Statistics reference booklet provides a quick reference for summary information about health expenditures and the Medicare and Medicaid health...

  15. CMS Space Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ratnikova, N. [Fermilab; Huang, C.-H. [Fermilab; Sanchez-Hernandez, A. [CINVESTAV, IPN; Wildish, T. [Princeton U.; Zhang, X. [Beijing, Inst. High Energy Phys.

    2014-01-01

    During the first LHC run, CMS stored about one hundred petabytes of data. Storage accounting and monitoring help to meet the challenges of storage management, such as efficient space utilization, fair share between users and groups and resource planning. We present a newly developed CMS space monitoring system based on the storage metadata dumps produced at the sites. The information extracted from the storage dumps is aggregated and uploaded to a central database. A web based data service is provided to retrieve the information for a given time interval and a range of sites, so it can be further aggregated and presented in the desired format. The system has been designed based on the analysis of CMS monitoring requirements and experiences of the other LHC experiments. In this paper, we demonstrate how the existing software components of the CMS data placement system, PhEDEx, have been re-used, dramatically reducing the development effort.

  16. CMS cavern inspection robot

    CERN Document Server

    Ibrahim, Ibrahim

    2017-01-01

    Robots which are immune to the CMS cavern environment and are wirelessly controlled: one actuated by smart materials (Ionic Polymer-Metal Composites and Macro Fiber Composites), one regular brushed DC rover, one servo-driven rover, and one stair-climbing robot.

  17. Grid Interoperation with ARC Middleware for the CMS Experiment

    CERN Document Server

    Edelmann, Erik; Frey, Jaime; Gronager, Michael; Happonen, Kalle; Johansson, Daniel; Kleist, Josva; Klem, Jukka; Koivumaki, Jesper; Linden, Tomas; Pirinen, Antti; Qing, Di

    2010-01-01

    The Compact Muon Solenoid (CMS) is one of the general-purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developi...

  18. Distributed parallel computing in stochastic modeling of groundwater systems.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
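
    The batch processing of independent stochastic realizations parallelizes naturally; the sketch below shows that pattern with Python's multiprocessing pool, where the realization function is only a stand-in for a MODFLOW run and the pool replaces the Java Parallel Processing Framework setup used in the study.

```python
from multiprocessing import Pool
import numpy as np

def run_realization(seed):
    """Stand-in for one stochastic model run (a real study would build and run MODFLOW here)."""
    rng = np.random.default_rng(seed)
    conductivity = rng.lognormal(mean=-4.0, sigma=1.0, size=(50, 50))   # random parameter field
    return float(conductivity.mean())                                   # placeholder summary statistic

if __name__ == "__main__":
    with Pool(processes=8) as pool:                 # batch the realizations across local cores
        results = pool.map(run_realization, range(500))
    print(len(results), float(np.mean(results)))
```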

  19. Software for Distributed Computation on Medical Databases: A Demonstration Project

    Directory of Open Access Journals (Sweden)

    Balasubramanian Narasimhan

    2017-05-01

    Full Text Available Bringing together the information latent in distributed medical databases promises to personalize medical care by enabling reliable, stable modeling of outcomes with rich feature sets (including patient characteristics and treatments received. However, there are barriers to aggregation of medical data, due to lack of standardization of ontologies, privacy concerns, proprietary attitudes toward data, and a reluctance to give up control over end use. Aggregation of data is not always necessary for model fitting. In models based on maximizing a likelihood, the computations can be distributed, with aggregation limited to the intermediate results of calculations on local data, rather than raw data. Distributed fitting is also possible for singular value decomposition. There has been work on the technical aspects of shared computation for particular applications, but little has been published on the software needed to support the "social networking" aspect of shared computing, to reduce the barriers to collaboration. We describe a set of software tools that allow the rapid assembly of a collaborative computational project, based on the flexible and extensible R statistical software and other open source packages, that can work across a heterogeneous collection of database environments, with full transparency to allow local officials concerned with privacy protections to validate the safety of the method. We describe the principles, architecture, and successful test results for the site-stratified Cox model and rank-k singular value decomposition.
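
    A toy illustration of distributed likelihood fitting in which only intermediate statistics, never raw records, leave each site; the example uses a logistic model with synthetic data and plain Newton-Raphson, which is a simplification of the site-stratified Cox model and rank-k SVD described in the article.

```python
import numpy as np

def local_stats(X, y, beta):
    """Per-site contribution to the score vector and information matrix; raw records stay local."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    info = X.T @ (X * (p * (1.0 - p))[:, None])
    return grad, info

rng = np.random.default_rng(0)
beta_true = np.array([0.5, -1.0, 2.0])

# three "sites", each holding its own records
sites = []
for _ in range(3):
    X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))
    sites.append((X, y))

# coordinator: aggregate only the intermediate statistics and take Newton steps
beta = np.zeros(3)
for _ in range(20):
    grads, infos = zip(*(local_stats(X, y, beta) for X, y in sites))
    beta = beta + np.linalg.solve(sum(infos), sum(grads))
print(beta)   # close to beta_true
```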

  20. HEP@Home - A distributed computing system based on BOINC

    CERN Document Server

    Amorim, A; Andrade, P; Amorim, Antonio; Villate, Jaime; Andrade, Pedro

    2005-01-01

    Project SETI@HOME has proven to be one of the biggest successes of distributed computing in recent years. With a quite simple approach, SETI manages to process large volumes of data using a vast amount of distributed computer power. To extend the generic usage of this kind of distributed computing tool, BOINC is being developed. In this paper we propose HEP@HOME, a BOINC version tailored to the specific requirements of the High Energy Physics (HEP) community. HEP@HOME will be able to process large amounts of data using virtually unlimited computing power, as BOINC does, and it should be able to work according to HEP specifications. In HEP the amounts of data to be analyzed or reconstructed are of central importance. Therefore, one of the design principles of this tool is to avoid data transfer. This will allow scientists to run their analysis applications and take advantage of a large number of CPUs. This tool also satisfies other important requirements in HEP, namely, security, fault-tolerance an...

  1. Cryptographically Secure Multiparty Computation and Distributed Auctions Using Homomorphic Encryption

    Directory of Open Access Journals (Sweden)

    Anunay Kulshrestha

    2017-12-01

    Full Text Available We introduce a robust framework that allows for cryptographically secure multiparty computations, such as distributed private value auctions. The security is guaranteed by two-sided authentication of all network connections, homomorphically encrypted bids, and the publication of zero-knowledge proofs of every computation. This also allows a non-participant verifier to verify the result of any such computation using only the information broadcasted on the network by each individual bidder. Building on previous work on such systems, we design and implement an extensible framework that puts the described ideas to practice. Apart from the actual implementation of the framework, our biggest contribution is the level of protection we are able to guarantee from attacks described in previous work. In order to provide guidance to users of the library, we analyze the use of zero knowledge proofs in ensuring the correct behavior of each node in a computation. We also describe the usage of the library to perform a private-value distributed auction, as well as the other challenges in implementing the protocol, such as auction registration and certificate distribution. Finally, we provide performance statistics on our implementation of the auction.
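
    A toy Paillier example showing the additive homomorphism that lets encrypted bids be combined without decrypting them; the primes are far too small to be secure, and a real system such as the one described also needs two-sided authentication and zero-knowledge proofs, which are omitted here.

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

# toy key generation with tiny primes -- purely illustrative, not secure
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)            # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# homomorphic property: the product of ciphertexts decrypts to the sum of the bids
bids = [12, 40, 7]
total_ct = 1
for c in (encrypt(b) for b in bids):
    total_ct = (total_ct * c) % n2
print(decrypt(total_ct), sum(bids))   # both 59
```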

  2. Improving flow distribution in influent channels using computational fluid dynamics.

    Science.gov (United States)

    Park, No-Suk; Yoon, Sukmin; Jeong, Woochang; Lee, Seungjae

    2016-10-01

    The flow distribution in an influent channel, where the inflow is split between the treatment processes of a wastewater treatment plant, greatly affects the efficiency of the process, and a weir is the typical structure used to distribute the flow; nevertheless, to the authors' knowledge, there is a paucity of research on the flow distribution in an open channel with a weir. In this study, the influent channel of a real-scale wastewater treatment plant was used, with a suppressed rectangular weir whose horizontal crest crosses the full channel width. The flow distribution in the influent channel was analyzed using a validated computational fluid dynamics model to investigate (1) the comparison of single-phase and two-phase simulation, (2) the procedure for improving the prototype channel, and (3) the effect of the inflow rate on flow distribution. The results show that the two-phase simulation is more reliable because it describes the free-surface fluctuations. Preventing short-circuit flow should be the first consideration when improving flow distribution, and differences in kinetic energy with the inflow rate lead to different flow-distribution trends. The authors believe that this case study is helpful for improving flow distribution in an influent channel.

  3. The CMS Electronic Logbook

    CERN Multimedia

    Bukowiec, S; Beccati, B; Behrens, U; Biery, K; Branson, J; Cano, E; Cheung, H; Ciganek, M; Cittolin, S; Coarasa Perez, J A; Deldicque, C; Erhan, S; Gigi, D; Glege, F; Gomez-Reino, R; Hatton, D; Hwong, Y L; Loizides, C; Ma, F; Masetti, L; Meijers, F; Meschi, E; Meyer, A; Mommsen, R K; Moser, R; O’Dell, V; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Racz, A; Raginel, O; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Shpakov, D; Simon, M; Sumorok, K; Sungho Yoon, A

    2010-01-01

    The CMS ELogbook (ELog) is a collaborative tool, which provides a platform to share and store information about various events or problems occurring in the Compact Muon Solenoid (CMS) experiment at CERN during operation. The ELog is based on a Model–View–Controller (MVC) software architectural pattern and uses an Oracle database to store messages and attachments. The ELog is developed as a pluggable web component in Oracle Portal in order to provide better management, monitoring and security.

  4. Forward physics with CMS

    CERN Document Server

    Grothe, Monika

    2008-01-01

    Forward physics with CMS at the LHC covers a wide range of physics subjects, including very low-x_Bj QCD, underlying event and multiple interactions characteristics, gamma-mediated processes, shower development at the energy scale of primary cosmic ray interactions with the atmosphere, diffraction in the presence of a hard scale and even MSSM Higgs discovery in central exclusive production. Selected feasibility studies to illustrate the forward physics potential of CMS are presented.

  5. File and metadata management for BESIII distributed computing

    International Nuclear Information System (INIS)

    Nicholson, C; Zheng, Y H; Lin, L; Deng, Z Y; Li, W D; Zhang, X M

    2012-01-01

    The BESIII experiment at the Institute of High Energy Physics (IHEP), Beijing, uses the high-luminosity BEPCII e+e− collider to study physics in the τ-charm energy region around 3.7 GeV; BEPCII has produced the world's largest samples of J/ψ and ψ′ events to date. An order of magnitude increase in the data sample size over the 2011-2012 data-taking period demanded a move from a very centralized to a distributed computing environment, as well as the development of an efficient file and metadata management system. While BESIII is on a smaller scale than some other HEP experiments, this poses particular challenges for its distributed computing and data management system. These constraints include limited resources and manpower, and low quality of network connections to IHEP. Drawing on the rich experience of the HEP community, a system has been developed which meets these constraints. The design and development of the BESIII distributed data management system, including its integration with other BESIII distributed computing components, such as job management, are presented here.

  6. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
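
    A compact sketch of the manager-worker strategy mentioned above, using a Python process pool in place of PVM; each "block" solve is a dummy relaxation sweep standing in for the TEAM flow solver, and handing blocks to whichever worker is free gives the task-queue style of load balancing.

```python
from multiprocessing import Pool
import numpy as np

def solve_block(block_id):
    """Worker: advance one grid block (a dummy relaxation sweep standing in for the flow solver)."""
    rng = np.random.default_rng(block_id)
    u = rng.random((64, 64, 64))
    for _ in range(10):
        u = 0.5 * (u + np.roll(u, 1, axis=0))
    return block_id, float(u.mean())

if __name__ == "__main__":
    # manager: hand blocks to whichever worker is free (task-queue load balancing)
    with Pool(processes=4) as pool:
        for block_id, mean_u in pool.imap_unordered(solve_block, range(16)):
            print(f"block {block_id:2d} done, mean {mean_u:.4f}")
```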

  7. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  8. CMS geometry through 2020

    International Nuclear Information System (INIS)

    Osborne, I; Brownson, E; Eulisse, G; Jones, C D; Sexton-Kennedy, E; Lange, D J

    2014-01-01

    CMS faces real challenges with the upgrade of the CMS detector through 2020 and beyond. One of the challenges, from the software point of view, is managing upgrade simulations with the same software release as the 2013 scenario. We present the CMS geometry description software model, its integration with the CMS event setup and core software. The CMS geometry configuration and selection is implemented in Python. The tools collect the Python configuration fragments into a script used in the CMS workflow. This flexible and automated geometry configuration allows choosing either the transient or the persistent version of a scenario, as well as a specific version of that scenario. We describe how the geometries are integrated and validated, and how we define and handle different geometry scenarios in simulation and reconstruction. We discuss how to transparently manage multiple incompatible geometries in the same software release. Several examples are shown, based on the current implementation, assuring a consistent choice of scenario conditions. The consequences and implications for multiple/different code algorithms are discussed.
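
    A hypothetical sketch of how Python configuration fragments for different geometry scenarios might be collected into a single load script; the scenario names and fragment names are invented for illustration and do not correspond to the actual CMS configuration files.

```python
# hypothetical registry of geometry configuration fragments, keyed by scenario name
GEOMETRY_SCENARIOS = {
    "2013":         ["tracker_2013_cfi", "muon_2013_cfi", "calo_2013_cfi"],
    "2015_upgrade": ["tracker_phase1_cfi", "muon_2013_cfi", "calo_2013_cfi"],
    "2020_upgrade": ["tracker_phase2_cfi", "muon_upgrade_cfi", "calo_upgrade_cfi"],
}

def build_geometry_script(scenario):
    """Collect the Python fragments for one scenario into a single load script."""
    try:
        fragments = GEOMETRY_SCENARIOS[scenario]
    except KeyError:
        raise ValueError(f"unknown geometry scenario: {scenario}") from None
    lines = [f"# geometry scenario: {scenario}"]
    lines += [f"process.load('{fragment}')" for fragment in fragments]
    return "\n".join(lines)

print(build_geometry_script("2020_upgrade"))
```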

  9. Pulmonary blood flow distribution measured by radionuclide computed tomography

    International Nuclear Information System (INIS)

    Maeda, H.; Itoh, H.; Ishii, Y.

    1982-01-01

    Distributions of pulmonary blood flow per unit lung volume were measured in sitting patients with radionuclide computed tomography (RCT) using intravenously administered Tc-99m macroaggregates of human serum albumin (MAA). Four different types of distribution were distinguished, among which a group referred to as type 2 had a three-zonal blood flow distribution as previously reported (West and co-workers, 1964). The pulmonary arterial pressure (Pa) and the venous pressure (Pv) were determined in this distribution group. These values showed satisfactory agreement with the pulmonary artery pressure (Par) and the capillary wedged pressure (Pcw) measured by Swan-Ganz catheter in eighteen supine patients. These good correlations make it possible to establish a noninvasive methodology for the measurement of pulmonary vascular pressures.

  10. CMS Data Analysis: Current Status and Future Strategy

    CERN Document Server

    Innocente, V

    2003-01-01

    We present the current status of CMS data analysis architecture and describe work on future Grid-based distributed analysis prototypes. CMS has two main software frameworks related to data analysis: COBRA, the main framework, and IGUANA, the interactive visualisation framework. Software using these frameworks is used today in the world-wide production and analysis of CMS data. We describe their overall design and present examples of their current use with emphasis on interactive analysis. CMS is currently developing remote analysis prototypes, including one based on Clarens, a Grid-enabled client-server tool. Use of the prototypes by CMS physicists will guide us in forming a Grid-enriched analysis strategy. The status of this work is presented, as is an outline of how we plan to leverage the power of our existing frameworks in the migration of CMS software to the Grid.

  11. Common accounting system for monitoring the ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Karavakis, E; Andreeva, J; Campana, S; Saiz, P; Gayazov, S; Jezequel, S; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  12. Detector Alignment Studies for the CMS Experiment

    CERN Document Server

    Lampén, Tapio

    2007-01-01

    This thesis presents studies related to track-based alignment for the future CMS experiment at CERN. Excellent geometric alignment is crucial to fully benefit from the outstanding resolution of individual sensors. The large number of sensors makes it difficult in CMS to utilize computationally demanding alignment algorithms. A computationally light alignment algorithm, called the Hits and Impact Points algorithm (HIP), is developed and studied. It is based on minimization of the hit residuals. It can be applied to individual sensors or to composite objects. All six alignment parameters (three translations and three rotations), or a subgroup of them, can be considered. The algorithm is expected to be particularly suitable for the alignment of the innermost part of CMS, the pixel detector, during its early operation, but can be easily utilized to align other parts of CMS also. The HIP algorithm is applied to simulated CMS data and real data measured with a test-beam setup. The simulation studies dem...
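
    A minimal numerical sketch of residual-based alignment in the spirit of HIP: for one sensor, two translations and a small rotation are estimated by least squares from the differences between predicted track impact points and measured hits. The geometry, noise level and linearized model are illustrative simplifications, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# predicted track impact points on one sensor (local x, y) and the measured hits,
# which differ by an unknown misalignment: translation (dx, dy) and a small rotation dphi
pred = rng.uniform(-3.0, 3.0, size=(200, 2))
dx_true, dy_true, dphi_true = 0.12, -0.05, 0.002
c, s = np.cos(dphi_true), np.sin(dphi_true)
hits = pred @ np.array([[c, -s], [s, c]]).T + np.array([dx_true, dy_true])
hits += rng.normal(scale=0.01, size=hits.shape)          # hit resolution

# linearized residual model: hits - pred ~ (dx, dy) + dphi * (-y, x); solve by least squares
A = np.zeros((2 * len(pred), 3))
A[0::2, 0] = 1.0
A[1::2, 1] = 1.0
A[0::2, 2] = -pred[:, 1]
A[1::2, 2] = pred[:, 0]
b = (hits - pred).ravel()
params, *_ = np.linalg.lstsq(A, b, rcond=None)
print(params)   # approximately [0.12, -0.05, 0.002]
```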

  13. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover improvements to database service scalability through client connection management; platform-independent, multi-tier scalable database access through connection multiplexing and caching; and a secure authentication and authorisation scheme integrated with existing grid services. We will summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.

  14. On the computation of momentum distributions within wavepacket propagation calculations

    International Nuclear Information System (INIS)

    Feuerstein, Bernold; Thumm, Uwe

    2003-01-01

    We present a new method to extract momentum distributions from time-dependent wavepacket calculations. In contrast to established Fourier transformation of the spatial wavepacket at a fixed time, the proposed 'virtual detector' method examines the time dependence of the wavepacket at a fixed position. In first applications to the ionization of model atoms and the dissociation of H₂⁺, we find a significant reduction of computing time and are able to extract reliable fragment momentum distributions by using a comparatively small spatial numerical grid for the time-dependent wavefunction
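
    For contrast with the virtual-detector idea, the sketch below computes a momentum distribution by the established route the abstract mentions, i.e. Fourier transformation of the spatial wavepacket at a fixed time; the Gaussian packet and grid parameters are arbitrary illustration values.

```python
import numpy as np

# spatial grid and a Gaussian wavepacket with mean momentum k0 (dimensionless units)
N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k0, sigma = 2.0, 5.0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * k0 * x)

# established route: Fourier transform of the spatial wavepacket at a fixed time
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)
P = np.abs(phi) ** 2                             # momentum distribution |phi(k)|^2

print("peak at k ~", k[np.argmax(P)])            # close to k0 = 2.0
print("norm     ~", P.sum() * (2 * np.pi / L))   # close to 1
```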

  15. Radar data processing using a distributed computational system

    Science.gov (United States)

    Mota, Gilberto F.

    1992-06-01

    This research specifies and validates a new concurrent decomposition scheme, called Confined Space Search Decomposition (CSSD), to exploit parallelism of Radar Data Processing algorithms using a Distributed Computational System. To formalize the specification, we propose and apply an object-oriented methodology called Decomposition Cost Evaluation Model (DCEM). To reduce the penalties of load imbalance, we propose a distributed dynamic load balance heuristic called Object Reincarnation (OR). To validate the research, we first compare our decomposition with an identified alternative using the proposed DCEM model and then develop a theoretical prediction of selected parameters. We also develop a simulation to check the Object Reincarnation Concept.

  16. Implementation of NASTRAN on the IBM/370 CMS operating system

    Science.gov (United States)

    Britten, S. S.; Schumacker, B.

    1980-01-01

    The NASA Structural Analysis (NASTRAN) computer program is operational on the IBM 360/370 series computers. While execution of NASTRAN has been described and implemented under the virtual storage operating systems of the IBM 370 models, the IBM 370/168 computer can also operate in a time-sharing mode under the virtual machine operating system using the Conversational Monitor System (CMS) subset. The changes required to make NASTRAN operational under the CMS operating system are described.

  17. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  18. CMS Analysis and Data Reduction with Apache Spark

    Energy Technology Data Exchange (ETDEWEB)

    Gutsche, Oliver [Fermilab; Canali, Luca [CERN; Cremer, Illia [Magnetic Corp., Waltham; Cremonesi, Matteo [Fermilab; Elmer, Peter [Princeton U.; Fisk, Ian [Flatiron Inst., New York; Girone, Maria [CERN; Jayatilaka, Bo [Fermilab; Kowalkowski, Jim [Fermilab; Khristenko, Viktor [CERN; Motesnitsalis, Evangelos [CERN; Pivarski, Jim [Princeton U.; Sehrish, Saba [Fermilab; Surdy, Kacper [CERN; Svyatkovskiy, Alexey [Princeton U.

    2017-10-31

    Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was among the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems for distributed data processing, collectively called "Big Data" technologies, have emerged from industry and open source projects to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and tools, promising a fresh look at analysis of very large datasets that could potentially reduce the time-to-physics with increased interactivity. Moreover these new tools are typically actively developed by large communities, often profiting from industry resources, and under open source licensing. These factors result in a boost for adoption and maturity of the tools and for the communities supporting them, at the same time helping in reducing the cost of ownership for the end-users. In this talk, we are presenting studies of using Apache Spark for end user data analysis. We are studying the HEP analysis workflow separated into two thrusts: the reduction of centrally produced experiment datasets and the end-analysis up to the publication plot. Studying the first thrust, CMS is working together with CERN openlab and Intel on the CMS Big Data Reduction Facility. The goal is to reduce 1 PB of official CMS data to 1 TB of ntuple output for analysis. We are presenting the progress of this 2-year project with first results of scaling up Spark-based HEP analysis. Studying the second thrust, we are presenting studies on using Apache Spark for a CMS Dark Matter physics search, comparing Spark's feasibility, usability and performance to the ROOT-based analysis.
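
    A minimal PySpark sketch of the reduction thrust described above: filtering events and projecting a few columns from a large flat dataset into an ntuple-like output. The input path, column names and cuts are invented for illustration and do not reflect the actual CMS data formats.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("reduction-sketch").getOrCreate()

# hypothetical flat input: one row per event with a few analysis-level columns
events = spark.read.parquet("hdfs:///example/flat_events.parquet")

# reduction step: keep only the events and columns one analysis needs
ntuple = (events
          .filter((F.col("met") > 200.0) & (F.col("njets") >= 2))
          .select("run", "lumi", "event", "met", "jet_pt", "jet_eta"))

ntuple.write.mode("overwrite").parquet("hdfs:///example/reduced_ntuple.parquet")
spark.stop()
```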

  19. 11th International Conference on Distributed Computing and Artificial Intelligence

    CERN Document Server

    Bersini, Hugues; Corchado, Juan; Rodríguez, Sara; Pawlewski, Paweł; Bucciarelli, Edgardo

    2014-01-01

    The 11th International Symposium on Distributed Computing and Artificial Intelligence 2014 (DCAI 2014) is a forum to present applications of innovative techniques for studying and solving complex problems. The exchange of ideas between scientists and technicians from both the academic and industrial sector is essential to facilitate the development of systems that can meet the ever-increasing demands of today’s society. The present edition brings together past experience, current work and promising future trends associated with distributed computing, artificial intelligence and their application in order to provide efficient solutions to real problems. This year’s technical program presents both high quality and diversity, with contributions in well-established and evolving areas of research (Algeria, Brazil, China, Croatia, Czech Republic, Denmark, France, Germany, Ireland, Italy, Japan, Malaysia, Mexico, Poland, Portugal, Republic of Korea, Spain, Taiwan, Tunisia, Ukraine, United Kingdom), representing ...

  20. The BaBar experiment's distributed computing model

    International Nuclear Information System (INIS)

    Boutigny, D.

    2001-01-01

    In order to face the expected increase in statistics between now and 2005, the BaBar experiment at SLAC is evolving its computing model toward a distributed multitier system. It is foreseen that data will be spread among Tier-A centers and deleted from the SLAC center. A uniform computing environment is being deployed in the centers, the network bandwidth is continuously increased and data distribution tools have been designed in order to reach a transfer rate of ∼100 TB of data per year. In parallel, smaller Tier-B and C sites receive subsets of data, presently in Kanga-ROOT format and later in Objectivity format. GRID tools will be used for remote job submission.

  1. The BaBar Experiment's Distributed Computing Model

    International Nuclear Information System (INIS)

    Gowdy, Stephen J.

    2002-01-01

    In order to face the expected increase in statistics between now and 2005, the BaBar experiment at SLAC is evolving its computing model toward a distributed multi-tier system. It is foreseen that data will be spread among Tier-A centers and deleted from the SLAC center. A uniform computing environment is being deployed in the centers, the network bandwidth is continuously increased and data distribution tools have been designed in order to reach a transfer rate of ∼100 TB of data per year. In parallel, smaller Tier-B and C sites receive subsets of data, presently in Kanga-ROOT[1] format and later in Objectivity[2] format. GRID tools will be used for remote job submission.

  2. Integrating Xgrid into the HENP distributed computing model

    International Nuclear Information System (INIS)

    Hajdu, L; Lauret, J; Kocoloski, A; Miller, M

    2008-01-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortlessly within reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology

  3. Integrating Xgrid into the HENP distributed computing model

    Science.gov (United States)

    Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.

    2008-07-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortlessly within reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  4. Development of distributed computer systems for future nuclear power plants

    International Nuclear Information System (INIS)

    Yan, G.; L'Archeveque, J.V.R.

    1978-01-01

    Dual computers have been used for direct digital control in CANDU power reactors since 1963. However, as reactor plants have grown in size and complexity, some drawbacks to centralized control have appeared, such as the surprisingly large amount of cabling required for information transmission. Dramatic changes in the costs of components and a desire to improve system performance have stimulated a broad-based research and development effort in distributed systems. This paper outlines work in this area.

  5. Brookhaven Reactor Experiment Control Facility, a distributed function computer network

    International Nuclear Information System (INIS)

    Dimmler, D.G.; Greenlaw, N.; Kelley, M.A.; Potter, D.W.; Rankowitz, S.; Stubblefield, F.W.

    1975-11-01

    A computer network for real-time data acquisition, monitoring and control of a series of experiments at the Brookhaven High Flux Beam Reactor has been developed and has been set into routine operation. This reactor experiment control facility presently services nine neutron spectrometers and one x-ray diffractometer. Several additional experiment connections are in progress. The architecture of the facility is based on a distributed function network concept. A statement of implementation and results is presented

  6. The CMS workload management system

    Energy Technology Data Exchange (ETDEWEB)

    Cinquilli, M. [CERN; Evans, D. [Fermilab; Foulkes, S. [Fermilab; Hufnagel, D. [Fermilab; Mascheroni, M. [CERN; Norman, M. [UC, San Diego; Maxa, Z. [Caltech; Melo, A. [Vanderbilt U.; Metson, S. [Bristol U.; Riahi, H. [INFN, Perugia; Ryu, S. [Fermilab; Spiga, D. [CERN; Vaandering, E. [Fermilab; Wakefield, Stuart [Imperial Coll., London; Wilkinson, R. [Caltech

    2012-01-01

    CMS has started the process of rolling out a new workload management system. This system is currently used for reprocessing and Monte Carlo production with tests under way using it for user analysis. It was decided to combine, as much as possible, the production/processing, analysis and T0 codebases so as to reduce duplicated functionality and make best use of limited developer and testing resources. This system now includes central request submission and management (Request Manager), a task queue for parcelling up and distributing work (WorkQueue) and agents which process requests by interfacing with disparate batch and storage resources (WMAgent).
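
    A toy sketch of the division of labour described above: a request is parcelled into work units, which independent agents pull and process until the queue is drained. It is a deliberate simplification of the Request Manager/WorkQueue/WMAgent roles and not the actual code.

```python
import queue
import threading

def split_request(request_id, n_events, chunk=1000):
    """Parcel one processing request into work units (the WorkQueue role, much simplified)."""
    return [(request_id, first, min(first + chunk, n_events))
            for first in range(0, n_events, chunk)]

work_queue = queue.Queue()
for unit in split_request("ReReco_exampleRequest", 10_000):
    work_queue.put(unit)

def agent(name):
    """Agent loop: pull work units and 'process' them until the queue is drained."""
    while True:
        try:
            request_id, first, last = work_queue.get_nowait()
        except queue.Empty:
            return
        print(f"{name}: {request_id} events [{first}, {last})")
        work_queue.task_done()

threads = [threading.Thread(target=agent, args=(f"agent-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```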

  7. The CMS workload management system

    International Nuclear Information System (INIS)

    Cinquilli, M; Mascheroni, M; Spiga, D; Evans, D; Foulkes, S; Hufnagel, D; Ryu, S; Vaandering, E; Norman, M; Maxa, Z; Wilkinson, R; Melo, A; Metson, S; Riahi, H; Wakefield, S

    2012-01-01

    CMS has started the process of rolling out a new workload management system. This system is currently used for reprocessing and Monte Carlo production with tests under way using it for user analysis. It was decided to combine, as much as possible, the production/processing, analysis and T0 codebases so as to reduce duplicated functionality and make best use of limited developer and testing resources. This system now includes central request submission and management (Request Manager); a task queue for parcelling up and distributing work (WorkQueue) and agents which process requests by interfacing with disparate batch and storage resources (WMAgent).

  8. Software Quality Measurement for Distributed Systems. Volume 3. Distributed Computing Systems: Impact on Software Quality.

    Science.gov (United States)

    1983-07-01

    Distributed Computing Systems: impact on software quality. Topics discussed include "C3I Application", "Space Systems Network", "Need for Distributed Database Management", and "Adaptive Routing". This is discussed in the last part ... data reduction, buffering, encryption, and error detection and correction functions. Examples of such data streams include imagery data and video.

  9. A compositional reservoir simulator on distributed memory parallel computers

    International Nuclear Information System (INIS)

    Rame, M.; Delshad, M.

    1995-01-01

    This paper presents the application of distributed memory parallel computers to field-scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general-purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/960 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes the porting to new parallel platforms straightforward. Results on the distributed-memory computing performance of the parallel simulator are presented for field-scale applications such as tracer flood and polymer flood. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.
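
    A serial toy version of the domain-decomposition pattern described above: the grid is split into subdomains extended by ghost cells, the ghost cells are refreshed from the neighbours before each sweep, and each subdomain is then updated independently. In the real simulator the exchange happens by message passing between processors; here everything runs in one process for clarity.

```python
import numpy as np

# split a 1D field of 400 cells into 4 subdomains, each padded with one ghost cell per side
global_field = np.sin(np.linspace(0.0, np.pi, 400))
subdomains = [np.zeros(102) for _ in range(4)]
for rank, sub in enumerate(subdomains):
    sub[1:-1] = global_field[rank * 100:(rank + 1) * 100]

def exchange_ghosts(subs):
    """Copy boundary values between neighbouring subdomains (zero-gradient outer edges)."""
    for rank, sub in enumerate(subs):
        sub[0] = subs[rank - 1][-2] if rank > 0 else sub[1]
        sub[-1] = subs[rank + 1][1] if rank < len(subs) - 1 else sub[-2]

for _ in range(200):                      # explicit diffusion sweeps on each subdomain
    exchange_ghosts(subdomains)
    for sub in subdomains:
        sub[1:-1] += 0.25 * (sub[:-2] - 2.0 * sub[1:-1] + sub[2:])

print(np.concatenate([s[1:-1] for s in subdomains]).shape)   # (400,), the stitched field
```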

  10. Distributed user interfaces for clinical ubiquitous computing applications.

    Science.gov (United States)

    Bång, Magnus; Larsson, Anders; Berglund, Erik; Eriksson, Henrik

    2005-08-01

    Ubiquitous computing with multiple interaction devices requires new interface models that support user-specific modifications to applications and facilitate the fast development of active workspaces. We have developed NOSTOS, a computer-augmented work environment for clinical personnel to explore new user interface paradigms for ubiquitous computing. NOSTOS uses several devices such as digital pens, an active desk, and walk-up displays that allow the system to track documents and activities in the workplace. We present the distributed user interface (DUI) model that allows standalone applications to distribute their user interface components to several devices dynamically at run-time. This mechanism permits clinicians to develop their own user interfaces and forms for clinical information systems to match their specific needs. We discuss the underlying technical concepts of DUIs and show how service discovery, component distribution, events and layout management are dealt with in the NOSTOS system. Our results suggest that DUIs, and similar network-based user interfaces, will be a prerequisite of future mobile user interfaces and essential for developing clinical multi-device environments.

  11. Computationally intensive econometrics using a distributed matrix-programming language.

    Science.gov (United States)

    Doornik, Jurgen A; Hendry, David F; Shephard, Neil

    2002-06-15

    This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling.

  12. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel

    2012-06-01

    Understanding the influence of multiple parameters in a complex simulation setting is a difficult task. In the ideal case, the scientist can freely steer such a simulation and is immediately presented with the results for a certain configuration of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute for the time-consuming simulation. The surrogate model we propose is based on the sparse grid technique, and we identify the main computational tasks associated with its evaluation and its extension. We further show how distributed data management combined with the specific use of accelerators allows us to approximate and deliver simulation results to a high-resolution visualization system in real-time. This significantly enhances the steering workflow and facilitates the interactive exploration of large datasets. © 2012 IEEE.
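
    A small sketch of the surrogate idea, using ordinary grid interpolation in place of the sparse grid technique described in the record: the expensive simulation is sampled offline on a parameter grid, and the steering front end then queries the cheap interpolant. The function and parameter ranges are invented for the example.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def expensive_simulation(a, b):
    """Stand-in for a time-consuming simulation with two input parameters."""
    return np.sin(3.0 * a) * np.exp(-b) + 0.1 * a * b

# offline phase: sample the parameter space once and tabulate the response
a_grid = np.linspace(0.0, 1.0, 21)
b_grid = np.linspace(0.0, 2.0, 21)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
table = expensive_simulation(A, B)

# online phase: the steering front end queries the cheap surrogate instead of the simulation
surrogate = RegularGridInterpolator((a_grid, b_grid), table)
print(surrogate([[0.37, 1.2]])[0], expensive_simulation(0.37, 1.2))
```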

  13. CMS (Compact Muon Solenoid)

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The milestone workshops on LHC experiments in Aachen in 1990 and at Evian in 1992 provided the first sketches of how LHC detectors might look. The concept of a compact general-purpose LHC experiment based on a solenoid to provide the magnetic field was first discussed at Aachen, and the formal Expression of Interest was aired at Evian. It was here that the Compact Muon Solenoid (CMS) name first became public. Optimizing first the muon detection system is a natural starting point for a high luminosity (interaction rate) proton-proton collider experiment. The compact CMS design called for a strong magnetic field, of some 4 Tesla, using a superconducting solenoid, originally about 14 metres long and 6 metres bore. (By LHC standards, this warrants the adjective 'compact'.) The main design goals of CMS are: 1 - a very good muon system providing many possibilities for momentum measurement (physicists call this a 'highly redundant' system); 2 - the best possible electromagnetic calorimeter consistent with the above; 3 - high quality central tracking to achieve both the above; and 4 - an affordable detector. Overall, CMS aims to detect cleanly the diverse signatures of new physics by identifying and precisely measuring muons, electrons and photons over a large energy range at very high collision rates, while also exploiting the lower luminosity initial running. As well as proton-proton collisions, CMS will also be able to look at the muons emerging from LHC heavy ion beam collisions. The Evian CMS conceptual design foresaw the full calorimetry inside the solenoid, with emphasis on precision electromagnetic calorimetry for picking up photons. (A light Higgs particle will probably be seen via its decay into photon pairs.) The muon system now foresaw four stations. Inner tracking would use silicon microstrips and microstrip gas chambers, with over 10⁷ channels offering high track finding efficiency. In the central CMS barrel, the tracking elements are

  14. Distributed computing testbed for a remote experimental environment

    International Nuclear Information System (INIS)

    Butner, D.N.; Casper, T.A.; Howard, B.C.; Henline, P.A.; Davis, S.L.; Barnes, D.

    1995-01-01

    Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a ''Collaboratory.'' The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility

  15. Use of glide-ins in CMS for production and analysis

    International Nuclear Information System (INIS)

    Bradley, D; Gutsche, O; Holzman, B; Sfiligoi, I; Vaandering, E; Hahn, K; Padhi, S; Pi, H; Wuerthwein, F; Spiga, D

    2010-01-01

    With the evolution of various grid federations, Condor glide-ins represent a key feature in providing a homogeneous pool of resources using late-binding technology. The CMS collaboration uses the glide-in based Workload Management System, glideinWMS, for production (ProdAgent) and distributed analysis (CRAB) of the data. The Condor glide-in daemons are submitted to the worker nodes via Condor-G. Once activated, they preserve the master-worker relationship: each glide-in first validates the execution environment on its worker node and then pulls jobs sequentially until its lifetime expires. The combination of late binding and validation significantly reduces the overall failure rate visible to CMS physicists. We discuss the extensive use of glideinWMS since the CCRC-08 computing challenge in preparation for the forthcoming LHC data-taking period. The key features essential to the success of large-scale production and analysis on CMS resources across the major grid federations, including EGEE, OSG and NorduGrid, are outlined. The use of glide-ins via the CRAB server mechanism and ProdAgent, as well as first-hand experience of using the next-generation CREAM computing element within the CMS framework, is discussed.
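
    The late-binding behaviour described above can be illustrated with a short, self-contained sketch (this is not glideinWMS code; the lifetime, the validation check and the job names are invented): the pilot validates its node first and only then binds work to it, pulling jobs until its lifetime expires.

      # Minimal sketch of the late-binding pilot pattern (not glideinWMS code).
      import time, shutil, queue

      PILOT_LIFETIME_S = 5                     # hypothetical pilot lifetime

      def node_is_usable():
          # cheap validation of the execution environment before any payload runs
          return shutil.disk_usage("/").free > 1_000_000_000

      def pilot(work_queue):
          if not node_is_usable():
              return                           # failed validation: no job is ever bound here
          deadline = time.time() + PILOT_LIFETIME_S
          while time.time() < deadline:
              try:
                  job = work_queue.get_nowait()   # late binding: the job is chosen only now
              except queue.Empty:
                  break
              print("running payload", job)

      if __name__ == "__main__":
          q = queue.Queue()
          for i in range(3):
              q.put("analysis-job-%d" % i)
          pilot(q)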

  16. Evacuation drill at CMS

    CERN Multimedia

    Niels Dupont-Sagorin and Christoph Schaefer

    2012-01-01

    Training personnel, including evacuation guides and shifters, checking procedures, improving collaboration with the CERN Fire Brigade: the first real-life evacuation drill at CMS took place on Friday 3 February from 12p.m. to 3p.m. in the two caverns located at Point 5 of the LHC.   CERN personnel during the evacuation drill at CMS. Evacuation drills are required by law and have to be organized periodically in all areas of CERN, both above and below ground. The last drill at CMS, which took place in June 2007, revealed some desiderata, most notably the need for a public address system. With this equipment in place, it is now possible to broadcast audio messages from the CMS control room to the underground areas.   The CMS Technical Coordination Team and the GLIMOS have focused particularly on preparing collaborators for emergency situations by providing training and organizing regular safety drills with the HSE Unit and the CERN Fire Brigade. This Friday, the practical traini...

  17. CMS Thesis Award

    CERN Multimedia

    2004-01-01

    The 2003 CMS thesis award was presented to Riccardo Ranieri on 15 March for his Ph.D. thesis "Trigger Selection of WH → μ ν b bbar with CMS" where 'WH → μ ν b bbar' represents the associated production of the W boson and the Higgs boson and their subsequent decays. Riccardo received his Ph.D. from the University of Florence and was supervised by Carlo Civinini. In total nine theses were nominated for the award, which was judged on originality, impact within the field of high energy physics, impact within CMS and clarity of writing. Gregory Snow, secretary of the awarding committee, explains why Riccardo's thesis was chosen: "The search for the Higgs boson is one of the main physics goals of CMS. Riccardo's thesis helps the experiment to formulate the strategy which will be used in that search." Lorenzo Foà, Chairperson of the CMS Collaboration Board, presented Riccardo with a commemorative engraved plaque. He will also receive the opportunity to...

  18. Successful initiation of and management through a distributed computer upgrade

    International Nuclear Information System (INIS)

    Barich, F.T.; Crawford, T.H.

    1995-01-01

    Processing capacity, the lack of data analysis tools, obsolescence, and spare parts issues are forcing utilities to upgrade or replace their plant computer systems with newer, larger systems. As a result, the utility faces an increasing number of new technologies, such as fiber optics and communication standards (FDDI, ATM, etc.), graphical user interfaces using X-Windows, and distributed architectures that eliminate the host-based computer. Technologies such as these, if properly applied, can greatly enhance the capabilities and functions of the existing system. Besides this, the utility also faces functionality previously not available through the plant computer, such as integrated plant monitoring and digital controls, voice, imaging, etc. With computing technology vastly changing from traditional host systems, the utility confronts the question, "what are my needs (now and for the future), and what new system can meet those needs most effectively?" This paper describes the management process necessary to define the needs and then carry out a successful computer replacement project

  19. An ATLAS distributed computing architecture for HL-LHC

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2017-01-01

    The ATLAS collaboration started a process to understand the computing needs for the High Luminosity LHC era. Based on our best understanding of the computing model input parameters for the HL-LHC data taking conditions, results indicate the need for a larger amount of computational and storage resources with respect to the projection of a constant yearly budget for computing in 2026. Filling the gap between the projection and the needs will be one of the challenges in preparation for LHC Run-4. While the gains from improvements in offline software will play a crucial role in this process, a different model for data processing, management, access and bookkeeping should also be envisaged to optimise resource usage. In this contribution we will describe a straw-man version of this model, founded on basic principles such as single event level granularity for data processing and virtual data. We will explain how the current architecture will evolve adiabatically into the future distributed computing system, through the prot...

  20. Upgrade of the CMS Event Builder

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s. By the time the LHC restarts after the 2013/14 shut-down, the current compute nodes and networking infrastructure will have reached the end of their lifetime. We present design studies for an upgrade of the CMS event builder based on advanced networking technologies such as 10 Gb/s Ethernet. We report on tests and performance measurements with small-scale test setups.
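
    For orientation, the two figures quoted above imply an average event size of about 1 MB; a one-line check (plain arithmetic, not a number taken from the presentation):

      # Average event size implied by the quoted DAQ figures.
      aggregate_throughput = 100e9      # bytes per second (100 GB/s)
      event_rate = 100e3                # events per second (100 kHz)
      print(aggregate_throughput / event_rate)   # -> 1e6 bytes, i.e. about 1 MB per event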

  1. 11 March 2009 - Italian Minister of Education, University and Research M. Gelmini, visiting ATLAS and CMS underground experimental areas and LHC tunnel with Director for Research and Scientific Computing S. Bertolucci. Signature of the guest book with CERN Director-General R. Heuer and S. Bertolucci at CMS Point 5.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    Members of the Ministerial delegation: Cons. Amb. Sebastiano FULCI, Diplomatic Advisor; Dott.ssa Elisa GREGORINI, Private Secretary to the Minister; Dott. Massimo ZENNARO, Press Officer; Prof. Roberto PETRONZIO, President of INFN (Istituto Nazionale di Fisica Nucleare); Dott. Luciano CRISCUOLI, Director General for Research, MIUR; Dott. Andrea MARINONI, Scientific Advisor to the Minister. CERN delegation present throughout the programme: Prof. Sergio Bertolucci, Director for Research and Scientific Computing; Prof. Fabiola Gianotti, ATLAS Collaboration Spokesperson; Prof. Paolo Giubellino, ALICE Deputy Spokesperson, Universita & INFN, Torino; Prof. Guido Tonelli, CMS Collaboration Deputy Spokesperson, INFN Pisa; Dr Monica Pepe-Altarelli, LHCb Collaboration CERN Team Leader. Guests in the ATLAS exhibition area: Dr Marcello Givoletti, President of CAEN; Dr Davide Malacalza, President of ASG Ansaldo Superconductors; and users: Prof. Clara Matteuzzi, LHCb Collaboration, Universita' d...

  2. Prototype for a generic thin-client remote analysis environment for CMS

    International Nuclear Information System (INIS)

    Steenberg, C.D.; Bunn, J.J.; Hickey, T.M.; Holtman, K.; Legrand, I.; Litvin, V.; Newman, H.B.; Samar, A.; Singh, S.; Wilkinson, R.

    2001-01-01

    The multi-tiered architecture of the highly-distributed CMS computing systems necessitates a flexible data distribution and analysis environment. The authors describe a prototype analysis environment which functions efficiently over wide area networks using a server installed at the Caltech/UCSD Tier 2 prototype to analyze CMS data stored at various locations using a thin client. The analysis environment is based on existing HEP (Anaphe) and CMS (CARF, ORCA, IGUANA) software technology on the server accessed from a variety of clients. A Java Analysis Studio (JAS, from SLAC) plug-in is being developed as a reference client. The server is operated as a 'black box' on the proto-Tier2 system. ORCA Objectivity databases (e.g. an existing large CMS Muon sample) are hosted on the master and slave nodes, and remote clients can request processing of queries across the server nodes, and get the histogram results returned and rendered in the client. The server is implemented using pure C++, and uses XML-RPC as a language-neutral transport. This has several benefits, including much better scalability, better integration with CARF-ORCA, and importantly, makes the work directly useful to other non-Java general-purpose analysis and presentation tools such as Hippodraw, Lizard, or ROOT
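
    As a generic illustration of the thin-client pattern (not the CARF/ORCA or JAS code; the histogram service, port and binning are invented for the sketch), a language-neutral XML-RPC server can expose a query that a lightweight client calls over the network:

      # Minimal sketch of a thin-client analysis call over XML-RPC (illustrative only;
      # the hypothetical "histogram" service stands in for the real analysis back end).
      from xmlrpc.server import SimpleXMLRPCServer
      from xmlrpc.client import ServerProxy
      import threading

      def histogram(values, nbins, lo, hi):
          # bin a list of numbers on the server and return only the bin counts
          counts = [0] * nbins
          width = (hi - lo) / nbins
          for v in values:
              if lo <= v < hi:
                  counts[int((v - lo) / width)] += 1
          return counts

      server = SimpleXMLRPCServer(("localhost", 8765), allow_none=True, logRequests=False)
      server.register_function(histogram, "histogram")
      threading.Thread(target=server.serve_forever, daemon=True).start()

      # Thin client: only needs the URL and the call signature, no local analysis framework.
      client = ServerProxy("http://localhost:8765")
      print(client.histogram([0.1, 0.4, 0.4, 0.9], 2, 0.0, 1.0))   # -> [3, 1]

    Because the transport is language-neutral, the same endpoint could equally be called from C++ or Java clients, which is the property the prototype exploits to support a variety of analysis front ends.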

  3. The Latest from CMS

    CERN Multimedia

    2009-01-01

    CMS is on track to be ready for physics one month in advance of the LHC restart. The final installations are being completed and tests are being run to ensure that the experiment is as well prepared as possible to exploit sustained LHC operation throughout 2010. Physics week in Bologna, Italy, was a valuable time for CMS collaborators to discuss preparations for numerous physics analyses, as well as the performance of the detector during the recent data-taking period with cosmics (CRAFT 09). During this five-week exercise, more than 300 million cosmic events were recorded with the magnetic field on. This large data-set is being used to further improve the sub-detector alignment, calibration and performance whilst awaiting p-p collisions. Meanwhile, in the experimental cavern, Wolfram Zeuner, Deputy Technical Coordinator of CMS, reports "We are now very nearly closed up again. We are just doing the final clean-up work and are ready t...

  4. Track reconstruction in CMS high luminosity environment

    CERN Document Server

    AUTHOR|(CDS)2067159

    2016-01-01

    The CMS tracker is the largest silicon detector ever built, covering 200 square meters and providing an average of 14 high-precision measurements per track. Tracking is essential for the reconstruction of objects like jets, muons, electrons and tau leptons starting from the raw data from the silicon pixel and strip detectors. Track reconstruction is also widely used at trigger level as it improves object tagging and resolution. The CMS tracking code is organized in several levels, known as iterative steps, each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles resulting from secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and cleaning. Each subsequent step works on hits not yet associated to a reconstructed particle trajectory. The CMS tracking code is continuously evolving to make the reconstruction computing load compat...
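
    The iterative-step structure described here can be sketched schematically (the functions below are placeholders with toy logic, not CMSSW interfaces): each pass seeds from the remaining hits, runs pattern recognition and a fit, keeps the good tracks, and removes their hits before the next pass.

      # Schematic illustration of iterative tracking (placeholder logic, not CMSSW).
      def seed(hits):
          return [h for h in hits if h["layer"] <= 2]           # e.g. seeds from innermost layers

      def pattern_recognition(seed_hit, hits):
          return [h for h in hits if abs(h["phi"] - seed_hit["phi"]) < 0.05]

      def kalman_fit(candidate_hits):
          return {"hits": candidate_hits, "chi2": 1.0}          # stand-in for the real fit

      def good_track(track):
          return len(track["hits"]) >= 3 and track["chi2"] < 10.0

      def iterative_tracking(hits, n_iterations=3):
          remaining, tracks = list(hits), []
          for _ in range(n_iterations):                         # each iteration targets a track class
              for s in seed(remaining):
                  track = kalman_fit(pattern_recognition(s, remaining))
                  if good_track(track):
                      tracks.append(track)
                      remaining = [h for h in remaining if h not in track["hits"]]
          return tracks

      hits = [{"layer": l, "phi": 0.01 * l} for l in range(1, 6)]
      print(len(iterative_tracking(hits)))

    The key property illustrated is the hit removal after each accepted track: every later iteration works on a smaller hit collection, so it can afford looser selection criteria for its target class of trajectories.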

  5. Track reconstruction in CMS high luminosity environment

    CERN Document Server

    Goetzmann, Christophe

    2014-01-01

    The CMS tracker is the largest silicon detector ever built, covering 200 square meters and providing an average of 14 high-precision measurements per track. Tracking is essential for the reconstruction of objects like jets, muons, electrons and tau leptons starting from the raw data from the silicon pixel and strip detectors. Track reconstruction is also widely used at trigger level as it improves object tagging and resolution. The CMS tracking code is organized in several levels, known as iterative steps, each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles resulting from secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and cleaning. Each subsequent step works on hits not yet associated to a reconstructed particle trajectory. The CMS tracking code is continuously evolving to make the reconstruction computing load compat...

  6. Fast Performance Computing Model for Smart Distributed Power Systems

    Directory of Open Access Journals (Sweden)

    Umair Younas

    2017-06-01

    Full Text Available Plug-in Electric Vehicles (PEVs) are becoming a more prominent solution than fossil-fuel car technology due to their significant role in Greenhouse Gas (GHG) reduction, flexible storage, and ancillary service provision as a Distributed Generation (DG) resource in Vehicle to Grid (V2G) regulation mode. However, large-scale penetration of PEVs and the growing demand of energy-intensive Data Centers (DCs) bring undesirable higher load peaks in electricity demand and hence impose a supply-demand imbalance and threaten the reliability of the wholesale and retail power market. In order to overcome the aforementioned challenges, the proposed research considers a smart Distributed Power System (DPS) comprising conventional sources, renewable energy, V2G regulation, and flexible energy storage resources. Moreover, price- and incentive-based Demand Response (DR) programs are implemented to sustain the balance between net demand and the available generating resources in the DPS. In addition, we adopt a novel strategy to implement the computationally intensive jobs of the proposed DPS model, including incoming load profiles, V2G regulation, battery State of Charge (SOC) indication, and fast computation in a decision-based automated DR algorithm, using the Fast Performance Computing resources of the DCs. In response, the DPS provides economical and stable power to the DCs under strict power quality constraints. Finally, the improved results are verified using a case study of ISO California integrated with hybrid generation.
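
    A toy sketch of the balancing idea behind such a DR step (the capacities, the fixed dispatch order and the SOC handling are invented for illustration; this is not the paper's model): cover one interval's net demand from conventional generation first, then V2G discharge, then storage.

      # Toy demand-response balancing step (illustrative; all numbers are invented).
      def dispatch(net_demand_kw, conv_cap_kw, v2g_cap_kw, battery_soc_kwh, step_h=1.0):
          # return the (conventional, V2G, storage) power used to cover one interval
          conv = min(net_demand_kw, conv_cap_kw)
          residual = net_demand_kw - conv
          v2g = min(residual, v2g_cap_kw)
          residual -= v2g
          storage = min(residual, battery_soc_kwh / step_h)   # cannot draw below an empty battery
          return conv, v2g, storage

      print(dispatch(net_demand_kw=1200, conv_cap_kw=900, v2g_cap_kw=200, battery_soc_kwh=500))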

  7. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    Science.gov (United States)

    Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration

    2014-06-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort was invested in the standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements and the re-usability of the visualization bits across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with the separation of the data and visualization layers. With a variety of reliable monitoring data accessible through standardized interfaces, automating actions under well-defined conditions correlating multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.

  8. Physics with CMS and Electronic Upgrades

    Energy Technology Data Exchange (ETDEWEB)

    Rohlf, James W. [Boston Univ., MA (United States)

    2016-08-01

    The current funding is for continued work on the Compact Muon Solenoid (CMS) at the CERN Large Hadron Collider (LHC) as part of the Energy Frontier experimental program. The current budget year covers the first year of physics running at 13 TeV (Run 2). During this period we have concentrated on commissioning of the μTCA electronics, a new standard for the distribution of CMS trigger and timing control signals and for high-bandwidth data acquisition, as well as participating in Run 2 physics.

  9. Model of CMS Tracker

    CERN Multimedia

    Breuker

    1999-01-01

    A full scale CMS tracker mock-up exposed temporarily in the hall of building 40. The purpose of the mock-up is to study the routing of services, assembly and installation. The people in front are only a small fraction of the CMS tracker collaboration. Left to right : M. Atac, R. Castaldi, H. Breuker, D. Pandoulas,P. Petagna, A. Caner, A. Carraro, H. Postema, M. Oriunno, S. da Mota Silva, L. Van Lancker, W. Glessing, G. Benefice, A. Onnela, M. Gaspar, G. M. Bilei

  10. Automating the CMS DAQ

    International Nuclear Information System (INIS)

    Bauer, G; Darlea, G-L; Gomez-Ceballos, G; Bawej, T; Chaze, O; Coarasa, J A; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Gomez-Reino, R; Hartl, C; Hegeman, J; Masetti, L; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Erhan, S

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  11. CMS Comic Book

    CERN Document Server

    Gill, Karl Aaron

    2006-01-01

    Titled "CMS Particle Hunter," this colorful comic book style brochure explains to young budding scientists and science enthusiasts in colorful animation how the CMS detector was made, its main parts, and what scientists hope to find using this complex tool. Book invites young students to get involved in particle physics themselves to join the adventure. Written by Dave Barney and Aline Guevera. Layout and drawings by Eric Paiharey and Frederic Vignaux. Available in English, French, German, Italian, Spanish and Portuguese. Year Produced: 2006. Update: September 2013.

  12. Increasing efficiency of job execution with resource co-allocation in distributed computer systems

    OpenAIRE

    Cankar, Matija

    2014-01-01

    The field of distributed computer systems, while not new in computer science, is still the subject of a lot of interest in both industry and academia. More powerful computers, faster and more ubiquitous networks, and complex distributed applications are accelerating the growth of distributed computing. Large numbers of computers interconnected in a single network provide additional computing power to users whenever required. Such systems are, however, expensive and complex to manage, which ca...

  13. Integrating Xgrid into the HENP distributed computing model

    Energy Technology Data Exchange (ETDEWEB)

    Hajdu, L; Lauret, J [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kocoloski, A; Miller, M [Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)], E-mail: kocolosk@mit.edu

    2008-07-15

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortlessly within reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  14. Computational Intelligence based techniques for islanding detection of distributed generation in distribution network: A review

    International Nuclear Information System (INIS)

    Laghari, J.A.; Mokhlis, H.; Karimi, M.; Bakar, A.H.A.; Mohamad, Hasmaini

    2014-01-01

    Highlights: • Unintentional and intentional islanding, their causes, and solutions are presented. • Remote, passive, active and hybrid islanding detection techniques are discussed. • The limitations of these techniques in accurately detecting islanding are discussed. • The ability of computational intelligence techniques to detect islanding is discussed. • A review of ANN, fuzzy logic control, ANFIS and decision tree techniques is provided. - Abstract: Accurate and fast islanding detection of distributed generation is highly important for its successful operation in distribution networks. Up to now, various islanding detection techniques based on communication, passive, active and hybrid methods have been proposed. However, each technique suffers from certain demerits that cause inaccuracies in islanding detection. Computational intelligence based techniques, due to their robustness and flexibility in dealing with complex nonlinear systems, are an option that might solve this problem. This paper aims to provide a comprehensive review of computational intelligence based techniques applied to islanding detection of distributed generation. Moreover, the paper compares the accuracies of computational intelligence based techniques with those of existing techniques, to provide useful information for industry and utility researchers to determine the best method for their respective systems

  15. Computational scheme for transient temperature distribution in PWR vessel wall

    International Nuclear Information System (INIS)

    Dedovic, S.; Ristic, P.

    1980-01-01

    The computer code TEMPNES is part of a joint effort at Gosa Industries to develop techniques for the structural analysis of heavy pressure vessels. The analysis of transient heat conduction problems is based on finite element discretization of the structure, a non-linear transient matrix formulation, and the step-by-step time integration scheme developed by Wilson. Convection boundary conditions and the effect of heat generation due to radioactive radiation are both considered. The computation of transient temperature distributions in the reactor vessel wall when the water temperature suddenly drops as a consequence of a reactor cooling pump failure is presented. The vessel is treated as an axisymmetric body of revolution. The program has two time-increment options: (a) a fixed predetermined increment, and (b) an automatically optimized time increment for each step, dependent on the rate of change of the nodal temperatures. (author)
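
    The step-by-step integration and the adaptive increment of option (b) can be illustrated with a generic theta-scheme for the semi-discrete heat equation C dT/dt + K T = F (a textbook sketch with invented two-node matrices, not the TEMPNES or Wilson implementation):

      # Generic theta-scheme time stepping for C dT/dt + K T = F with a crude
      # adaptive increment (illustration only, not the TEMPNES/Wilson code).
      import numpy as np

      def step(T, dt, C, K, F, theta=0.5):
          A = C / dt + theta * K
          b = (C / dt - (1.0 - theta) * K) @ T + F
          return np.linalg.solve(A, b)

      C = np.eye(2)                              # heat capacity matrix (invented)
      K = np.array([[2.0, -1.0], [-1.0, 2.0]])   # conductivity matrix (invented)
      F = np.zeros(2)                            # no internal heat generation
      T = np.array([500.0, 300.0])               # initial nodal temperatures
      t, dt = 0.0, 0.01

      while t < 1.0:
          T_new = step(T, dt, C, K, F)
          t += dt
          # option (b): choose the next increment from the rate of change of nodal temperatures
          rate = np.max(np.abs(T_new - T)) / dt
          dt = max(1e-3, min(0.1, 0.05 / (rate + 1e-12)))
          T = T_new
      print(round(t, 3), T)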

  16. Implementation of a model-independent search for new physics with the CMS detector exploiting the world-wide LHC Computing Grid

    CERN Document Server

    Hof, Carsten

    With this year's start of CERN's Large Hadron Collider (LHC) it will be possible for the first time to directly probe the physics at the TeV-scale at a collider experiment. At this scale the Standard Model of particle physics will reach its limits and new physical phenomena are expected to appear. This study performed with one of the LHC's experiments, namely the Compact Muon Solenoid (CMS), is trying to quantify the understanding of the Standard Model and is hunting for deviations from the expectation by investigating a large fraction of the CMS data. While the classical approach for searches of physics beyond the Standard Model assumes a specific theoretical model and tries to isolate events with a certain signature characteristic for the new theory, this thesis follows a model-independent approach. The method relies only on the knowledge of the Standard Model and is suitable to spot deviations from this model induced by particular theoretical models but also theories not yet thought of. Future data are to ...

  17. Storm blueprints patterns for distributed real-time computation

    CERN Document Server

    Goetz, P Taylor

    2014-01-01

    A blueprints book with 10 different projects built in 10 different chapters which demonstrate the various use cases of Storm for both beginner and intermediate users, grounded in real-world example applications. Although the book focuses primarily on Java development with Storm, the patterns are more broadly applicable and the tips, techniques, and approaches described in the book apply to architects, developers, and operations. Additionally, the book should provoke and inspire applications of distributed computing to other industries and domains. Hadoop enthusiasts will also find this book a go

  18. Scalable error correction in distributed ion trap computers

    International Nuclear Information System (INIS)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiment

  19. Job monitoring on DIRAC for Belle II distributed computing

    Science.gov (United States)

    Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo

    2015-12-01

    We developed a monitoring system for Belle II distributed computing, which consists of active and passive methods. In this paper we describe the passive monitoring system, where information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables which indicate issues. These variables are chosen carefully based on our experiences, then visualized. As a result, we are able to effectively detect issues. Finally, we discuss the future development for automating log analysis, notification of issues, and disabling problematic sites.

  20. Enabling Computational Dynamics in Distributed Computing Environments Using a Heterogeneous Computing Template

    Science.gov (United States)

    2011-08-09

    The template builds on the heterogeneous computing concept advertised recently as the paradigm capable of delivering exascale flop rates by the end of the decade. In this framework...

  1. CERN Researchers' Night @ CMS + TOTEM

    CERN Multimedia

    Hoch, Michael

    2011-01-01

    Young researchers' shifter training at CMS: • introduction talk with discussion • CMS control room, shadowing the shifters • TOTEM control room introduction and discussion • scientific poster workshop and presentation • Science Art installations ‘Faces of CMS’ & ‘Science Cloud’ • CMS shift diploma presentation

  2. Final descent for CMS

    CERN Multimedia

    The 15th and last section of the CMS detector was lowered on Tuesday 22 January. The YE-1 endcap (1430 tonnes) began its 100-metre descent at 7 am and arrived gently on the floor of the experiment hall at 5.30 pm.

  3. Exclusive Production at CMS

    CERN Document Server

    Walczak, Marek

    2016-01-01

    I briefly introduce so-called central exclusive production. I mainly focus on the example analyses that have been performed in the CMS experiment at CERN. I conclude with ideas and perspectives for future work that will be done during Run 2 of the LHC. I pay special attention to the ultraperipheral collisions.

  4. Exotica in CMS

    CERN Document Server

    AUTHOR|(CDS)2072123

    2015-01-01

    Selected results on exotica searches with the CMS detector are presented. The main topics are dark matter, boosted objects, long-lived particles and classic narrow resonance searches. Most of the analyses were performed with data recorded at a centre-of-mass energy of 8 TeV, but first results obtained at 13 TeV are also shown.

  5. CMS SEES FIRST COLLISIONS

    CERN Multimedia

    A very special moment.  On 23rd November, 19:40 we recorded our first collisions with 450 GeV beams well centred in CMS.

  6. New Management for CMS

    CERN Document Server

    CERN Bulletin

    2010-01-01

    As of January 2010, Guido Tonelli becomes the new CMS Spokesperson with a two-year term of office. A Professor of General Physics at the University of Pisa, Italy, and a CERN Staff Member since January 2010, Tonelli had already been appointed as Deputy Spokesperson under the previous management. He has taken over from Jim Virdee, who was CMS Spokesperson from January 2007 to December 2009. (Photo caption: Guido Tonelli, new CMS Spokesperson.) At the same time as Tonelli becomes Spokesperson, two new Deputies, Albert De Roeck and Joe Incandela, as well as a whole new set of Coordinators, are also starting their terms of office. "With the first data-taking run we have shown that CMS is an excellent experiment. The next challenge will be to transform CMS into a discovery machine with a view to making it synonymous with scientific excellence. This will be very tough but, again, the winning element will be the focus and coherent effort of the whole collaboration. On my side I'll do my best but I will need...

  7. CMS Achieves New Milestone

    CERN Multimedia

    2012-01-01

    In a year highlighted by the discovery of a new, Higgs-like boson, we must remember that CMS has had a tremendous year overall, with many physics results that have pushed our envelope of knowledge further. As of this week, we have published 200 papers. Congratulations to everyone involved!

  8. Standard Model Higgs decay for two Photons in CMS

    CERN Multimedia

    Daniel Denegri

    2000-01-01

    Simulated two-photon mass distribution for SM Higgs and expected background in the CMS PbWO4 crystal calorimeter for an integrated luminosity of 10^5 pb^-1, with detailed simulation of calorimeter response.

  9. KeyWare: an open wireless distributed computing environment

    Science.gov (United States)

    Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir

    1995-12-01

    The deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist for LAN-based applications. A wireless distributed computing environment (KeyWareTM) based on intelligent agents within a multiple client multiple server scheme was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput and transmission costs. A unified network management paradigm for both wireless and wireline networks facilitates seamless extensions of LAN-based management tools to include wireless nodes. A set of object oriented tools and methodologies enables direct asynchronous invocation of agent-based services supplemented by tool-sets matched to supported KeyWare paradigms. The open architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems and infrastructures while maintaining application portability.

  10. DISTRIBUTED GENERATION OF COMPUTER MUSIC IN THE INTERNET OF THINGS

    Directory of Open Access Journals (Sweden)

    G. G. Rogozinsky

    2015-07-01

    Full Text Available Problem Statement. The paper deals with a distributed intelligent multi-agent system for computer music generation. A mathematical model for data extraction from the environment and its application in the music generation process is proposed. Methods. We use the Resource Description Framework for the representation of timbre data. The special musical programming language Csound is used for the synthesis and sound-processing subsystem. Sound generation occurs according to the parameters of the compositional model, which takes its data from the outside world. Results. We propose an architecture for a potential distributed system for computer music generation. An example of core sound synthesis is presented. We also propose a method for mapping real-world parameters onto the plane of the compositional model, in an attempt to imitate elements and aspects of creative inspiration. The music generation system was exhibited as an artifact in the A.S. Popov Central Museum of Communication as part of the «Night of Museums» event. In the course of this public experiment it was observed that, on the whole, the system tends to settle quickly into a neutral state in which no musical events are generated. This demonstrates the need for algorithms that keep the agent network in an active state. Practical Relevance. Realization of the proposed system would make it possible to create a technological platform for a whole new class of applications, including augmented acoustic reality and algorithmic composition.

  11. A uniform approach for programming distributed heterogeneous computing systems.

    Science.gov (United States)

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.

  12. SiteDB: Marshalling people and resources available to CMS

    Energy Technology Data Exchange (ETDEWEB)

    Metson, S [H.H. Wills Physics Laboratory, Bristol (United Kingdom); Bonacorsi, D [University of Bologna and INFN Bologna (Italy); Ferreira, M Dias [SPRACE (Brazil); Egeland, R [University of Minnesota, Twin Cities (United States)

    2010-04-01

    In a collaboration the size of CMS (approx. 3000 users, and almost 100 computing centres of varying size) communication and accurate information about the sites it has access to are vital in co-ordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track sites available to the collaboration, the allocation to CMS of resources available at those sites and the associations between CMS members and the sites (as either a manager/operator of the site or a member of a group associated to the site). It is used to track the roles a person has for an associated site or group. SiteDB eases the coordination load for the operations teams by providing a consistent interface to manage communication with the people working at a site, by identifying who is responsible for a given task or service at a site and by offering a uniform interface to information on CMS contacts and sites. SiteDB provides APIs and reports for other CMS tools to use to access the information it contains, for instance enabling CRAB to use 'user friendly' names when black/white-listing CEs, providing role-based authentication and authorisation for other web-based services, and populating various troubleshooting squads in external ticketing systems in daily use by CMS Computing operations.

  13. SiteDB: Marshalling people and resources available to CMS

    International Nuclear Information System (INIS)

    Metson, S; Bonacorsi, D; Ferreira, M Dias; Egeland, R

    2010-01-01

    In a collaboration the size of CMS (approx. 3000 users, and almost 100 computing centres of varying size) communication and accurate information about the sites it has access to are vital in co-ordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track sites available to the collaboration, the allocation to CMS of resources available at those sites and the associations between CMS members and the sites (as either a manager/operator of the site or a member of a group associated to the site). It is used to track the roles a person has for an associated site or group. SiteDB eases the coordination load for the operations teams by providing a consistent interface to manage communication with the people working at a site, by identifying who is responsible for a given task or service at a site and by offering a uniform interface to information on CMS contacts and sites. SiteDB provides APIs and reports for other CMS tools to use to access the information it contains, for instance enabling CRAB to use 'user friendly' names when black/white-listing CEs, providing role-based authentication and authorisation for other web-based services, and populating various troubleshooting squads in external ticketing systems in daily use by CMS Computing operations.

  14. Distributing the computation in combinatorial optimization experiments over the cloud

    Directory of Open Access Journals (Sweden)

    Mario Brcic

    2017-12-01

    Full Text Available Combinatorial optimization is an area of great importance since many real-world problems have discrete parameters which are part of the objective function to be optimized. The development of combinatorial optimization algorithms is guided by the empirical study of candidate ideas and their performance over a wide range of settings or scenarios in order to infer general conclusions. The number of scenarios can be overwhelming, especially when modeling uncertainty in some of the problem’s parameters. Since the process is also iterative and many ideas and hypotheses may be tested, the execution time of each experiment plays an important role in efficiency and success. The structure of such experiments allows significant execution-time improvement by distributing the computation. We focus on cloud computing as a cost-efficient solution in these circumstances. In this paper we present a system for validating and comparing stochastic combinatorial optimization algorithms. The system also deals with the selection of the optimal settings for computational nodes and the number of nodes in terms of the performance-cost tradeoff. We present applications of the system to a new class of project scheduling problem. We show that we can optimize the selection over cloud service providers as one of the settings and that, according to the model, this results in substantial cost savings while meeting the deadline.
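
    The performance-cost tradeoff mentioned above can be illustrated with a toy selection routine (all prices, overheads and node limits are invented, and the runtime model is deliberately simplistic; the real system models this far more carefully): choose the cheapest provider and node count that still finish the scenario batch before the deadline.

      # Toy cost-vs-deadline selection of a cloud configuration (all numbers invented).
      STARTUP_HOURS = 0.25   # hypothetical per-node provisioning overhead

      def cheapest_config(n_scenarios, scenario_hours, deadline_hours, offers):
          # offers: list of (provider, price_per_node_hour, max_nodes); returns the
          # cheapest (provider, nodes, wall_time, cost) meeting the deadline, or None
          best = None
          for provider, price, max_nodes in offers:
              for nodes in range(1, max_nodes + 1):
                  wall_time = (n_scenarios * scenario_hours) / nodes + STARTUP_HOURS
                  if wall_time > deadline_hours:
                      continue                      # too slow, try more nodes
                  cost = nodes * wall_time * price  # billed node-hours, overhead included
                  if best is None or cost < best[-1]:
                      best = (provider, nodes, wall_time, cost)
                  break                             # smallest feasible node count is cheapest here
          return best

      offers = [("provider-A", 0.10, 64), ("provider-B", 0.08, 16)]
      print(cheapest_config(n_scenarios=200, scenario_hours=0.5, deadline_hours=8, offers=offers))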

  15. Integration of End-User Cloud Storage for CMS Analysis

    CERN Document Server

    Riahi, Hassen; Álvarez Ayllón, Alejandro; Balcas, Justas; Ciangottini, Diego; Hernández, José M; Keeble, Oliver; Magini, Nicolò; Manzi, Andrea; Mascetti, Luca; Mascheroni, Marco; Tanasijczuk, Andres Jorge; Vaandering, Eric Wayne

    2018-01-01

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of the end-user Cloud storage for the distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, which is implemented and commissioned over the world’s largest computing Grid infrastructure, Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with...

  16. CMS General Poster 2009 : to raise awareness of CMS, the CMS detector, its parts and people

    CERN Multimedia

    CMS outreach

    2012-01-01

    A poster which is identical to the two inside pages of the CMS brochure. The poster contains an image of a cross section of the CMS detector, explanation of detector parts, the aims of the CMS experiment and numbers of scientists and institutions associated with the experiment.

  17. Pervasive Computing, Privacy and Distribution of the Self

    Directory of Open Access Journals (Sweden)

    Soraj Hongladarom

    2011-05-01

    Full Text Available The emergence of what is commonly known as “ambient intelligence” or “ubiquitous computing” means that our conception of privacy and trust needs to be reconsidered. Many have voiced their concerns about the threat to privacy and the more prominent role of trust that have been brought about by emerging technologies. In this paper, I will present an investigation of what this means for the self and identity in our ambient intelligence environment. Since information about oneself can be actively distributed and processed, it is proposed that in a significant sense it is the self itself that is distributed throughout a pervasive or ubiquitous computing network when information pertaining to the self of the individual travels through the network. Hence privacy protection needs to be extended to all types of information distributed. It is also recommended that appropriately strong legislation on privacy and data protection regarding this pervasive network is necessary, but at present not sufficient, to ensure public trust. What is needed is a campaign on public awareness and positive perception of the technology.

  18. 42 CFR 405.874 - Appeals of CMS or a CMS contractor.

    Science.gov (United States)

    2010-10-01

    From the Code of Federal Regulations, Title 42 (Public Health), edition of 2010-10-01: § 405.874, Appeals of CMS or a CMS contractor (here CMS is the Centers for Medicare and Medicaid Services), under the Medicare Part B Program, covering appeals when CMS or a CMS contractor (that is, a carrier) denies a provider's or supplier's enrollment application.

  19. Classification of bacterial contamination using image processing and distributed computing.

    Science.gov (United States)

    Ahmed, W M; Bayraktar, B; Bhunia, A; Hirleman, E D; Robinson, J P; Rajwa, B

    2013-01-01

    Disease outbreaks due to contaminated food are a major concern not only for the food-processing industry but also for the public at large. Techniques for automated detection and classification of microorganisms can be a great help in preventing outbreaks and maintaining the safety of the nation's food supply. Identification and classification of foodborne pathogens using colony scatter patterns is a promising new label-free technique that utilizes image-analysis and machine-learning tools. However, the feature-extraction tools employed for this approach are computationally complex, and choosing the right combination of scatter-related features requires extensive testing with different feature combinations. In the presented work we used computer clusters to speed up the feature-extraction process, which enabled us to analyze the contribution of different scatter-based features to the overall classification accuracy. A set of 1000 scatter patterns representing ten different bacterial strains was used. Zernike and Chebyshev moments as well as Haralick texture features were computed from the available light-scatter patterns. The most promising features were first selected using Fisher's discriminant analysis, and subsequently a support-vector-machine (SVM) classifier with a linear kernel was used. With extensive testing we were able to identify a small subset of features that produced the desired results in terms of classification accuracy and execution speed. The use of distributed computing for scatter-pattern analysis, feature extraction, and selection provides a feasible mechanism for large-scale deployment of a light scatter-based approach to bacterial classification.
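
    The selection-plus-classification step described here can be sketched with scikit-learn on synthetic stand-in data (the feature matrix below is random, with a few artificially informative columns; a univariate ANOVA F-test is used as a stand-in for the Fisher-style ranking):

      # Feature ranking + linear SVM, mirroring the described pipeline (synthetic data).
      import numpy as np
      from sklearn.feature_selection import SelectKBest, f_classif   # F-test as a Fisher-style ranking
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 60))          # 1000 patterns x 60 moment/texture-like features
      y = rng.integers(0, 10, size=1000)       # 10 bacterial strains (labels)
      X[:, :5] += y[:, None] * 0.5             # make a few features informative

      clf = make_pipeline(SelectKBest(f_classif, k=10), SVC(kernel="linear"))
      print(cross_val_score(clf, X, y, cv=5).mean())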

  20. CMS tier structure and operation of the experiment-specific tasks in Germany

    International Nuclear Information System (INIS)

    Nowack, A

    2008-01-01

    In Germany, several university institutes and research centres take part in the CMS experiment. Concerning the data analysis, a number of computing centres at different Tier levels, ranging from Tier 1 to Tier 3, exist at these places. The German Tier 1 centre GridKa at the research centre at Karlsruhe serves all four LHC experiments as well as four non-LHC experiments. With respect to the CMS experiment, GridKa is mainly involved in central tasks. The Tier 2 centre in Germany consists of two sites, one at the research centre DESY at Hamburg and one at RWTH Aachen University, forming a federated Tier 2 centre. Both parts cover different aspects of a Tier 2 centre. The German Tier 3 centres are located at the research centre DESY at Hamburg, at RWTH Aachen University, and at the University of Karlsruhe. Furthermore, the building of a German user analysis facility is planned. Since the CMS community in Germany is rather small, a good cooperation between the different sites is essential. This cooperation includes physics topics as well as technical and operational issues. All available communication channels such as email, phone, monthly video conferences, and regular personal meetings are used. For example, the distribution of data sets is coordinated globally within Germany. Also the CMS-specific services such as the data transfer tool PhEDEx or the Monte Carlo production are operated by people from different sites in order to spread the knowledge widely and increase the redundancy in terms of operators

  1. CMS users data management service integration and first experiences with its NoSQL data storage

    CERN Document Server

    Riahi, H; Cinquilli, M; Hernandez, J M; Konstantinov, P; Mascheroni, M; Santocchia, A

    2014-01-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location to where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This service was originally developed to address the inefficient use of CMS computing resources caused by transferring analysis job outputs synchronously to the remote site as soon as they are produced on the job execution node. The AsyncStageOut is designed as a thin application relying only on a NoSQL database (CouchDB) for its input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all the steps of the users' files, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/Atlas Analysis Framework. It foresees the management of nearly 200k user files per day from close to 1000 individual users per month with minimal delays, and provides real-time monitoring and repor...
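
    As an illustration of the thin-application-over-CouchDB design (the document schema, database name and endpoint below are invented, and a local unauthenticated CouchDB instance is assumed; this is not the actual AsyncStageOut data model), a transfer task can be a small JSON document whose state a polling daemon advances:

      # Sketch: a transfer task queued as a CouchDB document via its HTTP API
      # (invented schema; assumes CouchDB at localhost:5984 accepting these requests).
      import requests

      COUCH = "http://localhost:5984"
      DB = "asynctransfer_demo"

      requests.put(f"{COUCH}/{DB}")        # create the database if it does not exist

      task = {
          "user": "jdoe",                                       # hypothetical user
          "source_lfn": "/store/temp/user/jdoe/output_1.root",  # hypothetical file
          "destination_site": "T2_Example_Site",                # hypothetical site
          "state": "new",                                       # new -> acquired -> done/failed
      }
      requests.put(f"{COUCH}/{DB}/transfer-0001", json=task)

      # A transfer daemon would poll for documents with state "new", perform the copy
      # and the publication step, and update the document to record progress.
      print(requests.get(f"{COUCH}/{DB}/transfer-0001").json()["state"])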

  2. CMS Data Transfer operations after the first years of LHC collisions

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The CMS experiment operates a distributed computing infrastructure, and its performance depends heavily on the fast and smooth distribution of data between the different CMS sites. Data must be transferred from the Tier-0 (CERN) to the Tier-1 sites for storage and archiving, and timeliness and good transfer quality are vital to avoid overflowing the CERN storage buffers. At the same time, processed data have to be distributed from Tier-1 sites to all Tier-2 sites for physics analysis, while Monte Carlo simulations are synchronized back to Tier-1 sites for further archival. At the core of all the transfer machinery is the PhEDEx (Physics Experiment Data Export) data transfer system. It is very important to ensure reliable operation of the system, and the operational tasks comprise monitoring and debugging all transfer issues. Based on transfer quality information, the Site Readiness tool is used to plan future resource utilization. We review the operational procedures created to enforce reliable data delivery to CMS distributed sites all ov...

  3. Cost effective distributed computing for Monte Carlo radiation dosimetry

    International Nuclear Information System (INIS)

    Wise, K.N.; Webb, D.V.

    2000-01-01

    Full text: An inexpensive computing facility has been established for performing repetitive Monte Carlo simulations with the BEAM and EGS4/EGSnrc codes of linear accelerator beams, for calculating effective dose from diagnostic imaging procedures and of ion chambers and phantoms used for the Australian high energy absorbed dose standards. The facility currently consists of 3 dual-processor 450 MHz processor PCs linked by a high speed LAN. The 3 PCs can be accessed either locally from a single keyboard/monitor/mouse combination using a SwitchView controller or remotely via a computer network from PCs with suitable communications software (e.g. Telnet, Kermit etc). All 3 PCs are identically configured to have the Red Hat Linux 6.0 operating system. A Fortran compiler and the BEAM and EGS4/EGSnrc codes are available on the 3 PCs. The preparation of sequences of jobs utilising the Monte Carlo codes is simplified using load-distributing software (enFuzion 6.0 marketed by TurboLinux Inc, formerly Cluster from Active Tools) which efficiently distributes the computing load amongst all 6 processors. We describe 3 applications of the system - (a) energy spectra from radiotherapy sources, (b) mean mass-energy absorption coefficients and stopping powers for absolute absorbed dose standards and (c) dosimetry for diagnostic procedures; (a) and (b) are based on the transport codes BEAM and FLURZnrc while (c) is a Fortran/EGS code developed at ARPANSA. Efficiency gains ranged from 3 for (c) to close to the theoretical maximum of 6 for (a) and (b), with the gain depending on the amount of 'bookkeeping' to begin each task and the time taken to complete a single task. We have found the use of a load-balancing batch processing system with many PCs to be an economical way of achieving greater productivity for Monte Carlo calculations or of any computer intensive task requiring many runs with different parameters. Copyright (2000) Australasian College of Physical Scientists and

  4. Context-aware distributed cloud computing using CloudScheduler

    Science.gov (United States)

    Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.

    2017-10-01

    The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest location of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O application on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.

  5. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    CERN Document Server

    Andrade, Pedro; Bhatt, Kislay; Chand, Phool; Collados, David; Duggal, Vibhuti; Fuente, Paloma; Hayashi, Soichi; Imamagic, Emir; Joshi, Pradyumna; Kalmady, Rajesh; Karnani, Urvashi; Kumar, Vaibhav; Lapka, Wojciech; Quick, Robert; Tarragon, Jacobo; Teige, Scott; Triantafyllidis, Christos

    2012-01-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO managers, service managers, management), from different middleware providers (ARC, dCache, gLite, UNICORE and VDT), consortiums (WLCG, EMI, EGI, OSG), and operational teams (GOC, OMB, OTAG, CSIRT). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG portal where it is exposed to other clients. This monitoring workflow profits from the i...

  6. 13th International Conference on Distributed Computing and Artificial Intelligence

    CERN Document Server

    Silvestri, Marcello; González, Sara

    2016-01-01

    The special session Decision Economics (DECON) 2016 is a scientific forum by which to share ideas, projects, research results, models and experiences associated with the complexity of behavioral decision processes aiming at explaining socio-economic phenomena. DECON 2016 was held at the University of Seville, Spain, as part of the 13th International Conference on Distributed Computing and Artificial Intelligence (DCAI) 2016. In the tradition of Herbert A. Simon’s interdisciplinary legacy, this book dedicates itself to the interdisciplinary study of decision-making in the recognition that relevant decision-making takes place in a range of critical subject areas and research fields, including economics, finance, information systems, small and international business, management, operations, and production. Decision-making issues are of crucial importance in economics. Not surprisingly, the study of decision-making has received growing empirical research effort in the applied economic literature over the last ...

  7. Evaluation of Secure Computation in a Distributed Healthcare Setting.

    Science.gov (United States)

    Kimura, Eizen; Hamada, Koki; Kikuchi, Ryo; Chida, Koji; Okamoto, Kazuya; Manabe, Shirou; Kuroda, Tomohiko; Matsumura, Yasushi; Takeda, Toshihiro; Mihara, Naoki

    2016-01-01

    Issues related to ensuring patient privacy and data ownership in clinical repositories prevent the growth of translational research. Previous studies have used an aggregator agent to obscure clinical repositories from the data user, and to ensure the privacy of output using statistical disclosure control. However, there remain several issues that must be considered. One such issue is that a data breach may occur when multiple nodes conspire. Another is that the agent may eavesdrop on or leak a user's queries and their results. We have implemented a secure computing method so that the data used by each party can be kept confidential even if all of the other parties conspire to crack the data. We deployed our implementation at three geographically distributed nodes connected to a high-speed layer two network. The performance of our method, with respect to processing times, suggests suitability for practical use.
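
    As a toy illustration of the general idea (each party's data stays confidential even if the others collude, and only the agreed output is revealed), the sketch below uses additive secret sharing over a prime field to compute a sum held across three nodes. This is a generic secure-computation primitive chosen for illustration, not the specific protocol evaluated in the study.

        # Toy additive secret sharing: each holder splits its private value into
        # three random shares; no node (nor any pair of nodes) learns a value,
        # yet the sum of all values can be reconstructed exactly.
        import secrets

        PRIME = 2_147_483_647  # field modulus chosen for the sketch

        def share(value, n_parties=3):
            """Split value into n_parties additive shares modulo PRIME."""
            shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % PRIME)
            return shares

        if __name__ == "__main__":
            private_values = [120, 45, 98]          # e.g. per-site patient counts
            all_shares = [share(v) for v in private_values]

            # Node i sums the i-th share of every value; it never sees raw data.
            node_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(3)]

            # Combining the per-node sums reveals only the aggregate.
            print("secure sum:", sum(node_sums) % PRIME,
                  "expected:", sum(private_values))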

  8. Distributed and multi-core computation of 2-loop integrals

    International Nuclear Information System (INIS)

    De Doncker, E; Yuasa, F

    2014-01-01

    For an automatic computation of Feynman loop integrals in the physical region we rely on an extrapolation technique where the integrals of the sequence are obtained with iterated/repeated adaptive methods from the QUADPACK 1D quadrature package. The integration rule evaluations in the outer level, corresponding to independent inner integral approximations, are assigned to threads dynamically via the OpenMP runtime in the parallel implementation. Furthermore, multi-level (nested) parallelism enables an efficient utilization of hyperthreading or larger numbers of cores. For a class of loop integrals in the unphysical region, which do not suffer from singularities in the interior of the integration domain, we find that the distributed adaptive integration methods in the multivariate PARINT package are highly efficient and accurate. We apply these techniques without resorting to integral transformations and report on the capabilities of the algorithms and the parallel performance for a test set including various types of two-loop integrals
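
    A minimal sketch of the iterated-integration pattern described above: the outer quadrature points are evaluated in parallel, and each evaluation performs its own inner one-dimensional quadrature. The smooth test integrand and the fixed midpoint rules are simple stand-ins for the QUADPACK/PARINT adaptive machinery.

        # Nested (iterated) quadrature with the outer evaluations run in parallel,
        # analogous to assigning outer-rule evaluations to threads. The integrand
        # exp(-(x^2 + y^2)) over the unit square is used only as a test case.
        import math
        from concurrent.futures import ProcessPoolExecutor

        def inner_integral(x, n=200):
            """Inner 1D midpoint rule over y in [0, 1] for a fixed outer point x."""
            h = 1.0 / n
            return sum(math.exp(-(x * x + y * y)) * h
                       for y in (h * (j + 0.5) for j in range(n)))

        def outer_integral(n_outer=200, workers=4):
            """Outer 1D midpoint rule over x in [0, 1]; each point needs an inner integral."""
            h = 1.0 / n_outer
            xs = [h * (i + 0.5) for i in range(n_outer)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return sum(v * h for v in pool.map(inner_integral, xs))

        if __name__ == "__main__":
            print("integral over the unit square ~", round(outer_integral(), 6))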

  9. Computational optimization of catalyst distributions at the nano-scale

    International Nuclear Information System (INIS)

    Ström, Henrik

    2017-01-01

    Highlights: • Macroscopic data sampled from a DSMC simulation contain statistical scatter. • Simulated annealing is evaluated as an optimization algorithm with DSMC. • Proposed method is more robust than a gradient search method. • Objective function uses the mass transfer rate instead of the reaction rate. • Combined algorithm is more efficient than a macroscopic overlay method. - Abstract: Catalysis is a key phenomenon in a great number of energy processes, including feedstock conversion, tar cracking, emission abatement and optimizations of energy use. Within heterogeneous, catalytic nano-scale systems, the chemical reactions typically proceed at very high rates at a gas–solid interface. However, the statistical uncertainties characteristic of molecular processes pose efficiency problems for computational optimizations of such nano-scale systems. The present work investigates the performance of a Direct Simulation Monte Carlo (DSMC) code with a stochastic optimization heuristic for evaluations of an optimal catalyst distribution. The DSMC code treats molecular motion with homogeneous and heterogeneous chemical reactions in wall-bounded systems and algorithms have been devised that allow optimization of the distribution of a catalytically active material within a three-dimensional duct (e.g. a pore). The objective function is the outlet concentration of computational molecules that have interacted with the catalytically active surface, and the optimization method used is simulated annealing. The application of a stochastic optimization heuristic is shown to be more efficient within the present DSMC framework than using a macroscopic overlay method. Furthermore, it is shown that the performance of the developed method is superior to that of a gradient search method for the current class of problems. Finally, the advantages and disadvantages of different types of objective functions are discussed.
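
    The optimization loop itself can be summarized with a short, generic simulated-annealing sketch: a candidate catalyst layout (here just a bit vector marking which wall cells are coated) is perturbed, and worse layouts are occasionally accepted with a temperature-dependent probability, which is what makes the method robust to the statistical noise of a Monte Carlo objective. The toy objective below is a stand-in for the DSMC outlet-concentration evaluation, and all numbers are illustrative.

        # Generic simulated annealing over a binary "catalyst placement" vector.
        # objective() is a noisy stand-in for a DSMC estimate of the outlet
        # concentration of molecules that interacted with the catalytic surface.
        import math
        import random

        N_CELLS, N_ACTIVE = 30, 10      # wall cells and catalyst budget (assumed)

        def objective(layout, rng):
            # Toy model: upstream cells contribute more, plus Monte Carlo-like noise.
            signal = sum((N_CELLS - i) / N_CELLS for i, on in enumerate(layout) if on)
            return signal + rng.gauss(0.0, 0.05)

        def perturb(layout, rng):
            # Move one active cell to a random inactive cell (budget stays fixed).
            new = layout[:]
            on = [i for i, v in enumerate(new) if v]
            off = [i for i, v in enumerate(new) if not v]
            new[rng.choice(on)], new[rng.choice(off)] = 0, 1
            return new

        def anneal(steps=2000, t0=1.0, cooling=0.995, seed=1):
            rng = random.Random(seed)
            current = [1] * N_ACTIVE + [0] * (N_CELLS - N_ACTIVE)
            rng.shuffle(current)
            current_val = objective(current, rng)
            best, best_val, temp = current, current_val, t0
            for _ in range(steps):
                cand = perturb(current, rng)
                val = objective(cand, rng)
                if val > current_val or rng.random() < math.exp((val - current_val) / temp):
                    current, current_val = cand, val
                    if val > best_val:
                        best, best_val = cand, val
                temp *= cooling
            return best, best_val

        if __name__ == "__main__":
            layout, value = anneal()
            print("best objective ~", round(value, 3))
            print("active cells:", [i for i, v in enumerate(layout) if v])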

  10. Recent results from CMS

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    With the increase in center-of-mass energy, a new energy frontier has been opened by the Large Hadron Collider. More than 25 fb^-1 of proton-proton collisions at sqrt(s)=13 TeV have been delivered to both the ATLAS and CMS experiments during 2016. This enormous dataset can be used to test the Standard Model in a completely new regime with tremendous precision and it has the potential to unveil new physics or set strong bounds on it. In this talk some of the most recent results made public by the CMS Collaboration will be presented. The focus will mainly be on searches for physics beyond the Standard Model, with particular emphasis on searches for dark matter candidates.

  11. Highlights from CMS

    CERN Document Server

    Autermann, Christian

    2018-01-01

    This article summarizes the latest highlights from the CMS experiment as presented at the Lepton Photon conference 2017 in Guangzhou, China. A selection of the latest physics results, the latest detector upgrades, and the current detector status are discussed. CMS has analyzed the full dataset of proton-proton collision data delivered by the LHC in 2016 at a center-of-mass energy of $13$\\,TeV corresponding to an integrated luminosity of $40$\\,fb$^{-1}$. The leap in center-of-mass energy and in luminosity with respect to the $7$ and $8$\\,TeV runs enabled interesting and relevant new physics results. A new silicon pixel tracking detector was installed during the LHC shutdown 2016/17 and has successfully started operation.

  12. Higgs searches with CMS

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    The excellent performance of the LHC in the 2011 run is setting the ground for the final chase of the Higgs boson. The CMS experiment is recording high-quality data that are being thoroughly scrutinized. Several decay channels are investigated to probe the entire possible Higgs mass spectrum, from 110 to 600 GeV/c^2. The study of the first 1.5/fb of collected data already places tight limits and excludes large fractions of the Higgs mass range, while still leaving open the search in the theoretically favored low-mass region. In this seminar we will report on the diverse CMS analyses that yield such results, describing the experimental challenges that each had to meet.

  13. The CMS COLD BOX

    CERN Multimedia

    Brice, Maximilien

    2015-01-01

    The CMS detector is built around a large solenoid magnet. This takes the form of a cylindrical coil of superconducting cable that generates a field of 3.8 Tesla: about 100,000 times the magnetic field of the Earth. To run, this superconducting magnet needs to be cooled down to very low temperature with liquid helium. Providing this is the job of a compressor station and the so-called “cold box”.

  14. Higgs physics at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Holzner, André G., E-mail: andre.georg.holzner@cern.ch [University of California at San Diego (United States); Collaboration: on behalf of the CMS collaboration

    2016-12-15

    This article reviews recent measurements of the properties of the standard model (SM) Higgs boson using data recorded with the CMS detector at the LHC: its mass, width and couplings to other SM particles. We also summarise highlights from searches for new physical phenomena in the Higgs sector as they are proposed in many extensions of the SM: flavour violating and invisible decay modes, resonances decaying into Higgs bosons and searches for additional Higgs bosons.

  15. Dibosons from CMS

    Directory of Open Access Journals (Sweden)

    Martelli Arabella

    2012-06-01

    Full Text Available We present here the diboson production cross sections measured by the CMS collaboration in pp collision data at √s = 7 TeV. Wγ and Zγ results from the 2010 analyses (36 pb−1) are presented together with the first 2011 measurements of the WW, WZ and ZZ final states, obtained using 1.1 fb−1. Results obtained with 2010 data are also interpreted in terms of anomalous triple gauge couplings.

  16. CMS lead tungstate crystals

    CERN Multimedia

    Laurent Guiraud

    2000-01-01

    These crystals are made from lead tungstate, a crystal that is as clear as glass yet with nearly four times the density. They have been produced in Russia to be used as scintillators in the electromagnetic calorimeter on the CMS experiment, part of the LHC project at CERN. When an electron, positron or photon passes through the calorimeter it will cause a cascade of particles that will then be absorbed by these scintillating crystals, allowing the particle's energy to be measured.

  17. The CMS superconducting solenoid

    CERN Multimedia

    Maximilien Brice

    2004-01-01

    The huge solenoid that will generate the magnetic field for the CMS experiment at the LHC is shown stored in the assembly hall above the experimental cavern. The solenoid is made up of five pieces totaling 12.5 m in length and 6 m in diameter. It weighs 220 tonnes and will produce a 4 T magnetic field, 100 000 times the strength of the Earth's magnetic field, storing enough energy to melt 18 tonnes of gold.

  18. Power Consumption Evaluation of Distributed Computing Network Considering Traffic Locality

    Science.gov (United States)

    Ogawa, Yukio; Hasegawa, Go; Murata, Masayuki

    When computing resources are consolidated in a few huge data centers, a massive amount of data is transferred to each data center over a wide area network (WAN). This results in increased power consumption in the WAN. A distributed computing network (DCN), such as a content delivery network, can reduce the traffic from/to the data center, thereby decreasing the power consumed in the WAN. In this paper, we focus on the energy-saving aspect of the DCN and evaluate its effectiveness, especially considering traffic locality, i.e., the amount of traffic exchanged within the geographical vicinity. We first formulate the problem of optimizing the DCN power consumption and describe the DCN in detail. Then, numerical evaluations show that, when there is strong traffic locality and the router has ideal energy proportionality, the system's power consumption is reduced to about 50% of the power consumed in the case where a DCN is not used; moreover, this advantage becomes even larger (up to about 30%) when the data center is located farthest from the center of the network topology.
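
    A back-of-the-envelope sketch of the comparison being made: WAN transport power is approximated as proportional to traffic volume times the number of hops traversed, and the "local" fraction of the traffic terminates at a nearby distributed node instead of the remote data center. The hop counts, the watts-per-Gbps figure and the linear power model are illustrative assumptions, not the paper's formulation.

        # Illustrative comparison of WAN power with and without a distributed
        # computing network (DCN), as a function of traffic locality.
        def wan_power(traffic_gbps, hops, watts_per_gbps_hop=10.0):
            return traffic_gbps * hops * watts_per_gbps_hop

        def compare(total_traffic=100.0, hops_to_dc=8, hops_to_edge=2, locality=0.7):
            # Without a DCN, all traffic travels to the central data center.
            centralised = wan_power(total_traffic, hops_to_dc)
            # With a DCN, the local fraction stops at a nearby node.
            distributed = (wan_power(total_traffic * locality, hops_to_edge)
                           + wan_power(total_traffic * (1 - locality), hops_to_dc))
            return centralised, distributed

        if __name__ == "__main__":
            for locality in (0.0, 0.5, 0.7, 0.9):
                c, d = compare(locality=locality)
                print(f"locality={locality:.1f}: no DCN {c:.0f} W, "
                      f"with DCN {d:.0f} W ({100 * d / c:.0f}%)")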

  19. The CMS conductor

    CERN Document Server

    Horváth, I L; Marti, H P; Neuenschwander, J; Smith, R P; Fabbricatore, P; Musenich, R; Calvo, A; Campi, D; Curé, B; Desirelli, Alberto; Favre, G; Riboni, P L; Sgobba, Stefano; Tardy, T; Sequeira-Lopes-Tavares, S

    2000-01-01

    The Compact Muon Solenoid (CMS) is one of the experiments being designed in the framework of the Large Hadron Collider (LHC) project at CERN. The design field of the CMS magnet is 4 T, the magnetic length is 13 m and the aperture is 6 m. This high magnetic field is achieved by means of a 4-layer, 5-module superconducting coil. The coil is wound from an Al-stabilized Rutherford type conductor. The nominal current of the magnet is 20 kA at 4.5 K. In the CMS coil the structural function is ensured, unlike in other existing Al-stabilized thin solenoids, both by the Al-alloy reinforced conductor and the external former. In this paper the retained manufacturing process of the 50-km long reinforced conductor is described. In general the Rutherford type cable is surrounded by high purity aluminium in a continuous co-extrusion process to produce the Insert. Thereafter the reinforcement is joined by Electron Beam Welding to the pure Al of the insert, before being machined to the final dimensions. During the...

  20. Adaptive Dynamic Process Scheduling on Distributed Memory Parallel Computers

    Directory of Open Access Journals (Sweden)

    Wei Shu

    1994-01-01

    Full Text Available One of the challenges in programming distributed memory parallel machines is deciding how to allocate work to processors. This problem is particularly important for computations with unpredictable dynamic behaviors or irregular structures. We present a scheme for dynamic scheduling of medium-grained processes that is useful in this context. The adaptive contracting within neighborhood (ACWN is a dynamic, distributed, load-dependent, and scalable scheme. It deals with dynamic and unpredictable creation of processes and adapts to different systems. The scheme is described and contrasted with two other schemes that have been proposed in this context, namely the randomized allocation and the gradient model. The performance of the three schemes on an Intel iPSC/2 hypercube is presented and analyzed. The experimental results show that even though the ACWN algorithm incurs somewhat larger overhead than the randomized allocation, it achieves better performance in most cases due to its adaptiveness. Its feature of quickly spreading the work helps it outperform the gradient model in performance and scalability.
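
    A highly simplified sketch of the "keep scheduling decisions local" idea behind such neighbourhood-based schemes: each newly created task is handed to the least-loaded node among the creating node and its immediate neighbours. The ring topology and task counts are assumptions made for illustration; they do not reproduce the published ACWN algorithm.

        # Simplified neighbourhood load balancing on a ring of nodes: a task
        # created on a node is queued on the least-loaded node among itself and
        # its two neighbours, so every decision uses only local information.
        import random

        N_NODES = 8
        queues = [0] * N_NODES          # queued tasks per node

        def neighbours(node):
            return [(node - 1) % N_NODES, node, (node + 1) % N_NODES]

        def place_task(origin):
            """Queue the task on the least-loaded node in the local neighbourhood."""
            target = min(neighbours(origin), key=lambda n: queues[n])
            queues[target] += 1
            return target

        if __name__ == "__main__":
            rng = random.Random(0)
            for _ in range(40):                       # 40 dynamically created tasks
                place_task(rng.randrange(N_NODES))    # each created at a random node
            print("queue lengths per node:", queues)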

  1. Implementing data placement strategies for the CMS experiment based on a popularity model

    International Nuclear Information System (INIS)

    Barreiro Megino, F H; Cinquilli, M; Giordano, D; Karavakis, E; Girone, M; Magini, N; Mancinelli, V; Spiga, D

    2012-01-01

    During the first two years of data taking, the CMS experiment has collected over 20 PetaBytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure on the WorldWide LHC Computing Grid. Given the increasing data volume that has to be stored and efficiently analyzed, it is a challenge for several LHC experiments to optimize and automate the data placement strategies in order to fully profit from the available network and storage resources and to facilitate daily computing operations. Building on previous experience acquired by ATLAS, we have developed the CMS Popularity Service that tracks file accesses and user activity on the grid and will serve as the foundation for the evolution of CMS data placement. A fully automated, popularity-based site-cleaning agent has been deployed in order to scan Tier-2 sites that are reaching their space quota and suggest obsolete, unused data that can be safely deleted without disrupting analysis activity. Future work will be to demonstrate dynamic data placement functionality based on this popularity service and integrate it into the data and workload management systems: as a consequence the pre-placement of data will be minimized and additional replication of hot datasets will be requested automatically. This paper will give an insight into the development, validation and production process and will analyze how the framework has influenced resource optimization and daily operations in CMS.
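
    The cleaning policy can be sketched as a simple ranking exercise: when a site exceeds its space quota, order the resident replicas by recent access count (and idle time) and propose the least popular ones for deletion until a target occupancy is reached. The dataset records, thresholds and field names below are invented for illustration; the production service aggregates real access information from the grid.

        # Illustrative popularity-based cleaning proposal (not CMS production code).
        datasets = [
            # name, size (TB), accesses in recent months, days since last access
            {"name": "/Data/A", "size_tb": 40, "accesses": 0,   "idle_days": 210},
            {"name": "/Data/B", "size_tb": 25, "accesses": 3,   "idle_days": 95},
            {"name": "/MC/C",   "size_tb": 60, "accesses": 150, "idle_days": 2},
            {"name": "/MC/D",   "size_tb": 30, "accesses": 1,   "idle_days": 140},
        ]

        def cleaning_proposal(used_tb, quota_tb, target_fraction=0.85):
            """Suggest replicas to delete until used space drops below the target."""
            to_free = used_tb - quota_tb * target_fraction
            if to_free <= 0:
                return []
            # Least popular first: fewest accesses, then longest idle time.
            ranked = sorted(datasets, key=lambda d: (d["accesses"], -d["idle_days"]))
            proposal, freed = [], 0.0
            for d in ranked:
                if freed >= to_free:
                    break
                proposal.append(d["name"])
                freed += d["size_tb"]
            return proposal

        if __name__ == "__main__":
            print("suggest deleting:", cleaning_proposal(used_tb=150, quota_tb=160))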

  2. Constraining nuclear PDFs with CMS

    CERN Document Server

    Chapon, Emilien

    2017-01-01

    Nuclear parton distribution functions are essential to the understanding of proton-lead collisions. We will review several measurements from CMS that are particularly sensitive to nPDFs. W and Z bosons are medium-blind probes of the initial state of the collisions, and we will present measurements of their production cross sections in pPb collisions at 5.02 TeV, as well as asymmetries with an increased sensitivity to nPDFs. We will also report measurements of charmonium production, including the nuclear modification factor of J/$\psi$ and $\psi$(2S) in pPb collisions at 5.02 TeV, though other cold nuclear matter effects may also be at play in those processes. Finally, we will present measurements of the pseudorapidity of dijets in pPb collisions at 5.02 TeV.

  3. CMS Planning and Scheduling System

    CERN Document Server

    Kotamaki, M

    1998-01-01

    The paper describes the procedures and the system to build and maintain the schedules needed to manage time, resources, and progress of the CMS project. The system is based on the decomposition of the project into work packages, which can be each considered as a complete project with its own structure. The system promotes the distribution of the decision making and responsibilities to lower levels in the organisation by providing a state-of-the-art system to formalise the external commitments of the work packages without limiting their ability to modify their internal schedules to best meet their commitments. The system lets the project management focus on the interfaces between the work packages and alerts the management immediately if a conflict arises. The proposed system simplifies the planning and management process and eliminates the need for a large, centralised project management system.

  4. Using the CMS high level trigger as a cloud resource

    International Nuclear Information System (INIS)

    Colling, David; Huffman, Adam; Bauer, Daniela; McCrae, Alison; Cinquilli, Mattia; Gowdy, Stephen; Coarasa, Jose Antonio; Ozga, Wojciech; Chaze, Olivier; Lahiff, Andrew; Grandi, Claudio; Tiradani, Anthony; Sgaravatto, Massimo

    2014-01-01

    The CMS High Level Trigger is a compute farm of more than 10,000 cores. During data taking this resource is heavily used and is an integral part of the experiment's triggering system. However, outside of data taking periods this resource is largely unused. We describe why CMS wants to use the HLT as a cloud resource (outside of data taking periods) and how this has been achieved. In doing this we have turned a single-use cluster into an agile resource for CMS production computing. While we are able to use the HLT as a production cloud resource, there is still considerable further work that CMS needs to carry out before this resource can be used with the desired agility. This report, therefore, represents a snapshot of this activity at the time of CHEP 2013.

  5. Experience in using commercial clouds in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L. [Fermilab; Bockelman, B. [Nebraska U.; Dykstra, D. [Fermilab; Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Girone, M. [CERN; Gutsche, O. [Fermilab; Holzman, B. [Fermilab; Hugnagel, D. [Fermilab; Kim, H. [Fermilab; Kennedy, R. [Fermilab; Mason, D. [Fermilab; Spentzouris, P. [Fermilab; Timm, S. [Fermilab; Tiradani, A. [Fermilab; Vaandering, E. [Fermilab

    2017-10-03

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be used continuously throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and compare cost and operational efficiency to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain where large-scale resources can be scheduled at peak times.

  6. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    Science.gov (United States)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, the task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
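
    Read simply as a ratio of demands, the metric can be sketched as follows; the interpretation threshold is an assumption for the example only, since the paper derives the relationship analytically.

        # Illustrative "task ratio": demand of one parallel task relative to the
        # mean service demand of the owner's (non-parallel) workstation processes.
        def task_ratio(parallel_task_demand_s, mean_owner_demand_s):
            return parallel_task_demand_s / mean_owner_demand_s

        if __name__ == "__main__":
            cases = [("short parallel tasks", 0.5, 2.0),
                     ("comparable demands", 2.0, 2.0),
                     ("long parallel tasks", 30.0, 2.0)]
            for label, par, owner in cases:
                r = task_ratio(par, owner)
                note = ("owner interference likely tolerable" if r >= 10
                        else "owner interference may dominate")
                print(f"{label}: task ratio = {r:.1f} -> {note}")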

  7. Parallel computation for distributed parameter system-from vector processors to Adena computer

    Energy Technology Data Exchange (ETDEWEB)

    Nogi, T

    1983-04-01

    Research on advanced parallel hardware and software architectures for very high-speed computation deserves and needs more support and attention to fulfil its promise. Novel architectures for parallel processing are being made ready. Architectures for parallel processing can be roughly divided into two groups. One is a vector processor in which a single central processing unit involves multiple vector-arithmetic registers. The other is a processor array in which slave processors are connected to a host processor to perform parallel computation. In this review, the concept and data structure of the Adena (alternating-direction edition nexus array) architecture, which is conformable to distributed-parameter simulation algorithms, are described. 5 references.

  8. Distributed and cloud computing from parallel processing to the Internet of Things

    CERN Document Server

    Hwang, Kai; Fox, Geoffrey C

    2012-01-01

    Distributed and Cloud Computing, named a 2012 Outstanding Academic Title by the American Library Association's Choice publication, explains how to create high-performance, scalable, reliable systems, exposing the design principles, architecture, and innovative applications of parallel, distributed, and cloud computing systems. Starting with an overview of modern distributed models, the book provides comprehensive coverage of distributed and cloud computing, including: Facilitating management, debugging, migration, and disaster recovery through virtualization Clustered systems for resear

  9. CMS Industries awarded gold, crystal

    CERN Multimedia

    2006-01-01

    The CMS collaboration honoured 10 of its top suppliers in the seventh annual awards ceremony. The representatives of the firms that received the CMS Gold and Crystal Awards stand with their awards after the ceremony. The seventh annual CMS Awards ceremony was held on Monday 13 March to recognize the industries that have made substantial contributions to the construction of the collaboration's detector. Nine international firms received Gold Awards, and General Tecnica of Italy received the prestigious Crystal Award. Representatives from the companies attended the ceremony during the plenary session of CMS week. 'The role of CERN, its machines and experiments, beyond particle physics is to push the development of equipment technologies related to high-energy physics,' said CMS Awards Coordinator Domenico Campi. 'All of these industries must go beyond the technologies that are currently available.' Without the involvement of good companies over the years, the construction of the CMS detector wouldn't be possible...

  10. EBR-II Cover Gas Cleanup System upgrade distributed control and front end computer systems

    International Nuclear Information System (INIS)

    Carlson, R.B.

    1992-01-01

    The Experimental Breeder Reactor II (EBR-II) Cover Gas Cleanup System (CGCS) control system was upgraded in 1991 to improve control and provide a graphical operator interface. The upgrade consisted of a main control computer, a distributed control computer, a front end input/output computer, a main graphics interface terminal, and a remote graphics interface terminal. This paper briefly describes the Cover Gas Cleanup System and the overall control system; gives reasons behind the computer system structure; and then gives a detailed description of the distributed control computer, the front end computer, and how these computers interact with the main control computer. The descriptions cover both hardware and software

  11. PHENIX On-Line Distributed Computing System Architecture

    International Nuclear Information System (INIS)

    Desmond, Edmond; Haggerty, John; Kehayias, Hyon Joo; Purschke, Martin L.; Witzig, Chris; Kozlowski, Thomas

    1997-01-01

    PHENIX is one of the two large experiments at the Relativistic Heavy Ion Collider (RHIC) currently under construction at Brookhaven National Laboratory. The detector consists of 11 sub-detectors that are further subdivided into 29 units ('granules') that can be operated independently, which includes simultaneous data taking with independent data streams and independent triggers. The detector has 250,000 channels and is read out by front end modules, where the data is buffered in a pipeline while awaiting the level-1 trigger decision. Zero suppression and calibration is done after the level-1 accept in custom-built data collection modules (DCMs) with DSPs before the data is sent to an event builder (design throughput of 2 Gb/sec) and higher level triggers. The On-line Computing Systems Group (ONCS) has two responsibilities. Firstly, it is responsible for receiving the data from the event builder, routing it through a network of workstations to consumer processes and archiving it at a data rate of 20 MB/sec. Secondly, it is also responsible for the overall configuration, control and operation of the detector and data acquisition chain, which comprises the software integration for several thousand custom-built hardware modules. The software must furthermore support the independent operation of the above-mentioned granules, which includes the coordination of processes that run in 60-100 VME processors and workstations. ONCS has adapted the Shlaer-Mellor Object Oriented Methodology for the design of the top layer software. CORBA is used as the communication layer between the distributed objects, which are implemented as asynchronous finite state machines. We will give an overview of the PHENIX online system with the main focus on the system architecture, software components and integration tasks of the On-line Computing group ONCS and report on the status of the current prototypes.

  12. Use of the Web by a Distributed Research group Performing Distributed Computing

    Science.gov (United States)

    Burke, David A.; Peterkin, Robert E.

    2001-06-01

    A distributed research group that uses distributed computers faces a spectrum of challenges--some of which can be met by using various electronic means of communication. The particular challenge of our group involves three physically separated research entities. We have had to link two collaborating groups at AFRL and NRL together for software development, and the same AFRL group with a LANL group for software applications. We are developing and using a pair of general-purpose, portable, parallel, unsteady, plasma physics simulation codes. The first collaboration is centered around a formal weekly video teleconference on relatively inexpensive equipment that we have set up in convenient locations in our respective laboratories. The formal virtual meetings are augmented with informal virtual meetings as the need arises. Both collaborations share research data in a variety of forms on a secure URL that is set up behind the firewall at the AFRL. Of course, a computer-generated animation is a particularly efficient way of displaying results from time-dependent numerical simulations, so we generally like to post such animations (along with proper documentation) on our web page. In this presentation, we will discuss some of our accomplishments and disappointments.

  13. Maintaining Traceability in an Evolving Distributed Computing Environment

    Science.gov (United States)

    Collier, I.; Wartel, R.

    2015-12-01

    The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the cause of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. For the response to incidents to be acceptable, it needs to be commensurate with the scale of the problem. The minimum level of traceability for distributed computing infrastructure usage is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, etc.) and the individual who initiated them. In addition, sufficiently fine-grained controls, such as blocking the originating user and monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause and to fix any problems before re-enabling access for the user. The aim is to be able to answer the basic questions who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance and for every security event, at least the following: connect, authenticate, authorize (including identity changes) and disconnect. In traditional grid infrastructures (WLCG, EGI, OSG etc.) best practices and procedures for gathering and maintaining the information required to maintain traceability are well established. In particular, sites collect and store information required to ensure traceability of events at their sites. With the increased use of virtualisation and private and public clouds for HEP workloads, established procedures, which are unable to see 'inside' running virtual machines, no longer capture all the information required. Maintaining traceability will at least involve a shift of responsibility from sites to Virtual Organisations (VOs), bringing with it new requirements for their
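
    To make the who/what/where/when requirement concrete, the sketch below writes one record per security event on a service instance, carrying a timestamp, the digital identity, the action and the originating source. The field names and the JSON-lines log format are illustrative choices, not a prescribed WLCG schema.

        # Illustrative traceability record for security events (connect,
        # authenticate, authorize, disconnect). Field names are assumptions; the
        # point is only that who, what, where and when can be reconstructed later.
        import json
        import time

        def trace_event(service, action, identity, source, detail=""):
            record = {
                "when": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "where": service,     # service instance handling the event
                "who": identity,      # digital identity, e.g. a certificate DN
                "what": action,       # connect / authenticate / authorize / disconnect
                "source": source,     # originating host, pilot job or portal job
                "detail": detail,     # e.g. identity changes during authorization
            }
            with open("trace.log", "a") as log:   # append-only event log
                log.write(json.dumps(record) + "\n")
            return record

        if __name__ == "__main__":
            print(trace_event("storage-element-01", "authenticate",
                              "/DC=org/DC=example/CN=Some User", "vm-42.example.org"))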

  14. Top quark mass measurements with CMS

    CERN Document Server

    Kovalchuk, Nataliia

    2017-01-01

    Measurements of the top quark mass are presented, obtained from CMS data collected in proton-proton collisions at the LHC at centre-of-mass energies of 7 TeV and 8 TeV. The mass of the top quark is measured using several methods and channels, including the reconstructed invariant mass distribution of the top quark, an analysis of endpoint spectra as well as measurements from shapes of top quark decay distributions. The dependence of the mass measurement on the kinematic phase space is investigated. The results of the various channels are combined and compared to the world average. The top mass and also $\\alpha_{\\textnormal S}$ are extracted from the top pair cross section measured at CMS.

  15. CMS Create #2 | 3-4 October | Register now!

    CERN Multimedia

    2016-01-01

    CMS Create brings together CERN members and students from IPAC Design Genève. The goal is to build a prototype exhibit illustrating what CMS does and how it does it. The exhibit will introduce the world of a particle physics detector to the general public, and to younger visitors in particular. CMS Create, hosted by IdeaSquare, was first held in November 2015. There were four highly diverse teams made up of participants from many educational backgrounds and from 15 nationalities. 36% of these were women, a figure we hope will grow this year. The 25 participants were CMS physicists, computer scientists, engineers, other CMS collaborators and IPAC students. The 2015 winning exhibit is now permanently installed in the visitor reception centre at CMS Point 5, which was visited by 20,600 visitors during 2015. Are you creative and motivated to share your ideas? Take part in CMS Create #2, meet with scientists and designers from all over the world and explain to CER...

  16. Radiation background with the CMS RPCs at the LHC

    CERN Document Server

    Costantini, Silvia; Cai, J.; Li, Q.; Liu, S.; Qian, S.; Wang, D.; Xu, Z.; Zhang, F.; Choi, Y.; Goh, J.; Kim, D.; Choi, S.; Hong, B.; Kang, J.W.; Kang, M.; Kwon, J.H.; Lee, K.S.; Lee, S.K.; Park, S.K.; Pant, L.M.; Mohanty, A.K.; Chudasama, R.; Singh, J.B.; Bhatnagar, V.; Mehta, A.; Kumar, R.; Cauwenbergh, S.; Cimmino, A.; Crucy, S.; Fagot, A.; Garcia, G.; Ocampo, A.; Poyraz, D.; Salva, S.; Thyssen, F.; Tytgat, M.; Zaganidis, N.; Doninck, W.V.; Cabrera, A.; Chaparro, L.; Gomez, J.P.; Gomez, B.; Sanabria, J.C.; Avila, C.; Ahmad, A.; Muhammad, S.; Shoaib, M.; Hoorani, H.; Awan, I.; Ali, I.; Ahmed, W.; Asghar, M.I.; Shahzad, H.; Sayed, A.; Ibrahim, A.; Aly, S.; Assran, Y.; Radi, A.; Elkafrawy, T.; Sharma, A.; Colafranceschi, S.; Abbrescia, M.; Calabria, C.; Colaleo, A.; Iaselli, G.; Loddo, F.; Maggi, M.; Nuzzo, S.; Pugliese, G.; Radogna, R.; Venditti, R.; Verwilligen, P.; Benussi, L.; Bianco, S.; Piccolo, D.; Paolucci, P.; Buontempo, S.; Cavallo, N.; Merola, M.; Fabozzi, F.; Iorio, O.M.; Braghieri, A.; Montagna, P.; Riccardi, C.; Salvini, P.; Vitulo, P.; Vai, I.; Magnani, A.; Dimitrov, A.; Litov, L.; Pavlov, B.; Petkov, P.; Aleksandrov, A.; Genchev, V.; Iaydjiev, P.; Rodozov, M.; Sultanov, G.; Vutova, M.; Stoykova, S.; Hadjiiska, R.; Ibargüen, H.S.; Morales, M.I.P.; Bernardino, S.C.; Bagaturia, I.; Tsamalaidze, Z.; Crotty, I.; Kim, M.S.

    2015-05-28

    The Resistive Plate Chambers (RPCs) are employed in the CMS experiment at the LHC as dedicated trigger system both in the barrel and in the endcap. This note presents results of the radiation background measurements performed with the 2011 and 2012 proton-proton collision data collected by CMS. Emphasis is given to the measurements of the background distribution inside the RPCs. The expected background rates during the future running of the LHC are estimated both from extrapolated measurements and from simulation.

  17. Using distributed processing on a local area network to increase available computing power

    International Nuclear Information System (INIS)

    Capps, K.S.; Sherry, K.J.

    1996-01-01

    The migration from central computers to desktop computers distributed the total computing horsepower of a system over many different machines. A typical engineering office may have several networked desktop computers that are sometimes idle, especially after work hours and when people are absent. Users would benefit if applications were able to use these networked computers collectively. This paper describes a method of distributing the workload of an application on one desktop system to otherwise idle systems on the network. The authors present this discussion from a developer's viewpoint, because the developer must modify an application before the user can realize any benefit of distributed computing on available systems

  18. Las Vegas is better than determinism in VLSI and distributed computing

    DEFF Research Database (Denmark)

    Mehlhorn, Kurt; Schmidt, Erik Meineche

    1982-01-01

    In this paper we describe a new method for proving lower bounds on the complexity of VLSI computations and, more generally, distributed computations. Lipton and Sedgewick observed that the crossing sequence arguments used to prove lower bounds in VLSI (or TM or distributed computing) apply to (ac...

  19. Computational Aspects of Sensor Network Protocols (Distributed Sensor Network Simulator

    Directory of Open Access Journals (Sweden)

    Vasanth Iyer

    2009-08-01

    Full Text Available In this work, we model sensor networks as an unsupervised learning and clustering process. We classify nodes according to their static distribution to form known class densities (CCPD). These densities are chosen from specific cross-layer features which maximize the lifetime of power-aware routing algorithms. To circumvent the computational complexities of a power-aware communication stack, we introduce path-loss models at the nodes only for high-density deployments. We study the cluster heads, formulate the data handling capacity for an expected deployment, and use localized probability models to fuse the data with its side information before transmission. Thus each cluster head has a unique Pmax, but not all cluster heads have the same measured value. In a lossless mode, if there are no faults in the sensor network, then we can show that the highest probability given by Pmax is ambiguous if its frequency is ≤ n/2; otherwise it can be determined by a local function. We further show that event detection at the cluster heads can be modelled with a pattern of 2^m, where m, the number of bits, can be a correlated pattern of 2 bits, and for a tight lower bound we use 3-bit Huffman codes which have entropy < 1. These local algorithms are further studied to optimize power and fault detection and to maximize the performance of the distributed routing algorithm used at the higher layers. From these bounds, it is observed that in a large network the power dissipation is invariant to network size. The performance of the routing algorithms is based solely on the success of finding healthy nodes in a large distribution. It is also observed that if the network size is kept constant and the nodes are deployed more densely, then the local path-loss model affects the performance of the routing algorithms. We also obtain the maximum intensity of transmitting nodes for a given category of routing algorithms under an outage constraint, i.e., the lifetime of the sensor network.

  20. Opportunistic usage of the CMS online cluster using a cloud overlay

    CERN Document Server

    Chaze, Olivier; Andronidis, Anastasios; Behrens, Ulf; Branson, James; Brummer, Philipp; Contescu, Alexandru-Cristian; Cittolin, Sergio; Craigs, Benjamin; Darlea, Georgiana-Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, M; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan Richard; Gigi, Dominique; Glege, Frank; Gomez-Ceballos, Guillelmo; Hegeman, Jeroen; Holzner, Andre Georg; Jimenez-Estupiñán, Raul; Masetti, Lorenzo; Meijers, Frans; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph; Pieri, Marco; Racz, Attila; Sakulin, Hannes; Schwick, Christoph; Reis, Thomas; Simelevicius, Dainius; Zejdl, Petr

    2016-01-01

    After two years of maintenance and upgrade, the Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, has started its second three year run. Around 1500 computers make up the CMS (Compact Muon Solenoid) Online cluster. This cluster is used for Data Acquisition of the CMS experiment at CERN, selecting and sending to storage around 20 TBytes of data per day that are then analysed by the Worldwide LHC Computing Grid (WLCG) infrastructure that links hundreds of data centres worldwide. 3000 CMS physicists can access and process data, and are always seeking more computing power and data. The backbone of the CMS Online cluster is composed of 16000 cores which provide as much computing power as all CMS WLCG Tier1 sites (352K HEP-SPEC-06 score in the CMS cluster versus 300K across CMS Tier1 sites). The computing power available in the CMS cluster can significantly speed up the processing of data, so an effort has been made to allocate the resources of the CMS Online cluster to t...

  1. High threshold distributed quantum computing with three-qubit nodes

    International Nuclear Information System (INIS)

    Li Ying; Benjamin, Simon C

    2012-01-01

    In the distributed quantum computing paradigm, well-controlled few-qubit ‘nodes’ are networked together by connections which are relatively noisy and failure prone. A practical scheme must offer high tolerance to errors while requiring only simple (i.e. few-qubit) nodes. Here we show that relatively modest, three-qubit nodes can support advanced purification techniques and so offer robust scalability: the infidelity in the entanglement channel may be permitted to approach 10% if the infidelity in local operations is of order 0.1%. Our tolerance of network noise is therefore an order of magnitude beyond prior schemes, and our architecture remains robust even in the presence of considerable decoherence rates (memory errors). We compare the performance with that of schemes involving nodes of lower and higher complexity. Ion traps, and NV-centres in diamond, are two highly relevant emerging technologies: they possess the requisite properties of good local control, rapid and reliable readout, and methods for entanglement-at-a-distance. (paper)

  2. Above the cloud computing orbital services distributed data model

    Science.gov (United States)

    Straub, Jeremy

    2014-05-01

    Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to achieve a single goal. The above-the-cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage and accessing of this data. This model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention and the rights of the storing craft to access, resell, transmit or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with unique constraints of the orbital environment and tags data with business model (contractual) obligation data.

  3. Client/server models for transparent, distributed computational resources

    International Nuclear Information System (INIS)

    Hammer, K.E.; Gilman, T.L.

    1991-01-01

    Client/server models are proposed to address issues of shared resources in a distributed, heterogeneous UNIX environment. The recent development of an automated Remote Procedure Call (RPC) interface generator has simplified the development of client/server models. Previously, implementation of the models was only possible at the UNIX socket level. An overview of RPCs and the interface generator will be presented and will include a discussion of generation and installation of remote services, the RPC paradigm, and the three levels of RPC programming. Two applications, the Nuclear Plant Analyzer (NPA) and a fluids simulation using molecular modelling, will be presented to demonstrate how client/server models using RPCs and External Data Representations (XDR) have been used in production/computation situations. The NPA incorporates a client/server interface for transferring/translating TRAC or RELAP results from the UNICOS Cray to a UNIX workstation. The fluids simulation program utilizes the client/server model to access the Cray via a single function, allowing it to become a shared co-processor to the workstation application. 5 refs., 6 figs
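
    As a small, generic illustration of the pattern (a remote procedure exposed so that a workstation application can call a shared compute service as if it were a local function), the sketch below uses Python's standard xmlrpc modules rather than ONC RPC/XDR; the service and function names are invented for the example.

        # Generic client/server sketch using XML-RPC as a stand-in for the
        # RPC/XDR setup described above. co_process() is an invented example of
        # work performed remotely on behalf of the client application.
        import threading
        import time
        from xmlrpc.server import SimpleXMLRPCServer
        from xmlrpc.client import ServerProxy

        def co_process(values):
            """Pretend 'shared co-processor' service: square and sum the inputs."""
            return sum(v * v for v in values)

        def run_server():
            server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
            server.register_function(co_process)
            server.serve_forever()

        if __name__ == "__main__":
            threading.Thread(target=run_server, daemon=True).start()  # server side
            time.sleep(0.5)                                           # let it start
            client = ServerProxy("http://localhost:8000")             # client side
            print("remote result:", client.co_process([1, 2, 3, 4]))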

  4. Evaluating Emulation-based Models of Distributed Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives

    2017-08-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The various uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  5. CMS silicon tracker developments

    International Nuclear Information System (INIS)

    Civinini, C.; Albergo, S.; Angarano, M.; Azzi, P.; Babucci, E.; Bacchetta, N.; Bader, A.; Bagliesi, G.; Basti, A.; Biggeri, U.; Bilei, G.M.; Bisello, D.; Boemi, D.; Bosi, F.; Borrello, L.; Bozzi, C.; Braibant, S.; Breuker, H.; Bruzzi, M.; Buffini, A.; Busoni, S.; Candelori, A.; Caner, A.; Castaldi, R.; Castro, A.; Catacchini, E.; Checcucci, B.; Ciampolini, P.; Creanza, D.; D'Alessandro, R.; Da Rold, M.; Demaria, N.; De Palma, M.; Dell'Orso, R.; Della Marina, R.D.R.; Dutta, S.; Eklund, C.; Feld, L.; Fiore, L.; Focardi, E.; French, M.; Freudenreich, K.; Frey, A.; Fuertjes, A.; Giassi, A.; Giorgi, M.; Giraldo, A.; Glessing, B.; Gu, W.H.; Hall, G.; Hammarstrom, R.; Hebbeker, T.; Honma, A.; Hrubec, J.; Huhtinen, M.; Kaminsky, A.; Karimaki, V.; Koenig, St.; Krammer, M.; Lariccia, P.; Lenzi, M.; Loreti, M.; Luebelsmeyer, K.; Lustermann, W.; Maettig, P.; Maggi, G.; Mannelli, M.; Mantovani, G.; Marchioro, A.; Mariotti, C.; Martignon, G.; Evoy, B. Mc; Meschini, M.; Messineo, A.; Migliore, E.; My, S.; Paccagnella, A.; Palla, F.; Pandoulas, D.; Papi, A.; Parrini, G.; Passeri, D.; Pieri, M.; Piperov, S.; Potenza, R.; Radicci, V.; Raffaelli, F.; Raymond, M.; Santocchia, A.; Schmitt, B.; Selvaggi, G.; Servoli, L.; Sguazzoni, G.; Siedling, R.; Silvestris, L.; Starodumov, A.; Stavitski, I.; Stefanini, G.; Surrow, B.; Tempesta, P.; Tonelli, G.; Tricomi, A.; Tuuva, T.; Vannini, C.; Verdini, P.G.; Viertel, G.; Xie, Z.; Yahong, Li; Watts, S.; Wittmer, B.

    2002-01-01

    The CMS Silicon tracker consists of 70 m² of microstrip sensors whose design will be finalized at the end of 1999 on the basis of systematic studies of device characteristics as a function of the most important parameters. A fundamental constraint comes from the fact that the detector has to be operated in a very hostile radiation environment with full efficiency. We present an overview of the current results and prospects for converging on a final set of parameters for the silicon tracker sensors.

  6. Large scale and low latency analysis facilities for the CMS experiment: development and operational aspects

    CERN Document Server

    Riahi, Hassen

    2010-01-01

    While a majority of CMS data analysis activities rely on the distributed computing infrastructure on the WLCG Grid, dedicated local computing facilities have been deployed to address particular requirements in terms of latency and scale. The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In order to reach the goal for fast turnaround tasks, the Workload Management group has designed a CRABServer-based system to fit two main needs: to provide a simple, familiar interface to the user (as used in the CRAB Analysis Tool[7]) and to allow an easy transition to the Tier-0 system. While the CRABServer component had been initially designed for Grid analysis by CMS end-users, with a few modifications it turned out to also be a very powerful service to manage and monitor local submissions on the CAF. Tran...

  7. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel; Buse, Gerrit; Pfluger, Dirk

    2012-01-01

    of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute

  8. The CMS detector magnet

    CERN Document Server

    Hervé, A

    2000-01-01

    CMS (Compact Muon Solenoid) is a general-purpose detector designed to run in mid-2005 at the highest luminosity at the LHC at CERN. Its distinctive features include a 6 m free bore diameter, 12.5 m long, 4 T superconducting solenoid enclosed inside a 10,000 tonne return yoke. The magnet will be assembled and tested on the surface by the end of 2003 before being transferred by heavy lifting means to a 90 m deep underground experimental area. The design and construction of the magnet is a 'common project' of the CMS Collaboration. It is organized by a CERN-based group with strong technical and contractual participation by CEA Saclay, ETH Zurich, Fermilab Batavia IL, INFN Geneva, ITEP Moscow, University of Wisconsin and CERN. The return yoke, 21 m long and 14 m in diameter, is equivalent to 1.5 m of saturated iron interleaved with four muon stations. The yoke and the vacuum tank are being manufactured. The indirectly-cooled, pure-aluminium-stabilized coil is made up of five modules internally wound with four ...

  9. Hadron correlations in CMS

    CERN Document Server

    Maguire, Charles Felix

    2012-01-01

    The measurements of the anisotropic flow of single particles and particle pairs have provided some of the most compelling evidence for the creation of a strongly interacting quark-gluon plasma (sQGP) in relativistic heavy ion collisions, first at RHIC, and more recently at the LHC. Using PbPb collision data taken in the 2010 and 2011 heavy ion runs at the LHC, the CMS experiment has investigated a broad scope of these flow phenomena. The $v_2$ elliptic flow coefficient has been extracted with four different methods to cross-check contributions from initial state fluctuations and non-flow correlations. The measurements of the $v_2$ elliptic anisotropy have been extended to a transverse momentum of 60 GeV/c, which will enable the placement of new quantitative constraints on parton energy loss models as a function of path length in the sQGP medium. Additionally, for the first time at the LHC, the CMS experiment has extracted precise elliptic anisotropy coefficients for the neutral $\\pi$ meson ($\\pi^0$) in the c...

  10. The CMS Event Builder

    CERN Document Server

    Brigljevic, V; Cano, E; Cittolin, Sergio; Csilling, Akos; Gigi, D; Glege, F; Gómez-Reino, Robert; Gulmini, M; Gutleber, J; Jacobs, C; Kozlovszky, Miklos; Larsen, H; Magrans de Abril, Ildefons; Meijers, F; Meschi, E; Murray, S; Oh, A; Orsini, L; Pollet, L; Rácz, A; Samyn, D; Scharff-Hansen, P; Schwick, C; Sphicas, Paris; ODell, V; Suzuki, I; Berti, L; Maron, G; Toniolo, N; Zangrando, L; Ninane, A; Erhan, S; Bhattacharya, S; Branson, J G

    2003-01-01

    The data acquisition system of the CMS experiment at the Large Hadron Collider will employ an event builder which will combine data from about 500 data sources into full events at an aggregate throughput of 100 GByte/s. Several architectures and switch technologies have been evaluated for the DAQ Technical Design Report by measurements with test benches and by simulation. This paper describes studies of an EVB test-bench based on 64 PCs acting as data sources and data consumers and employing both Gigabit Ethernet and Myrinet technologies as the interconnect. In the case of Ethernet, protocols based on Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies, including measurements on throughput and scaling are presented. The architecture of the baseline CMS event builder will be outlined. The event builder is organised into two stages with intelligent buffers in between. The first stage contains 64 switches performing a first level of data concentration by building super-fragments from fragmen...

  11. CMS Tracker Model

    CERN Multimedia

    Model of the tracking detector for the CMS experiment at the LHC. This object is a mock-up of an early design of the CMS Tracker mechanics. It is a segment of a “Wheel” to support Micro-Strip Gas Chamber (MSGC) detector modules on the outer layers and silicon-strip detector modules in the innermost layers. The particularity of that design is that modules are organised in spirals, along which power and optical cables and cooling pipes were planned to be routed. Some of such spirals are illustrated in the mock-up by the colors of the modules. With the detector development it became, however, evident that the silicon detectors would need to be operated in LHC experiments in cold temperatures, while the MSGC could stay in normal room-temperature. That split in two temperatures lead to separating those two detector types by a thermal barrier and therefore jeopardizing the idea of using common, vertical Wheels with services arranged along spirals.

  12. CMS ready for winding up

    CERN Multimedia

    2003-01-01

    At the end of October, the last lengths of conductor for the CMS superconducting solenoid were produced. This marks another large sub-project of the CMS Magnet successfully finished, after completion of the Yoke last year (see Bulletin 43/2002).

  13. CMS Data Analysis School Model

    CERN Document Server

    Malik, Sudhir; Cavanaugh, R; Bloom, K; Chan, Kai-Feng; D'Hondt, J; Klima, B; Narain, M; Palla, F; Rolandi, G; Schörner-Sadenius, T

    2014-01-01

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born three years ago at the LPC (LHC Physics Center), Fermilab, and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the earlier training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized as CMSDAS, has proven to be key for new and young physicists to jump-start and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS around the globe, CMS is trying to engage the collaboration's discovery potential and maximize the physics output. As a bigger goal, CMS is striving to nurture and increase engagement of the myriad talents of CMS, in the development of physics, service, upgrade, education of those new to CMS and the caree...

  14. Massive calculations of electrostatic potentials and structure maps of biopolymers in a distributed computing environment

    International Nuclear Information System (INIS)

    Akishina, T.P.; Ivanov, V.V.; Stepanenko, V.A.

    2013-01-01

    Among the key factors determining the processes of transcription and translation are the distributions of the electrostatic potentials of DNA, RNA and proteins. Calculating electrostatic distributions and structure maps of biopolymers is time-consuming and requires large computational resources. We developed procedures for organizing massive calculations of electrostatic potentials and structure maps of biopolymers in a distributed computing environment (several thousand cores).
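
    A minimal sketch of the kind of task-parallel potential evaluation described, here using Python's multiprocessing on a single machine rather than the several-thousand-core environment of the paper. The point-charge model, charges and coordinates are illustrative assumptions, not data from the work.

      # Illustrative only: Coulomb potential of point charges on a grid of points,
      # with the grid points farmed out to worker processes.
      from multiprocessing import Pool
      import numpy as np

      charges = np.array([1.0, -1.0, 0.5])            # hypothetical partial charges
      positions = np.array([[0.0, 0.0, 0.0],
                            [3.4, 0.0, 0.0],
                            [0.0, 3.4, 0.0]])         # hypothetical coordinates

      def potential_at(point):
          """Sum q_i / |r - r_i| over all charges (Coulomb constant omitted)."""
          r = np.linalg.norm(positions - np.asarray(point), axis=1)
          return float(np.sum(charges / r))

      if __name__ == "__main__":
          grid = [(x, y, 1.0) for x in range(10) for y in range(10)]
          with Pool(processes=4) as pool:              # stand-in for thousands of cores
              phi = pool.map(potential_at, grid)
          print(len(phi), max(phi))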

  15. Performance of the CMS Event Builder

    CERN Document Server

    Andre, Jean-Marc Olivier; Branson, James; Brummer, Philipp Maximilian; Chaze, Olivier; Cittolin, Sergio; Contescu, Cristian; Craigs, Benjamin Gordon; Darlea, Georgiana Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan Richard; Gigi, Dominique; Gladki, Maciej Szymon; Glege, Frank; Gomez Ceballos, Guillelmo; Hegeman, Jeroen Guido; Holzner, Andre Georg; Janulis, Mindaugas; Jimenez Estupinan, Raul; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrova, Petia; Pieri, Marco; Racz, Attila; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Zejdl, Petr

    2017-01-01

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz. It transports event data at an aggregate throughput of ~100 GB/s to the high-level trigger (HLT) farm. The CMS DAQ system has been completely rebuilt during the first long shutdown of the LHC in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gb/s Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGAs for reliable transport between custom electronics and commercial computing hardware. A 56 Gb/s InfiniBand FDR Clos network has been chosen for the event builder. We report on the performance of the event-builder system and the steps taken to exploit the full potential of the network technologies.
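
    The two quoted figures already fix the average event size; as a quick back-of-the-envelope check (our arithmetic, not a statement from the paper):

      # Back-of-the-envelope check of the quoted figures (illustrative).
      rate_hz = 100e3           # Level-1 accept rate: 100 kHz
      throughput_Bps = 100e9    # aggregate event-builder throughput: ~100 GB/s
      print(throughput_Bps / rate_hz / 1e6, "MB per event on average")   # ~1 MB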

  16. Evaluation of DEC's GIGAswitch for distributed parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H.; Hutchins, J.; Brandt, J.

    1993-10-01

    One of Sandia's research efforts is to reduce the end-to-end communication delay in a parallel-distributed computing environment. GIGAswitch is DEC's implementation of a gigabit local area network based on switched FDDI technology. Using the GIGAswitch, the authors intend to minimize the medium access latency suffered by shared-medium FDDI technology. Experimental results show that the GIGAswitch adds 16.5 microseconds of switching and bridging delay to an end-to-end communication. Although the added latency causes a 1.8% throughput degradation and a 5% line efficiency degradation, the availability of dedicated bandwidth is much more than what is available to a workstation on a shared medium. For example, ten directly connected workstations each would have a dedicated bandwidth of 95 Mbps, but if they were sharing the FDDI bandwidth, each would have 10% of the total bandwidth, i.e., less than 10 Mbps. In addition, they have found that when there is no output port contention, the switch's aggregate bandwidth will scale up to multiples of its port bandwidth. However, with output port contention, the throughput and latency performance suffered significantly. Their mathematical and simulation models indicate that the GIGAswitch line efficiency could be as low as 63% when there are nine input ports contending for the same output port. The data indicate that the delay introduced by contention at the server workstation is 50 times that introduced by the GIGAswitch. The authors conclude that the GIGAswitch meets the performance requirements of today's high-end workstations and that the switched FDDI technology provides an alternative that utilizes existing workstation interfaces while increasing the aggregate bandwidth. However, because the speed of workstations is increasing by a factor of 2 every 1.5 years, the switched FDDI technology is only good as an interim solution.
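
    The dedicated-versus-shared comparison in the abstract is simple arithmetic; the snippet below merely restates those quoted numbers for reference.

      # Per-workstation bandwidth, shared FDDI vs. switched (numbers from the abstract).
      fddi_mbps = 100.0
      n_nodes = 10
      shared_per_node = fddi_mbps / n_nodes     # ~10 Mbps each on a shared ring
      switched_per_node = 95.0                  # dedicated bandwidth quoted per port
      print(shared_per_node, switched_per_node)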

  17. Microdot - A Four-Bit Microcontroller Designed for Distributed Low-End Computing in Satellites

    National Research Council Canada - National Science Library

    2002-01-01

    .... An alternative design approach is a distributed network of small and low power microcontrollers designed for space that handle the computing requirements of each individual sensor and actuator...

  18. A wireless computational platform for distributed computing based traffic monitoring involving mixed Eulerian-Lagrangian sensing

    KAUST Repository

    Jiang, Jiming

    2013-06-01

    This paper presents a new wireless platform designed for an integrated traffic monitoring system based on combined Lagrangian (mobile) and Eulerian (fixed) sensing. The sensor platform is built around a 32-bit ARM Cortex M4 micro-controller and a 2.4 GHz 802.15.4 ISM-compliant radio module, and can be interfaced with fixed traffic sensors or receive data from vehicle transponders. The platform is specially designed and optimized to be integrated in a solar-powered wireless sensor network in which traffic flow maps are computed by the nodes directly using distributed computing. An MPPT circuit is proposed to increase the power output of the attached solar panel. A self-recovering unit is designed to increase reliability and allow periodic hard resets, an essential requirement for sensor networks. A radio-monitoring circuit is proposed to monitor incoming and outgoing transmissions, simplifying software debugging. An ongoing implementation is briefly discussed and compared with existing platforms used in wireless sensor networks. © 2013 IEEE.
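
    The paper proposes dedicated MPPT circuitry; as an illustration of the general idea only (not the authors' design), here is a minimal perturb-and-observe tracking loop, a commonly used MPPT algorithm, driven by a made-up panel power curve.

      # Illustrative perturb-and-observe MPPT loop (not the platform's actual circuit).
      def panel_power(v):
          """Toy solar-panel P-V curve with its maximum at 17 V (hypothetical)."""
          return max(0.0, 50.0 - (v - 17.0) ** 2)

      def track_mpp(v=12.0, step=0.2, iterations=200):
          p_prev = panel_power(v)
          direction = 1.0
          for _ in range(iterations):
              v += direction * step
              p = panel_power(v)
              if p < p_prev:            # power dropped: reverse the perturbation
                  direction = -direction
              p_prev = p
          return v

      print(round(track_mpp(), 1))      # oscillates close to the 17 V maximum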

  19. Monitoring the CMS Data Acquisition System

    CERN Document Server

    Bauer, Gerry; Biery, K; Branson, J; Cano, E; Cheung, H; Ciganek, M; Cittolin, S; Coarasa, J A; Deldicque, C; Dusinberre, E; Erhan, S; Fortes Rodrigues, F; Gigi, D; Glege, F; Gomez-Reino, R; Gutleber, J; Hatton, D; Laurens, J F; Lopez Perez, J A; Meijers, F; Meschi, E; Meyer, A; Mommsen, R; Moser, R; O'Dell, V; Oh, A; Orsini, L B; Patras, V; Paus, C; Petrucci, A; Pieri, M; Racz, A; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Shpakov, D; Simon, S; Sumorok, K; Zanetti, M.

    2010-01-01

    The CMS data acquisition system comprises O(20000) interdependent services that need to be monitored in near real-time. The ability to monitor a large number of distributed applications accurately and effectively is of paramount importance for robust operations. Application monitoring entails the collection of a large number of simple and composed values made available by the software components and hardware devices. A key aspect is that deviations from specified behaviour are detected in a timely manner, a prerequisite for taking corrective actions efficiently. Given the size and time constraints of the CMS data acquisition system, efficient application monitoring is an interesting research problem. We propose an approach that uses the emerging paradigm of Web-service based eventing systems in combination with hierarchical data collection and load balancing. Scalability and efficiency are achieved by a decentralized architecture, splitting up data collections into regions of col...
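
    A toy sketch of the hierarchical collection idea described (regional collectors condensing many per-service readings into summaries and flagging deviations before forwarding upstream). The Web-service eventing and load-balancing machinery of the real system is not modelled, and all metric names and values are invented.

      # Toy hierarchical monitoring aggregation (invented names, not the CMS system).
      from statistics import mean

      def regional_summary(samples):
          """Collapse many per-service readings into one summary per metric."""
          return {metric: {"mean": mean(v), "max": max(v), "n": len(v)}
                  for metric, v in samples.items()}

      def check_thresholds(summary, limits):
          """Flag metrics that deviate from the specified behaviour."""
          return [m for m, s in summary.items() if s["max"] > limits.get(m, float("inf"))]

      region = {"event_rate_hz": [99.0e3, 100.2e3, 100.1e3],
                "buffer_occupancy": [0.41, 0.44, 0.97]}      # hypothetical readings
      summary = regional_summary(region)
      print(check_thresholds(summary, {"buffer_occupancy": 0.90}))   # ['buffer_occupancy']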

  20. Efficient Monitoring of CRAB Jobs at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Silva, J. M.D. [Sao Paulo, IFT; Balcas, J. [Caltech; Belforte, S. [INFN, Trieste; Ciangottini, D. [INFN, Perugia; Mascheroni, M. [Fermilab; Rupeika, E. A. [Vilnius U.; Ivanov, T. T. [Sofiya U.; Hernandez, J. M. [Madrid, CIEMAT; Vaandering, E. [Fermilab

    2017-11-22

    CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, to help operators debug user problems, and to minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.

  1. CERN Open Days 2013, Point 5 - CMS: CMS Experiment

    CERN Multimedia

    CERN Photolab

    2013-01-01

    Stand description: Come to LHC's Point 5 and visit the Compact Muon Solenoid (CMS) experiment that discovered the Higgs boson! Descend 100 metres underground and take a walk in the cathedral-sized cavern housing the 14,000-tonne CMS detector. Ask Higgs hunters and other scientists just about anything, be it questions about their work, particle physics or the engineering challenges of building CMS. On the surface (no restricted access), Point 5 will be abuzz all day long with activities for all ages, including literally “cool” cryogenics shows featuring the world's fastest ice-cream maker, dance performances, and much more.

  2. CMS Tracker Visualisation

    CERN Document Server

    Mennea, Maria Santa; Zito, Giuseppe

    2004-01-01

    To improve the performance of existing tracker data visualisation tools in IGUANA, 2D visualisation software has been developed using the object-oriented paradigm and software-engineering techniques. We have designed 2D graphics objects, and some of them have been implemented. Access to the new objects is provided via the ORCA plugin of IGUANA CMS. A new object-oriented tracker model has been designed for developing these 2D graphics objects. The model consists of new classes which represent all its components (layers, modules, rings, petals, rods). The new classes are described here. The last part of this document contains a user manual for the software and will be updated with new releases.

  3. The CMS silicon tracker

    International Nuclear Information System (INIS)

    Focardi, E.; Albergo, S.; Angarano, M.; Azzi, P.; Babucci, E.; Bacchetta, N.; Bader, A.; Bagliesi, G.; Basti, A.; Biggeri, U.; Bilei, G.M.; Bisello, D.; Boemi, D.; Bosi, F.; Borrello, L.; Bozzi, C.; Braibant, S.; Breuker, H.; Bruzzi, M.; Buffini, A.; Busoni, S.; Candelori, A.; Caner, A.; Castaldi, R.; Castro, A.; Catacchini, E.; Checcucci, B; Ciampolini, P.; Civinini, C.; Creanza, D.; D'Alessandro, R.; Da Rold, M.; Demaria, N.; De Palma, M.; Dell'Orso, R.; Della Marina, R.; Dutta, S.; Eklund, C.; Feld, L.; Fiore, L.; French, M.; Freudenreich, K.; Frey, A.; Fuertjes, A.; Giassi, A.; Giorgi, M.; Giraldo, A.; Glessing, B.; Gu, W.H.; Hall, G.; Hammarstrom, R.; Hebbeker, T.; Honma, A.; Hrubec, J.; Huhtinen, M.; Kaminsky, A.; Karimaki, V.; Koenig, St.; Krammer, M.; Lariccia, P.; Lenzi, M.; Loreti, M.; Leubelsmeyer, K.; Lustermann, W.; Maettig, P.; Maggi, G.; Mannelli, M.; Mantovani, G.; Marchioro, A.; Mariotti, C.; Martignon, G.; Evoy, B.Mc; Meschini, M.; Messineo, A.; Migliore, E.; My, S.; Paccagnella, A.; Palla, F.; Pandoulas, D.; Papi, A.; Parrini, G.; Passeri, D.; Pieri, M.; Piperov, S.; Potenza, R.; Radicci, V.; Raffaelli, F.; Raymond, M.; Rizzo, F.; Santocchia, A.; Schmitt, B.; Selvaggi, G.; Servoli, L.; Sguazzoni, G.; Siedling, R.; Silvestris, L.; Starodumov, A.; Stavitski, I.; Stefanini, G.; Surrow, B.; Tempesta, P.; Tonelli, G.; Tricomi, A.; Tuuva, T.; Vannini, C.; Verdini, P.G.; Viertel, G.; Xie, Z.; Yahong, Li; Watts, S.; Wittmer, B.

    2000-01-01

    This paper describes the Silicon microstrip Tracker of the CMS experiment at the LHC. It consists of a barrel part with 5 layers and two endcaps with 10 disks each. About 10 000 single-sided equivalent modules have to be built, each one carrying two daisy-chained silicon detectors and their front-end electronics. Back-to-back modules are used to read out the radial coordinate. The tracker will be operated in an environment kept at a temperature of T = -10 °C to minimize radiation damage to the Si sensors. Heavily irradiated detectors can be safely operated thanks to the high-voltage capability of the sensors. Full-size mechanical prototypes have been built to check the system aspects before starting the construction.

  4. CMS pixel upgrade project

    CERN Document Server

    Kaestli, Hans-Christian

    2010-01-01

    The LHC machine at CERN finished its first year of pp collisions at a center of mass energy of 7 TeV. While the commissioning to exploit its full potential is still ongoing, there are plans to upgrade its components to reach instantaneous luminosities beyond the initial design value after 2016. A corresponding upgrade of the innermost part of the CMS detector, the pixel detector, is needed. A full replacement of the pixel detector is planned in 2016. It will not only address limitations of the present system at higher data rates, but will aggressively lower the amount of material inside the fiducial tracking volume which will lead to better tracking and b-tagging performance. This article gives an overview of the project and illuminates the motivations and expected improvements in the detector performance.

  5. Luminosity measurement at CMS

    CERN Document Server

    Leonard, Jessica Lynn

    2014-01-01

    The measurement of the luminosity delivered by the LHC is pivotal for several key physics analyses. During the first three years of running, tremendous steps forward have been made in understanding the subtleties of luminosity monitoring and calibration, leading to an unprecedented accuracy at a hadron collider. The detectors and corresponding algorithms employed to estimate the luminosity in CMS, both online and offline, are described. Details are given on the procedure based on the Van der Meer scan technique, which allowed a very precise calibration of the luminometers from the determination of the LHC beam parameters. What is being prepared in terms of detector and online software upgrades for the next LHC run is also summarized.

  6. CMS hadronic forward calorimeter

    International Nuclear Information System (INIS)

    Merlo, J.P.

    1998-01-01

    Tests of quartz fiber prototypes, based on the detection of Cherenkov light from showering particles, demonstrate a detector possessing all of the desirable characteristics for a forward calorimeter. A prototype for the CMS experiment consists of 0.3 mm diameter fibers embedded in a copper matrix. The response to high energy (10-375 GeV) electrons, pions, protons and muons, the light yield, energy and position resolutions, and signal uniformity and linearity, are discussed. The signal generation mechanism gives this type of detector unique properties, especially for the detection of hadronic showers: Narrow, shallow shower profiles, hermeticity and extremely fast signals. The implications for measurements in the high-rate, high-radiation LHC environment are discussed. (orig.)

  7. Electroweak Results from CMS

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    We present recent CMS measurements of electroweak boson production, including single-, double-, and triple-boson final states. Electroweak processes span many orders of magnitude in production cross section. Measurements of high-rate processes provide stringent tests of the standard model. In addition, rare triboson processes and final states produced through vector boson scattering are newly accessible with the large integrated luminosity provided by the LHC. If new physics lies just beyond the reach of the LHC, its effects may manifest as enhancements of the high-energy kinematics in multiboson production. We present limits on new-physics signatures using an effective field theory which models these effects as modifications of the electroweak gauge couplings. Since electroweak measurements will continue to benefit from the increasing integrated luminosity provided by the LHC, the future prospects of electroweak physics are discussed.

  8. CMS pixel upgrade project

    CERN Document Server

    INSPIRE-00575876

    2011-01-01

    The LHC machine at CERN finished its first year of pp collisions at a center of mass energy of 7 TeV. While the commissioning to exploit its full potential is still ongoing, there are plans to upgrade its components to reach instantaneous luminosities beyond the initial design value after 2016. A corresponding upgrade of the innermost part of the CMS detector, the pixel detector, is needed. A full replacement of the pixel detector is planned in 2016. It will not only address limitations of the present system at higher data rates, but will aggressively lower the amount of material inside the fiducial tracking volume which will lead to better tracking and b-tagging performance. This article gives an overview of the project and illuminates the motivations and expected improvements in the detector performance.

  9. QCD measurements with the CMS detector

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    In the first year of LHC data taking, CMS pursued a rich program of QCD physics. On the low-pt front, results on momentum, pseudorapidity and multiplicity distributions of charged and strange hadrons, underlying-event observables, two-particle rapidity correlations and Bose-Einstein correlations are presented. On the high-pt front, jet and photon cross-section measurements are reported for inclusive and di-object production, as well as ratios of 3/2 jet cross sections. Finally, QCD multi-jet dynamics is explored with event-shape variables, dijet azimuthal decorrelations and dijet angular distributions.

  10. Distributed Computing on Gadgetron: A new paradigm for MRI reconstruction

    DEFF Research Database (Denmark)

    Xue, Hui; Kelmann, Peter; Inati, Souheil

    cloud computing. With this extension (named GT-Plus), any number of Gadgetron processes can run cooperatively across multiple computers. GT-Plus framework was deployed on Amazon EC2 cloud and NIH’s Biowulf system. We demonstrate that with the GT-Plus cloud, a multi-slice free-breathing myocardial cine...

  11. Integration of distributed computing into the drug discovery process.

    Science.gov (United States)

    von Korff, Modest; Rufener, Christian; Stritt, Manuel; Freyss, Joel; Bär, Roman; Sander, Thomas

    2011-02-01

    Grid computing offers an opportunity to gain massive computing power at low cost. We give a short introduction to the drug discovery process and exemplify the use of grid computing for image processing, docking and 3D pharmacophore descriptor calculations. The principle of a grid and its architecture are briefly explained. More emphasis is laid on the issues related to a company-wide grid installation and on embedding the grid into the research process. The future of grid computing in drug discovery is discussed in the expert opinion section. Most needed, besides reliable algorithms to predict compound properties, is seamless embedding of the grid into the discovery process. User-friendly access to powerful algorithms, without restrictions such as a limited number of licenses, has to be the goal of grid computing in drug discovery.

  12. Experience with a distributed computing system for magnetic field analysis

    International Nuclear Information System (INIS)

    Newman, M.J.

    1978-08-01

    The development of a general-purpose computer system, THESEUS, is described; its initial use has been magnetic field analysis. The system involves several computers connected by data links. Some are small computers with interactive graphics facilities and limited analysis capabilities, while others are large computers for batch execution of analysis programs with heavy processor demands. The system is highly modular for easy extension and highly portable for transfer to different computers. It can easily be adapted for a completely different application. It provides a highly efficient and flexible interface between magnet designers and specialised analysis programs. Both the advantages and the problems experienced are highlighted, together with possible future developments. (U.K.)

  13. The CMS software performance at the start of data taking

    CERN Document Server

    Benelli, Gabriele

    2009-01-01

    The CMS software framework (CMSSW) is a complex project evolving very rapidly as the first LHC colliding beams approach. The computing requirements constrain performance in terms of CPU time, memory footprint and event size on disk, to allow for planning and managing the computing infrastructure necessary to handle the needs of the experiment. A suite of performance tools has been developed to track all aspects of code performance through the software release cycles, allowing for regression testing and guiding code development for optimization. In this talk, we describe the CMSSW performance suite tools and present some sample performance results from the release-integration process for the CMS software.
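
    A hedged sketch of the kind of release-over-release regression check such a suite performs: compare per-release metrics against the previous release and flag changes beyond a tolerance. The metric names, numbers and threshold are invented for illustration and are not taken from CMSSW.

      # Illustrative release-over-release performance check (invented metrics and numbers).
      TOLERANCE = 0.05   # flag anything more than 5% worse than the previous release

      previous  = {"cpu_time_s_per_event": 2.10, "peak_rss_mb": 1450.0, "event_size_kb": 980.0}
      candidate = {"cpu_time_s_per_event": 2.35, "peak_rss_mb": 1460.0, "event_size_kb": 975.0}

      def regressions(prev, cand, tol=TOLERANCE):
          """Return metrics where the candidate release is worse by more than tol."""
          return {m: (prev[m], cand[m]) for m in prev
                  if (cand[m] - prev[m]) / prev[m] > tol}

      print(regressions(previous, candidate))   # {'cpu_time_s_per_event': (2.1, 2.35)}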

  14. Fourier coefficients computation in two variables, a distributional version

    Directory of Open Access Journals (Sweden)

    Carlos Manuel Ulate R.

    2015-01-01

    Full Text Available. In the present article, by considering the distributional summations of Euler-Maclaurin and a suitable choice of the distribution, representations for the Fourier coefficients in two variables are obtained. These representations may be used for the numerical evaluation of the coefficients.
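
    For orientation only (a standard definition, not a formula quoted from the article), the double Fourier coefficients that such representations approximate are, for a 2π-periodic function f(x, y),

      c_{mn} = \frac{1}{4\pi^{2}} \int_{-\pi}^{\pi}\!\int_{-\pi}^{\pi} f(x,y)\, e^{-i(mx+ny)}\, dx\, dy,
      \qquad
      f(x,y) \sim \sum_{m,n \in \mathbb{Z}} c_{mn}\, e^{i(mx+ny)}.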

  15. Ring interconnection for distributed memory automation and computing system

    Energy Technology Data Exchange (ETDEWEB)

    Vinogradov, V I [Inst. for Nuclear Research of the Russian Academy of Sciences, Moscow (Russian Federation)

    1996-12-31

    Problems in the development of measurement, acquisition and central systems based on a distributed memory and a ring interface are discussed. It has been found that the RAM LINK-type protocol can be used for ringlet links for interaction in a non-symmetrical distributed-memory multiprocessor system. 5 refs.

  16. Fourier coefficients computation in two variables, a distributional version

    OpenAIRE

    Carlos Manuel Ulate R.

    2015-01-01

    In the present article, by considering the distributional summations of Euler-Maclaurin and a suitable choice of the distribution, representations for the Fourier coefficients in two variables are obtained. These representations may be used for the numerical evaluation of the coefficients.

  17. CMS releases new batch of LHC open data

    CERN Document Server

    Achintya Rao

    2016-01-01

    CMS makes 300 TB of high-quality data from the LHC available to the public through the CERN Open Data Portal.   A CMS collision event as seen in the built-in event display on the CERN Open Data Portal (Image: CERN) The CMS collaboration has made 300 TB of high-quality data from the LHC available to the public through the CERN Open Data Portal. The collision data come in two types: The so-called “primary datasets” are in the same format used by the CMS Collaboration to perform research. The “derived datasets” on the other hand require a lot less computing power and can be readily analysed by university or even high-school students. Notably, CMS is also providing the simulated data generated with the same software version that should be used to analyse the primary datasets. Simulations play a crucial role in particle-physics research and CMS is also making available the protocols for generating the simulations that are provided. The data release is accompanie...

  18. Distributed Cognition (DCOG): Foundations for a Computational Associative Memory Model

    National Research Council Canada - National Science Library

    Eggleston, Robert G; McCreight, Katherine L

    2006-01-01

    .... In this report, we describe the foundations of a different type of computational architecture; one that we believe will be less susceptible to cognitive brittleness and can better scale to complex and ill-structured work domains...

  19. A Distributed Agent Architecture for a Computer Virus Immune System

    National Research Council Canada - National Science Library

    Harmer, Paul

    2000-01-01

    .... Information protection and information assurance are vital components required for achieving superiority in the Infosphere, but these goals are threatened by the exponential birth rate of new computer viruses...

  20. Building Trust and Confidentiality in Cloud computing Distributed ...

    African Journals Online (AJOL)

    2013-03-01

    Mar 1, 2013 ... Department of Computer Science, University of Port Harcourt, Rivers State ... considering the security and privacy of the information stored and processed within the cloud. .... protection (perhaps access control), through to.