WorldWideScience

Sample records for atlas ddm integration

  1. ATLAS DDM integration in ARC

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Cameron, David; Ellert, Mattias

    2008-01-01

    The Nordic Data Grid Facility (NDGF) consists of Grid resources running ARC middleware in Denmark, Finland, Norway and Sweden. These resources serve many virtual organisations and contribute a large fraction of total worldwide resources for the ATLAS experiment, whose data is distributed and managed by the DQ2 software. Managing ATLAS data within NDGF and between NDGF and other Grids used by ATLAS (the Enabling Grids for E-sciencE Grid and the Open Science Grid) presents a unique challenge for several reasons. Firstly, the entry point for data, the Tier 1 centre, is physically distributed...

  2. ATLAS DDM integration in ARC

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Cameron, David; Ellert, Mattias

    ...by the DQ2 software. Managing ATLAS data within NDGF and between NDGF and other Grids used by ATLAS (the LHC Computing Grid and the Open Science Grid) presents a unique challenge for several reasons. Firstly, the entry point for data, the Tier 1 centre, is physically distributed among heterogeneous...

  3. ATLAS DDM integration in ARC

    International Nuclear Information System (INIS)

    Behrmann, G; Cameron, D; Ellert, M; Kleist, J; Taga, A

    2008-01-01

    The Nordic Data Grid Facility (NDGF) consists of Grid resources running ARC middleware in Denmark, Finland, Norway and Sweden. These resources serve many virtual organisations and contribute a large fraction of total worldwide resources for the ATLAS experiment, whose data is distributed and managed by the DQ2 software. Managing ATLAS data within NDGF and between NDGF and other Grids used by ATLAS (the Enabling Grids for E-sciencE Grid and the Open Science Grid) presents a unique challenge for several reasons. Firstly, the entry point for data, the Tier 1 centre, is physically distributed among heterogeneous resources in several countries and yet must present a single access point for all data stored within the centre. The middleware framework used in NDGF differs significantly from other Grids, specifically in the way that all data movement and registration is performed by services outside the worker node environment. Also, the service used for cataloging the location of data files is different from other Grids but must still be usable by DQ2 and ATLAS users to locate data within NDGF. This paper presents in detail how we solve these issues to allow seamless access worldwide to data within NDGF.
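
    The record above describes one logical Tier 1 entry point backed by physically distributed storage. A minimal sketch of that idea, assuming invented sub-site names, catalogue contents and URL scheme (this is not NDGF or DQ2 code), is a resolver that hides the distribution behind a single lookup:

        # Hypothetical sub-site storage endpoints of the distributed Tier 1.
        SUBSITES = {
            "dk": "srm://srm.ndgf.org/dk/atlas",
            "fi": "srm://srm.ndgf.org/fi/atlas",
            "no": "srm://srm.ndgf.org/no/atlas",
            "se": "srm://srm.ndgf.org/se/atlas",
        }

        # Hypothetical catalogue: dataset -> (file name, sub-site holding it).
        CATALOGUE = {
            "mc08.105609.Pythia_Zmumu.evgen": [
                ("EVNT.000001.pool.root", "dk"),
                ("EVNT.000002.pool.root", "se"),
            ],
        }

        def resolve(dataset):
            """Return one physical URL per file, hiding where each copy lives."""
            return [SUBSITES[site] + "/" + dataset + "/" + fname
                    for fname, site in CATALOGUE.get(dataset, [])]

        for url in resolve("mc08.105609.Pythia_Zmumu.evgen"):
            print(url)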

  4. The ATLAS DDM Tracer monitoring framework

    CERN Document Server

    ZANG, D; The ATLAS collaboration; BARISITS, M; LASSNIG, M; Andrew STEWART, G; MOLFETAS, A; BEERMANN, T

    2012-01-01

    The DDM Tracer Service traces and monitors ATLAS file operations on the Worldwide LHC Computing Grid. The volume of traces has increased significantly since the service started in 2009: there are now about 5 million trace messages every day, with peaks above 250 Hz and peak rates continuing to climb, which poses a serious challenge to the current service structure. Analysis of large datasets based on on-demand queries to the relational database management system (RDBMS), i.e. Oracle, can be problematic and can significantly affect the database's performance. Consequently, we have investigated new high-availability technologies like messaging infrastructure, specifically ActiveMQ, and key-value stores. Key-value stores are distributed and highly scalable, and their write performance is usually much better than that of an RDBMS, all of which is very useful for the Tracer service. Indexes and distributed counters have also been tested to improve...

  5. Integration of the ATLAS tag database with data management and analysis components

    Energy Technology Data Exchange (ETDEWEB)

    Cranshaw, J; Malon, D [Argonne National Laboratory, Argonne, IL 60439 (United States); Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C [Department of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ, Scotland (United Kingdom)], E-mail: c.nicholson@physics.gla.ac.uk

    2008-07-15

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample, and thus the time taken for the analysis, can be greatly reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted.
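
    The dataset/event mismatch described here can be pictured with a short sketch: an event-level tag query returns (run, event, file GUID) tuples, which must be grouped back into DDM datasets to build grid job inputs. GUIDs, dataset names and the mapping are invented; this is not the TNT implementation:

        from collections import defaultdict

        # Hypothetical result of an event-level tag query: (run, event, GUID).
        tag_query_result = [
            (152166, 1001, "guid-aaa"),
            (152166, 1002, "guid-aaa"),
            (152166, 2117, "guid-bbb"),
        ]

        # Hypothetical file-GUID -> DDM dataset mapping.
        guid_to_dataset = {
            "guid-aaa": "data10_7TeV.00152166.physics_Muons.AOD",
            "guid-bbb": "data10_7TeV.00152166.physics_Muons.AOD",
        }

        def build_job_inputs(rows):
            """Group selected events by dataset and file, ready for job submission."""
            jobs = defaultdict(lambda: defaultdict(list))
            for run, event, guid in rows:
                jobs[guid_to_dataset[guid]][guid].append((run, event))
            return jobs

        for dataset, files in build_job_inputs(tag_query_result).items():
            print(dataset, {guid: len(events) for guid, events in files.items()})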

  6. Integration of the ATLAS tag database with data management and analysis components

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C

    2008-01-01

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample, and thus the time taken for the analysis, can be greatly reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted.

  7. The ATLAS DDM Tracer monitoring framework

    International Nuclear Information System (INIS)

    Zang Dongsong; Garonne, Vincent; Barisits, Martin; Lassnig, Mario; Andrew Stewart, Graeme; Molfetas, Angelos; Beermann, Thomas

    2012-01-01

    The DDM Tracer monitoring framework traces and monitors ATLAS file operations on the Worldwide LHC Computing Grid. The volume of traces has increased significantly since the framework was put in production in 2009: there are now about 5 million trace messages every day and peaks can approach 250 Hz, with peak rates continuing to climb, which poses a serious challenge to the current structure. Analysis of large datasets based on on-demand queries to the relational database management system (RDBMS), i.e. Oracle, can be problematic and can significantly affect the database's performance. Consequently, we have investigated new high-availability technologies like messaging infrastructure, specifically ActiveMQ, and key-value stores. Key-value stores are distributed and highly scalable, and their write performance is usually much better than that of an RDBMS, all of which is very useful for the Tracer monitoring framework. Indexes and distributed counters have also been tested; they improve query performance and provide almost real-time results. In this paper, the design principles, architecture and main characteristics of the Tracer monitoring framework will be described and examples of its usage will be presented.
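
    A stdlib-only sketch of the two ideas in this record, a message queue decoupling trace ingestion from the database and counters giving near-real-time statistics, is shown below. A real deployment would use ActiveMQ and a key-value store; all names and values here are invented:

        import json
        import queue
        import time
        from collections import Counter

        trace_queue = queue.Queue()   # stand-in for the message broker

        def emit_trace(operation, site):
            """Publish one trace message instead of writing straight to Oracle."""
            trace_queue.put(json.dumps({"op": operation, "site": site,
                                        "ts": time.time()}))

        def consume(counters):
            """Drain the queue and bump per-(operation, site) counters."""
            while not trace_queue.empty():
                msg = json.loads(trace_queue.get())
                counters[(msg["op"], msg["site"])] += 1

        counters = Counter()
        for _ in range(3):
            emit_trace("get", "CERN-PROD")
        emit_trace("put", "BNL-OSG2")
        consume(counters)
        print(counters)   # near-real-time view without an on-demand RDBMS scan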

  8. DDM Workload Emulation

    CERN Document Server

    Vigne, R; The ATLAS collaboration; Garonne, V; Stewart, G; Barisits, M; Beermann, T; Lassnig, M; Serfon, C; Goossens, L; Nairz, A

    2013-01-01

    Rucio is the successor of Don Quijote 2 (DQ2), the current distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are at the top of the list. Current expectations are that the amount of data will be three to four times what it is today by the end of 2014. Furthermore, the availability of more powerful computing resources puts additional pressure on the DDM system, as it increases the demands on data provisioning. Although DQ2 is capable of handling the current workload, it is already at its limits. To ensure that Rucio will be up to the expected workload, a way to emulate it is needed. To do so, the current workload, observed in DQ2, must first be understood in order to scale it up to future expectations. The paper discusses how selected core concepts are applied to the workload of the experiment and how knowledge about the current workload is derived from various...

  9. DDM Workload Emulation

    CERN Document Server

    Vigne, R; The ATLAS collaboration; Garonne, V; Stewart, G; Barisits, M; Beermann, T; Serfon, C; Goossens, L; Nairz, A

    2014-01-01

    Rucio is the successor of Don Quijote 2 (DQ2), the current distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are at the top of the list. Current expectations are that the amount of data will be three to four times what it is today by the end of 2014. Furthermore, the availability of more powerful computing resources puts additional pressure on the DDM system, as it increases the demands on data provisioning. Although DQ2 is capable of handling the current workload, it is already at its limits. To ensure that Rucio will be up to the expected workload, a way to emulate it is needed. To do so, the current workload, observed in DQ2, must first be understood in order to scale it up to future expectations. The paper discusses how selected core concepts are applied to the workload of the experiment and how knowledge about the current workload is derived from various...

  10. DDM Workload Emulation

    Science.gov (United States)

    Vigne, R.; Schikuta, E.; Garonne, V.; Stewart, G.; Barisits, M.; Beermann, T.; Lassnig, M.; Serfon, C.; Goossens, L.; Nairz, A.; Atlas Collaboration

    2014-06-01

    Rucio is the successor of Don Quijote 2 (DQ2), the current distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are at the top of the list. Current expectations are that the amount of data will be three to four times what it is today by the end of 2014. Furthermore, the availability of more powerful computing resources puts additional pressure on the DDM system, as it increases the demands on data provisioning. Although DQ2 is capable of handling the current workload, it is already at its limits. To ensure that Rucio will be up to the expected workload, a way to emulate it is needed. To do so, the current workload, observed in DQ2, must first be understood in order to scale it up to future expectations. The paper discusses how selected core concepts are applied to the workload of the experiment and how knowledge about the current workload is derived from various sources (e.g. analysing the central file catalogue logs). Finally, a description of the implemented emulation framework, used for stress-testing Rucio, is given.
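
    The step of deriving workload rates from catalogue logs and scaling them up might look like the following sketch. The log format and the growth factor are assumptions for illustration, not the paper's actual method:

        import re
        from collections import Counter

        # Hypothetical central-catalogue log excerpt: "<timestamp> <operation> ...".
        LOG = """\
        2013-04-01T12:00:00 list_replicas user=alice
        2013-04-01T12:00:01 list_replicas user=bob
        2013-04-01T12:00:01 add_replica user=prod
        """

        # Count operations observed in the log window.
        ops = Counter(re.match(r"\s*\S+ (\S+)", line).group(1)
                      for line in LOG.splitlines())

        SCALE = 3.5            # assumed growth factor for the emulated workload
        window_seconds = 2.0   # assumed observation window covered by the log

        for op, n in ops.items():
            print(f"{op}: observed {n / window_seconds:.1f} Hz, "
                  f"emulate at {SCALE * n / window_seconds:.1f} Hz")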

  11. DDM workload emulation

    International Nuclear Information System (INIS)

    Vigne, R; Schikuta, E; Garonne, V; Stewart, G; Barisits, M; Beermann, T; Lassnig, M; Serfon, C; Goossens, L; Nairz, A

    2014-01-01

    Rucio is the successor of Don Quijote 2 (DQ2), the current distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are at the top of the list. Current expectations are that the amount of data will be three to four times what it is today by the end of 2014. Furthermore, the availability of more powerful computing resources puts additional pressure on the DDM system, as it increases the demands on data provisioning. Although DQ2 is capable of handling the current workload, it is already at its limits. To ensure that Rucio will be up to the expected workload, a way to emulate it is needed. To do so, the current workload, observed in DQ2, must first be understood in order to scale it up to future expectations. The paper discusses how selected core concepts are applied to the workload of the experiment and how knowledge about the current workload is derived from various sources (e.g. analysing the central file catalogue logs). Finally, a description of the implemented emulation framework, used for stress-testing Rucio, is given.

  12. Experiences with the new ATLAS Distributed Data Management System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00214543; The ATLAS collaboration

    2016-01-01

    The ATLAS Distributed Data Management (DDM) system has evolved drastically in the last two years, with the Rucio software fully replacing the previous system before the start of LHC Run-2. The ATLAS DDM system now manages more than 200 petabytes spread across 130 storage sites and can handle file transfer rates of up to 30 Hz. In this talk, we discuss the experience acquired in developing, commissioning, running and maintaining such a large system. First, we describe the general architecture of the system, our integration with external services like the WLCG File Transfer Service and the evolution of the system over its first year of production. Then, we show the performance of the system, describe the integration of new technologies such as object stores, and outline future developments which mainly focus on performance and automation. Finally, we discuss the long-term evolution of ATLAS data management.

  13. Experiences with the new ATLAS Distributed Data Management System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00214543; The ATLAS collaboration; Serfon, Cedric; Barisits, Martin-Stefan; Lassnig, Mario; Beermann, Thomas; Guan, Wen

    2017-01-01

    The ATLAS Distributed Data Management (DDM) system has evolved drastically in the last two years, with the Rucio software fully replacing the previous system before the start of LHC Run-2. The ATLAS DDM system now manages more than 250 petabytes spread across 130 storage sites and can handle file transfer rates of up to 30 Hz. In this paper, we discuss the experience acquired in developing, commissioning, running and maintaining such a large system. First, we describe the general architecture of the system, our integration with external services like the WLCG File Transfer Service and the evolution of the system over its first years of production. Then, we show the performance of the system, describe the integration of new technologies such as object stores, and outline some new developments, which mainly focus on performance and automation.

  14. Rucio, the next-generation Data Management system in ATLAS

    CERN Document Server

    Serfon, C; The ATLAS collaboration; Beermann, T; Garonne, V; Goossens, L; Lassnig, M; Nairz, A; Vigne, R

    2014-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address HEP experiments' scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 160 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, cover new user requirements and employ a new automation framework to reduce operational overheads. In this talk, we will present the history of the DDM project and the experience of data management operation in ATLAS computing. We will then show the key concepts of Rucio, including its data organization. The Rucio design, and the technology it e...

  15. Sleep patterns, sleep disorders and mammographic density in spanish women: The DDM-Spain/Var-DDM study.

    Science.gov (United States)

    Pedraza-Flechas, Ana María; Lope, Virginia; Moreo, Pilar; Ascunce, Nieves; Miranda-García, Josefa; Vidal, Carmen; Sánchez-Contador, Carmen; Santamariña, Carmen; Pedraz-Pingarrón, Carmen; Llobet, Rafael; Aragonés, Nuria; Salas-Trejo, Dolores; Pollán, Marina; Pérez-Gómez, Beatriz

    2017-05-01

    We explored the relationship between sleep patterns and sleep disorders and mammographic density (MD), a marker of breast cancer risk. Participants in the DDM-Spain/var-DDM study, which included 2878 middle-aged Spanish women, were interviewed via telephone and asked questions on sleep characteristics. Two radiologists assessed MD in their left cranio-caudal mammogram, assisted by a validated semiautomatic computer tool (DM-scan). We used log-transformed percentage MD as the dependent variable and fitted mixed linear regression models, including known confounding variables. Our results showed that neither sleeping patterns nor sleep disorders were associated with MD. However, women with frequent changes in their bedtime due to anxiety or depression had higher MD (e^β: 1.53; 95% CI: 1.04-2.26).

  16. Rucio - The next generation large scale distributed system for ATLAS Data Management

    CERN Document Server

    Beermann, T; The ATLAS collaboration; Lassnig, M; Barisits, M; Vigne, R; Serfon, C; Stewart, G A; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address the ATLAS experiment's scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 150 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will deal with these issues by relying on new technologies to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads.
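
    For orientation, two data-organization concepts Rucio is known for, scoped data identifiers (scope:name) and declarative replication rules evaluated against current replica locations, can be caricatured in a few lines. This is a simplified sketch under those assumptions, not the Rucio API:

        from dataclasses import dataclass, field

        @dataclass
        class DID:
            """A scoped data identifier with the set of sites holding a replica."""
            scope: str
            name: str
            replicas: set = field(default_factory=set)

        @dataclass
        class Rule:
            """Declarative rule: keep `copies` replicas among candidate sites."""
            copies: int
            rse_expression: set   # candidate storage endpoints, already resolved

        def missing_transfers(did, rule):
            """Return sites where new replicas are needed to satisfy the rule."""
            have = did.replicas & rule.rse_expression
            need = rule.copies - len(have)
            candidates = sorted(rule.rse_expression - did.replicas)
            return candidates[:max(need, 0)]

        did = DID("data15_13TeV", "AOD.0001.pool.root", {"CERN-PROD"})
        rule = Rule(copies=2, rse_expression={"CERN-PROD", "BNL-OSG2", "NDGF-T1"})
        print(missing_transfers(did, rule))   # -> ['BNL-OSG2']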

  17. Next Generation PanDA Pilot for ATLAS and Other Experiments

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Caballero Bejar, J; De, K; Hover, J; Love, P; Maeno, T; Medrano Llamas, R; Walker, R; Wenaus, T

    2013-01-01

    The Production and Distributed Analysis system (PanDA) has been in use in the ATLAS Experiment since 2005. It uses a sophisticated pilot system to execute submitted jobs on the worker nodes. While originally designed for ATLAS, the PanDA Pilot has recently been refactored to facilitate use outside of ATLAS. Experiments are now handled as plug-ins, such that a new PanDA Pilot user only has to implement a set of prototyped methods in the plug-in classes and provide a script that configures and runs the experiment-specific payload. We will give an overview of the Next Generation PanDA Pilot system and will present major features and recent improvements including live user payload debugging, data access via the Federated XRootD system, stage-out to alternative storage elements, support for the new ATLAS DDM system (Rucio), and an improved integration with glExec, as well as a description of the experiment-specific plug-in classes. The performance of the pilot system in processing LHC data on the OSG, LCG and Nord...

  18. Next Generation PanDA Pilot for ATLAS and Other Experiments

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Caballero Bejar, J; De, K; Hover, J; Love, P; Maeno, T; Medrano Llamas, R; Walker, R; Wenaus, T

    2014-01-01

    The Production and Distributed Analysis system (PanDA) has been in use in the ATLAS Experiment since 2005. It uses a sophisticated pilot system to execute submitted jobs on the worker nodes. While originally designed for ATLAS, the PanDA Pilot has recently been refactored to facilitate use outside of ATLAS. Experiments are now handled as plug-ins, such that a new PanDA Pilot user only has to implement a set of prototyped methods in the plug-in classes and provide a script that configures and runs the experiment-specific payload. We will give an overview of the Next Generation PanDA Pilot system and will present major features and recent improvements including live user payload debugging, data access via the Federated XRootD system, stage-out to alternative storage elements, support for the new ATLAS DDM system (Rucio), and an improved integration with glExec, as well as a description of the experiment-specific plug-in classes. The performance of the pilot system in processing LHC data on the OSG, LCG and Nord...

  19. Next generation PanDA pilot for ATLAS and other experiments

    International Nuclear Information System (INIS)

    Nilsson, P; De, K; Megino, F Barreiro; Llamas, R Medrano; Bejar, J Caballero; Hover, J; Maeno, T; Wenaus, T; Love, P; Walker, R

    2014-01-01

    The Production and Distributed Analysis system (PanDA) has been in use in the ATLAS Experiment since 2005. It uses a sophisticated pilot system to execute submitted jobs on the worker nodes. While originally designed for ATLAS, the PanDA Pilot has recently been refactored to facilitate use outside of ATLAS. Experiments are now handled as plug-ins such that a new PanDA Pilot user only has to implement a set of prototyped methods in the plug-in classes, and provide a script that configures and runs the experiment-specific payload. We will give an overview of the Next Generation PanDA Pilot system and will present major features and recent improvements including live user payload debugging, data access via the Federated XRootD system, stage-out to alternative storage elements, support for the new ATLAS DDM system (Rucio), and an improved integration with glExec, as well as a description of the experiment-specific plug-in classes. The performance of the pilot system in processing LHC data on the OSG, LCG and Nordugrid infrastructures used by ATLAS will also be presented. We will describe plans for future development on the time scale of the next few years.
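
    The plug-in arrangement described here, a fixed set of prototyped methods implemented once per experiment, follows a standard pattern. A minimal sketch with invented method and job-field names (not the actual PanDA Pilot interfaces) could look like this:

        from abc import ABC, abstractmethod

        class ExperimentPlugin(ABC):
            """Methods the pilot calls; each experiment supplies an implementation."""

            @abstractmethod
            def get_payload_command(self, job):
                """Build the command line that runs the experiment payload."""

            @abstractmethod
            def validate_output(self, job):
                """Check the produced files before stage-out."""

        class AtlasPlugin(ExperimentPlugin):
            def get_payload_command(self, job):
                return f"athena.py {job['jobPars']}"

            def validate_output(self, job):
                return bool(job.get("outputFiles"))

        def run_pilot(plugin, job):
            """Pilot core: experiment-agnostic, delegates to the plug-in."""
            print("would execute:", plugin.get_payload_command(job))
            return plugin.validate_output(job)

        run_pilot(AtlasPlugin(), {"jobPars": "myJobOptions.py",
                                  "outputFiles": ["AOD.pool.root"]})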

  20. Machine Learning for ATLAS DDM Network Metrics

    CERN Document Server

    Lassnig, Mario; The ATLAS collaboration; Vamosi, Ralf

    2016-01-01

    The increasing volume of physics data is posing a critical challenge to the ATLAS experiment. In anticipation of high-luminosity physics, automation of everyday data management tasks has become necessary. Previously, many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from our ongoing automation efforts. First, we describe our framework for distributed data management and network metrics, which automatically extracts and aggregates data, trains models with various machine learning algorithms, and eventually scores the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.
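
    As a stand-in for the trained models mentioned above, a least-squares trend forecast over a metric time series shows the shape of such a forecasting step. The metric choice and data values are invented:

        def linear_forecast(series, steps_ahead=1):
            """Fit y = a*t + b by least squares and extrapolate steps_ahead."""
            n = len(series)
            ts = range(n)
            mean_t = sum(ts) / n
            mean_y = sum(series) / n
            cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, series))
            var = sum((t - mean_t) ** 2 for t in ts)
            a = cov / var
            b = mean_y - a * mean_t
            return a * (n - 1 + steps_ahead) + b

        # Hypothetical samples of achievable throughput between two sites.
        throughput_mbps = [820, 790, 845, 910, 880, 905]
        print(f"next-interval forecast: {linear_forecast(throughput_mbps):.0f} Mbps")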

  21. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications; however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...
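
    The scalability argument here, on-demand aggregation over transactional history versus incrementally maintained summaries in a key-value store, can be made concrete in a few lines. A plain dict stands in for a store like HBase; names and numbers are invented:

        from collections import defaultdict

        events = []                   # the "transactional" history table
        summary = defaultdict(int)    # key-value summary store, key = (site, day)

        def record_transfer(site, day, nbytes):
            events.append((site, day, nbytes))   # OLTP-style insert
            summary[(site, day)] += nbytes       # incremental aggregate, O(1) reads

        record_transfer("CERN-PROD", "2013-04-01", 4_000_000)
        record_transfer("CERN-PROD", "2013-04-01", 6_000_000)

        # On-demand scan (expensive at scale) vs precomputed summary (cheap):
        scan = sum(b for s, d, b in events if (s, d) == ("CERN-PROD", "2013-04-01"))
        print(scan, summary[("CERN-PROD", "2013-04-01")])   # both 10000000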

  22. HappyFace - progress and future development for the ATLAS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Nadal, Jordi; Quadt, Arnulf; Rzehorz, Gerhard [II. Physikalisches Institut, Georg-August-Universitat (Germany); Collaboration: ATLAS-Collaboration

    2015-07-01

    Nowadays, the HappyFace project aggregates, processes and stores information from different grid monitoring resources, as well as from the grid system itself, into a common database and displays status information through a single interface. The new implementation and architecture of HappyFace, the so-called grid-enabled HappyFace, provides direct access to the grid infrastructure. Different grid-enabled modules have been implemented: to view datasets of the ATLAS Distributed Data Management (DDM) system, to connect to the Ganga job monitoring system and to check the performance of grid transfers among the grid sites. The new HappyFace system has been successfully integrated. It now displays the information and the status of both the monitoring resources and the direct access to the grid user applications and the grid collective services in the ATLAS computing system.

  23. Popularity Prediction Tool for ATLAS Distributed Data Management

    CERN Document Server

    Beermann, T; The ATLAS collaboration; Stewart, G; Lassnig, M; Garonne, V; Barisits, M; Vigne, R; Serfon, C; Goossens, L; Nairz, A; Molfetas, A

    2013-01-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access of data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workload on different data distributions...

  24. Popularity Prediction Tool for ATLAS Distributed Data Management

    CERN Document Server

    Beermann, T; The ATLAS collaboration; Stewart, G; Lassnig, M; Garonne, V; Barisits, M; Vigne, R; Serfon, C; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access of data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workload on different data distributions...

  25. Atlas – a data warehouse for integrative bioinformatics

    Directory of Open Access Journals (Sweden)

    Yuen Macaire MS

    2005-02-01

    Background: We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. Description: The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion: The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data, enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First...
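
    The pattern this record describes, relational models plus retrieval APIs, can be sketched with sqlite3 from the Python standard library. Schema and rows are invented; Atlas itself provides C++, Java and Perl APIs over full instances of the source datasets:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE gene (id INTEGER PRIMARY KEY, symbol TEXT);
            CREATE TABLE annotation (gene_id INTEGER REFERENCES gene(id),
                                     go_term TEXT);
        """)
        # Loader-style inserts (a real loader would parse source datasets).
        conn.execute("INSERT INTO gene VALUES (1, 'TP53')")
        conn.execute("INSERT INTO annotation VALUES (1, 'GO:0006915')")

        def annotations_for(symbol):
            """Toolbox-style retrieval call joining the two tables."""
            rows = conn.execute("""SELECT a.go_term FROM annotation a
                                   JOIN gene g ON g.id = a.gene_id
                                   WHERE g.symbol = ?""", (symbol,))
            return [r[0] for r in rows]

        print(annotations_for("TP53"))   # ['GO:0006915']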

  26. Networks in ATLAS

    Science.gov (United States)

    McKee, Shawn; ATLAS Collaboration

    2017-10-01

    Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure, many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network-related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp up their need for similar networking, it becomes less clear that this situation will continue unchanged. What this means for ATLAS in particular needs to be understood. ATLAS has evolved its computing model since the LHC started, based upon its experience with using globally distributed resources. The most significant theme of those changes has been increased reliance upon, and use of, its networks. We will report on a number of networking initiatives in ATLAS including participation in the global perfSONAR network monitoring and measuring efforts of WLCG and OSG, the collaboration with the LHCOPN/LHCONE effort, the integration of network awareness into PanDA, the use of the evolving ATLAS analytics framework to better understand our networks and the changes in our DDM system to allow remote access to data. We will also discuss new efforts underway that are exploring the inclusion and use of software defined networks (SDN) and how ATLAS might benefit from:
    • Orchestration and optimization of distributed data access and data movement.
    • Better control of workflows, end to end.
    • Enabling prioritization of time-critical vs normal tasks.
    • Improvements in the efficiency of resource usage.

  27. The ATLAS Distributed Data Management project: Past and Future

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Lassnig, M; Molfetas, A; Barisits, M; Beermann, T; Nairz, A; Goossens, L; Barreiro Megino, F; Serfon, C; Oleynik, D; Petrosyan, A

    2012-01-01

    ATLAS has recorded almost 8 PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 90 PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All this data is managed by the ATLAS Distributed Data Management system, called Don Quijote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations manage these large quantities of data across the many grid sites at which ATLAS runs, and to help ATLAS physicists get access to this data. In this paper, we describe new and improved DQ2 services, and the experience of data management operation in ATLAS computing, showing how these services enable the management of petabyte-scale computing operations. We also present the concepts of the new version of the ATLAS Distributed Data Management (DDM) system, Rucio.

  28. The ATLAS Distributed Data Management project: Past and Future

    International Nuclear Information System (INIS)

    Garonne, Vincent; Stewart, Graeme A; Lassnig, Mario; Molfetas, Angelos; Barisits, Martin; Beermann, Thomas; Nairz, Armin; Goossens, Luc; Barreiro Megino, Fernando; Serfon, Cedric; Oleynik, Danila; Petrosyan, Artem

    2012-01-01

    ATLAS has recorded more than 8 petabytes (PB) of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 90 PB are currently stored in the Worldwide LHC Computing Grid by ATLAS. All these data are managed by the ATLAS Distributed Data Management system, called Don Quijote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations manage these large quantities of data across the many grid sites at which ATLAS runs, and to help ATLAS physicists get access to these data. In this paper, we describe new and improved DQ2 services, and the experience of data management operation in ATLAS computing, showing how these services enable the management of PB-scale computing operations. We also present the concepts of the new version of the ATLAS Distributed Data Management (DDM) system, Rucio.

  29. The ATLAS Distributed Data Management project: Past and Future

    Science.gov (United States)

    Garonne, Vincent; Stewart, Graeme A.; Lassnig, Mario; Molfetas, Angelos; Barisits, Martin; Beermann, Thomas; Nairz, Armin; Goossens, Luc; Barreiro Megino, Fernando; Serfon, Cedric; Oleynik, Danila; Petrosyan, Artem

    2012-12-01

    ATLAS has recorded more than 8 petabytes (PB) of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 90 PB are currently stored in the Worldwide LHC Computing Grid by ATLAS. All these data are managed by the ATLAS Distributed Data Management system, called Don Quijote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations manage these large quantities of data across the many grid sites at which ATLAS runs, and to help ATLAS physicists get access to these data. In this paper, we describe new and improved DQ2 services, and the experience of data management operation in ATLAS computing, showing how these services enable the management of PB-scale computing operations. We also present the concepts of the new version of the ATLAS Distributed Data Management (DDM) system, Rucio.

  30. Popularity Prediction Tool for ATLAS Distributed Data Management

    Science.gov (United States)

    Beermann, T.; Maettig, P.; Stewart, G.; Lassnig, M.; Garonne, V.; Barisits, M.; Vigne, R.; Serfon, C.; Goossens, L.; Nairz, A.; Molfetas, A.; Atlas Collaboration

    2014-06-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access of data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workload on different data distributions. This article describes the popularity prediction method and the simulator that is used to evaluate the redistribution.
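
    The second step described above, weighing the cost of a new replica against the gain from predicted accesses, can be illustrated with a deliberately naive trend predictor standing in for the neural networks. All thresholds and numbers are invented:

        def predict_accesses(history):
            """Naive stand-in predictor: project the recent trend forward."""
            return max(0, history[-1] + (history[-1] - history[-2]))

        def should_replicate(history, dataset_gb, gain_per_access=0.5,
                             cost_per_gb=0.1):
            """Replicate if the expected access gain outweighs the creation cost."""
            predicted = predict_accesses(history)
            return predicted * gain_per_access > dataset_gb * cost_per_gb, predicted

        history = [12, 30, 55]   # hypothetical weekly accesses of one dataset
        ok, predicted = should_replicate(history, dataset_gb=200)
        print(f"predicted {predicted} accesses -> replicate: {ok}")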

  31. Popularity prediction tool for ATLAS distributed data management

    International Nuclear Information System (INIS)

    Beermann, T; Maettig, P; Stewart, G; Lassnig, M; Garonne, V; Barisits, M; Vigne, R; Serfon, C; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access of data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workload on different data distributions. This article describes the popularity prediction method and the simulator that is used to evaluate the redistribution.

  32. Rucio - The next generation of large scale distributed system for ATLAS Data Management

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Beermann, T; Goossens, L; Lassnig, M; Nairz, A; Stewart, GA; Vigne, V; Serfon, C

    2013-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address HEP experiments' scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 140 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will address these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization/representation and a model of how ATLAS central group and user activities will be managed. The Rucio design, and the technology it employs, is described...

  33. Rucio - The next generation of large scale distributed system for ATLAS Data Management

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Beermann, T; Goossens, L; Lassnig, M; Nairz, A; Stewart, GA; Vigne, V; Serfon, C

    2014-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address HEP experiments' scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 140 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will address these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization/representation and a model of how ATLAS central group and user activities will be managed. The Rucio design, and the technology it employs, is described...

  34. ATLAS DQ2 Deletion Service

    International Nuclear Information System (INIS)

    Oleynik, Danila; Petrosyan, Artem; Garonne, Vincent; Campana, Simone

    2012-01-01

    The ATLAS Distributed Data Management project DQ2 is responsible for the replication, access and bookkeeping of ATLAS data across more than 100 distributed grid sites. It also enforces data management policies decided on by the collaboration and defined in the ATLAS computing model. The DQ2 Deletion Service is one of the most important DDM services. This distributed service interacts with 3rd-party grid middleware and the DQ2 catalogues to serve data deletion requests on the grid. Furthermore, it also takes care of retry strategies, check-pointing transactions, load management and fault tolerance. In this paper special attention is paid to the technical details which are used to achieve the high performance of the service, accomplished without overloading either site storage, catalogues or other DQ2 components. Special attention is also paid to the deletion monitoring service that allows operators a detailed view of the working system.
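
    The retry strategies and load management mentioned here can be sketched as a worker loop with bounded retries, progressive backoff and a per-site cap on in-flight deletions. The delete_fn callable and all parameters are invented; this is not the DQ2 implementation:

        import time

        def process_deletions(requests, delete_fn, max_retries=3,
                              max_per_site=2, backoff=0.1):
            """Serve (site, replica) deletion requests without overloading a site."""
            in_flight = {}   # site -> current number of active deletions
            for site, replica in requests:
                if in_flight.get(site, 0) >= max_per_site:
                    print(f"{site}: throttled, re-queue {replica}")
                    continue
                in_flight[site] = in_flight.get(site, 0) + 1
                for attempt in range(1, max_retries + 1):
                    try:
                        delete_fn(site, replica)
                        print(f"{site}: deleted {replica}")
                        break
                    except OSError:
                        time.sleep(backoff * attempt)   # progressive backoff
                else:
                    print(f"{site}: gave up on {replica}")
                in_flight[site] -= 1

        def flaky_delete(site, replica):
            """Stand-in for the grid middleware call; fails for some replicas."""
            if replica.endswith("bad"):
                raise OSError("storage timeout")

        process_deletions([("NDGF-T1", "f1"), ("NDGF-T1", "f2.bad")], flaky_delete)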

  35. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria

    2016-01-01

    AGIS is the information system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as the flexible use of opportunistic Cloud and HPC resources, the integration of ObjectStore services into the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified declaration of storage protocols required for PanDA Pilot site movers, among others.

  36. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    Science.gov (United States)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. As an intermediate middleware system between clients and external information sources (like the central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific resources and physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving to fit newer requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as the flexible use of opportunistic Cloud and HPC resources, the integration of ObjectStore services into the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified declaration of storage protocols required for PanDA Pilot site movers. The improvements of the information model and general updates are also shown; in particular, we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.
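
    The role of such a central catalogue can be pictured with a toy topology query: clients ask one registry instead of each external source. Entries and field names are invented, not the AGIS schema:

        # Hypothetical central topology registry relating sites to services.
        TOPOLOGY = {
            "sites": {
                "CERN-PROD": {"cloud": "CERN", "services": ["SRM", "XROOTD"],
                              "state": "ACTIVE"},
                "NDGF-T1":   {"cloud": "ND", "services": ["SRM"],
                              "state": "ACTIVE"},
            }
        }

        def find_sites(service=None, state="ACTIVE"):
            """Topology query as a client (e.g. PanDA or Rucio) might issue it."""
            return [name for name, s in TOPOLOGY["sites"].items()
                    if s["state"] == state
                    and (service is None or service in s["services"])]

        print(find_sites(service="XROOTD"))   # ['CERN-PROD']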

  37. ATLAS Solenoid Integration

    CERN Multimedia

    Ruber, R

    Last month the central solenoid was installed in the barrel cryostat, which it shares with the liquid argon calorimeter.

    Figure 1: Some members of the solenoid and liquid argon teams proudly pose in front of the barrel cryostat, complete with detector and magnet.

    Some two years ago the central solenoid arrived at CERN after being manufactured and tested in Japan. It was kept in storage until last October, when it was finally moved to the barrel cryostat integration area. Here a position survey of the solenoid (with respect to the cryostat's inner warm vessel) was performed.

    Figure 2: The alignment survey by Dirk Mergelkuhl and Aude Wiart (EST-SU).

    At the start of the New Year the solenoid was moved to the cryostat insertion stand.

    Figure 3: The solenoid on the insertion stand, with Akira Yamamoto, the solenoid designer and project leader.

    Figure 4: Taka Kondo, ATLAS Japan spokesperson, and Shoichi Mizumaki, Toshiba project engineer for the ATLAS solenoid, celebrate the insertion.

    Aft...

  38. A Roadmap to Continuous Integration for ATLAS Software Development

    Science.gov (United States)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The ATLAS software infrastructure facilitates the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million lines of C++ and 1.4 million lines of Python code. The ATLAS offline code management system is the powerful, flexible framework for processing requests for new package versions, probing code changes in the Nightly Build System, migrating to new platforms and compilers, deploying production releases for worldwide access and supporting physicists with tools and interfaces for efficient software use. It maintains a multi-stream, parallel development environment with about 70 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, for verifying patches to existing software and for migrating to new platforms and compilers. The system's evolution is currently aimed at the adoption of modern continuous integration (CI) practices focused on building nightly releases early and often, with rigorous unit and integration testing. This paper describes the CI incorporation program for the ATLAS software infrastructure. It brings modern open-source tools such as Jenkins and GitLab into the ATLAS Nightly System, rationalizes hardware resource allocation and administrative operations, and provides improved feedback and means for developers to fix broken builds promptly. Once adopted, ATLAS CI practices will improve and accelerate innovation cycles and result in increased confidence in new software deployments. The paper reports the status of Jenkins integration with the ATLAS Nightly System as well as short- and long-term plans for the incorporation of CI practices.

  39. Production and integration of the ATLAS Insertable B-Layer

    Science.gov (United States)

    Abbott, B.; Albert, J.; Alberti, F.; Alex, M.; Alimonti, G.; Alkire, S.; Allport, P.; Altenheiner, S.; Ancu, L. S.; Anderssen, E.; Andreani, A.; Andreazza, A.; Axen, B.; Arguin, J.; Backhaus, M.; Balbi, G.; Ballansat, J.; Barbero, M.; Barbier, G.; Bassalat, A.; Bates, R.; Baudin, P.; Battaglia, M.; Beau, T.; Beccherle, R.; Bell, A.; Benoit, M.; Bermgan, A.; Bertsche, C.; Bertsche, D.; Bilbao de Mendizabal, J.; Bindi, F.; Bomben, M.; Borri, M.; Bortolin, C.; Bousson, N.; Boyd, R. G.; Breugnon, P.; Bruni, G.; Brossamer, J.; Bruschi, M.; Buchholz, P.; Budun, E.; Buttar, C.; Cadoux, F.; Calderini, G.; Caminada, L.; Capeans, M.; Carney, R.; Casse, G.; Catinaccio, A.; Cavalli-Sforza, M.; Červ, M.; Cervelli, A.; Chau, C. C.; Chauveau, J.; Chen, S. P.; Chu, M.; Ciapetti, M.; Cindro, V.; Citterio, M.; Clark, A.; Cobal, M.; Coelli, S.; Collot, J.; Crespo-Lopez, O.; Dalla Betta, G. F.; Daly, C.; D'Amen, G.; Dann, N.; Dao, V.; Darbo, G.; DaVia, C.; David, P.; Debieux, S.; Delebecque, P.; De Lorenzi, F.; de Oliveira, R.; Dette, K.; Dietsche, W.; Di Girolamo, B.; Dinu, N.; Dittus, F.; Diyakov, D.; Djama, F.; Dobos, D.; Dondero, P.; Doonan, K.; Dopke, J.; Dorholt, O.; Dube, S.; Dzahini, D.; Egorov, K.; Ehrmann, O.; Einsweiler, K.; Elles, S.; Elsing, M.; Eraud, L.; Ereditato, A.; Eyring, A.; Falchieri, D.; Falou, A.; Fausten, C.; Favareto, A.; Favre, Y.; Feigl, S.; Fernandez Perez, S.; Ferrere, D.; Fleury, J.; Flick, T.; Forshaw, D.; Fougeron, D.; Franconi, L.; Gabrielli, A.; Gaglione, R.; Gallrapp, C.; Gan, K. K.; Garcia-Sciveres, M.; Gariano, G.; Gastaldi, T.; Gavrilenko, I.; Gaudiello, A.; Geffroy, N.; Gemme, C.; Gensolen, F.; George, M.; Ghislain, P.; Giangiacomi, N.; Gibson, S.; Giordani, M. P.; Giugni, D.; Gjersdal, H.; Glitza, K. W.; Gnani, D.; Godlewski, J.; Gonella, L.; Gonzalez-Sevilla, S.; Gorelov, I.; Gorišek, A.; Gössling, C.; Grancagnolo, S.; Gray, H.; Gregor, I.; Grenier, P.; Grinstein, S.; Gris, A.; Gromov, V.; Grondin, D.; Grosse-Knetter, J.; Guescini, F.; Guido, E.; Gutierrez, P.; Hallewell, G.; Hartman, N.; Hauck, S.; Hasi, J.; Hasib, A.; Hegner, F.; Heidbrink, S.; Heim, T.; Heinemann, B.; Hemperek, T.; Hessey, N. P.; Hetmánek, M.; Hinman, R. R.; Hoeferkamp, M.; Holmes, T.; Hostachy, J.; Hsu, S. C.; Hügging, F.; Husi, C.; Iacobucci, G.; Ibragimov, I.; Idarraga, J.; Ikegami, Y.; Ince, T.; Ishmukhametov, R.; Izen, J. M.; Janoška, Z.; Janssen, J.; Jansen, L.; Jeanty, L.; Jensen, F.; Jentzsch, J.; Jezequel, S.; Joseph, J.; Kagan, H.; Kagan, M.; Karagounis, M.; Kass, R.; Kastanas, A.; Kenney, C.; Kersten, S.; Kind, P.; Klein, M.; Klingenberg, R.; Kluit, R.; Kocian, M.; Koffeman, E.; Korchak, O.; Korolkov, I.; Kostyukhina-Visoven, I.; Kovalenko, S.; Kretz, M.; Krieger, N.; Krüger, H.; Kruth, A.; Kugel, A.; Kuykendall, W.; La Rosa, A.; Lai, C.; Lantzsch, K.; Lapoire, C.; Laporte, D.; Lari, T.; Latorre, S.; Leyton, M.; Lindquist, B.; Looper, K.; Lopez, I.; Lounis, A.; Lu, Y.; Lubatti, H. 
J.; Maeland, S.; Maier, A.; Mallik, U.; Manca, F.; Mandelli, B.; Mandić, I.; Marchand, D.; Marchiori, G.; Marx, M.; Massol, N.; Mättig, P.; Mayer, J.; McGoldrick, G.; Mekkaoui, A.; Menouni, M.; Menu, J.; Meroni, C.; Mesa, J.; Michal, S.; Miglioranzi, S.; Mikuž, M.; Miucci, A.; Mochizuki, K.; Monti, M.; Moore, J.; Morettini, P.; Morley, A.; Moss, J.; Muenstermann, D.; Murray, P.; Nakamura, K.; Nellist, C.; Nelson, D.; Nessi, M.; Nisius, R.; Nordberg, M.; Nuiry, F.; Obermann, T.; Ockenfels, W.; Oide, H.; Oriunno, M.; Ould-Saada, F.; Padilla, C.; Pangaud, P.; Parker, S.; Pelleriti, G.; Pernegger, H.; Piacquadio, G.; Picazio, A.; Pohl, D.; Polini, A.; Pons, X.; Popule, J.; Portell Bueso, X.; Potamianos, K.; Povoli, M.; Puldon, D.; Pylypchenko, Y.; Quadt, A.; Quayle, B.; Rarbi, F.; Ragusa, F.; Rambure, T.; Richards, E.; Riegel, C.; Ristic, B.; Rivière, F.; Rizatdinova, F.; RØhne, O.; Rossi, C.; Rossi, L. P.; Rovani, A.; Rozanov, A.; Rubinskiy, I.; Rudolph, M. S.; Rummler, A.; Ruscino, E.; Sabatini, F.; Salek, D.; Salzburger, A.; Sandaker, H.; Sannino, M.; Sanny, B.; Scanlon, T.; Schipper, J.; Schmidt, U.; Schneider, B.; Schorlemmer, A.; Schroer, N.; Schwemling, P.; Sciuccati, A.; Seidel, S.; Seiden, A.; Šícho, P.; Skubic, P.; Sloboda, M.; Smith, D. S.; Smith, M.; Sood, A.; Spencer, E.; Stramaglia, M.; Strauss, M.; Stucci, S.; Stugu, B.; Stupak, J.; Styles, N.; Su, D.; Takubo, Y.; Tassan, J.; Teng, P.; Teixeira, A.; Terzo, S.; Therry, X.; Todorov, T.; Tomášek, M.; Toms, K.; Travaglini, R.; Trischuk, W.; Troncon, C.; Troska, G.; Tsiskaridze, S.; Tsurin, I.; Tsybychev, D.; Unno, Y.; Vacavant, L.; Verlaat, B.; Vigeolas, E.; Vogt, M.; Vrba, V.; Vuillermet, R.; Wagner, W.; Walkowiak, W.; Wang, R.; Watts, S.; Weber, M. S.; Weber, M.; Weingarten, J.; Welch, S.; Wenig, S.; Wensing, M.; Wermes, N.; Wittig, T.; Wittgen, M.; Yildizkaya, T.; Yang, Y.; Yao, W.; Yi, Y.; Zaman, A.; Zaidan, R.; Zeitnitz, C.; Ziolkowski, M.; Zivkovic, V.; Zoccoli, A.; Zwalinski, L.

    2018-05-01

    During the shutdown of the CERN Large Hadron Collider in 2013-2014, an additional pixel layer was installed between the existing Pixel detector of the ATLAS experiment and a new, smaller radius beam pipe. The motivation for this new pixel layer, the Insertable B-Layer (IBL), was to maintain or improve the robustness and performance of the ATLAS tracking system, given the higher instantaneous and integrated luminosities realised following the shutdown. Because of the extreme radiation and collision rate environment, several new radiation-tolerant sensor and electronic technologies were utilised for this layer. This paper reports on the IBL construction and integration prior to its operation in the ATLAS detector.

  40. Rucio, the next-generation Data Management system in ATLAS

    Science.gov (United States)

    Serfon, C.; Barisits, M.; Beermann, T.; Garonne, V.; Goossens, L.; Lassnig, M.; Nairz, A.; Vigne, R.; ATLAS Collaboration

    2016-04-01

    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address HEP experiments' scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 160 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, cover new user requirements and employ a new automation framework to reduce operational overheads. This paper shows the key concepts of Rucio, details the Rucio design and the technology it employs, the tests that were conducted to validate it, and finally describes the migration steps that were conducted to move from DQ2 to Rucio.

  1. Rucio, the next-generation Data Management system in ATLAS

    CERN Document Server

    Serfon, C; Beermann, T; Garonne, V; Goossens, L; Lassnig, M; Nairz, A; Vigne, R

    2016-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system benefiting from recent advances in cloud and "Big Data" computing to address the scaling requirements of HEP experiments. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 160 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, cover new user requirements and employ a new automation framework to reduce operational overheads. This paper presents the key concepts of Rucio, details its design and the technology it employs, describes the tests conducted to validate it, and finally outlines the migration steps followed to move from DQ2 to Rucio.

  2. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    OpenAIRE

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and s...

  3. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    Science.gov (United States)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  4. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    International Nuclear Information System (INIS)

    Campana, S

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  5. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00291854; The ATLAS collaboration; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between the experiment-specific resources used and physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computin...
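
    As an illustration of how clients consume such a central information system, the sketch below pulls a site list as JSON and groups it by cloud; the URL and JSON field names are indicative only and may differ from the real AGIS API.

        # Sketch of a client of an AGIS-like information system.
        # URL and JSON field names are indicative, not the real AGIS API.
        import requests

        AGIS_URL = "https://agis.example.cern.ch/request/site/query/list/?json"

        def sites_by_cloud():
            """Group computing sites by the cloud they belong to."""
            clouds = {}
            for site in requests.get(AGIS_URL).json():
                clouds.setdefault(site.get("cloud", "unknown"), []).append(site["name"])
            return clouds

        print(sites_by_cloud())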

  6. Integration of Detectors Into a Large Experiment: Examples From ATLAS and CMS

    CERN Document Server

    Froidevaux, D

    2011-01-01

    Integration of Detectors Into a Large Experiment: Examples From ATLAS and CMS, part of 'Landolt-Börnstein - Group I Elementary Particles, Nuclei and Atoms: Numerical Data and Functional Relationships in Science and Technology, Volume 21B2: Detectors for Particles and Radiation. Part 2: Systems and Applications'. This document is part of Part 2 'Principles and Methods' of Subvolume B 'Detectors for Particles and Radiation' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Chapter '5 Integration of Detectors Into a Large Experiment: Examples From ATLAS and CMS' with the content: 5 Integration of Detectors Into a Large Experiment: Examples From ATLAS and CMS 5.1 Introduction 5.1.1 The context 5.1.2 The main initial physics goals of ATLAS and CMS at the LHC 5.1.3 A snapshot of the current status of the ATLAS and CMS experiments 5.2 Overall detector concept and magnet systems 5.2.1 Overall detector concept 5.2.2 Magnet systems 5.2.2.1 Rad...

  7. Cyberinfrastructure for the digital brain: spatial standards for integrating rodent brain atlases.

    Science.gov (United States)

    Zaslavsky, Ilya; Baldock, Richard A; Boline, Jyl

    2014-01-01

    Biomedical research entails capture and analysis of massive data volumes and new discoveries arise from data integration and mining. This is only possible if data can be mapped onto a common framework such as the genome for genomic data. In neuroscience, the framework is intrinsically spatial and based on a number of paper atlases. This cannot meet today's data-intensive analysis and integration challenges. A scalable and extensible software infrastructure that is standards-based but open to novel data and resources is required for integrating information such as signal distributions, gene expression, neuronal connectivity, electrophysiology, anatomy, and developmental processes. Therefore, the International Neuroinformatics Coordinating Facility (INCF) initiated the development of a spatial framework for neuroscience data integration with an associated Digital Atlasing Infrastructure (DAI). A prototype implementation of this infrastructure for the rodent brain is reported here. The infrastructure is based on a collection of reference spaces to which data is mapped at the required resolution, such as the Waxholm Space (WHS), a 3D reconstruction of the brain generated using high-resolution, multi-channel microMRI. The core standards of the digital atlasing service-oriented infrastructure include the Waxholm Markup Language (WaxML): an XML schema expressing a uniform information model for key elements such as coordinate systems, transformations, points of interest (POIs), labels, and annotations; and Atlas Web Services: interfaces for querying and updating atlas data. The services return WaxML-encoded documents with information about capabilities, spatial reference systems (SRSs) and structures, and execute coordinate transformations and POI-based requests. Key elements of the INCF-DAI cyberinfrastructure have been prototyped for both mouse and rat brain atlas sources, including the Allen Mouse Brain Atlas, UCSD Cell-Centered Database, and Edinburgh Mouse Atlas Project.
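
    To illustrate the flavour of a WaxML point-of-interest document tied to a named reference space, here is a small sketch; the element and attribute names are guesses based on the description above, not the actual WaxML schema.

        # Sketch of building a WaxML-style POI document.
        # Element and attribute names are illustrative, not the real schema.
        import xml.etree.ElementTree as ET

        def poi_document(srs_name, x, y, z, label):
            poi = ET.Element("POI", srsName=srs_name)   # reference space, e.g. WHS
            coord = ET.SubElement(poi, "coordinate")
            for axis, value in zip(("x", "y", "z"), (x, y, z)):
                ET.SubElement(coord, axis).text = str(value)
            ET.SubElement(poi, "label").text = label
            return ET.tostring(poi, encoding="unicode")

        print(poi_document("WaxholmSpace", 1.2, -3.4, 0.5, "hippocampus"))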

  8. Cyberinfrastructure for the digital brain: spatial standards for integrating rodent brain atlases

    Directory of Open Access Journals (Sweden)

    Ilya eZaslavsky

    2014-09-01

    Full Text Available Biomedical research entails capture and analysis of massive data volumes and new discoveries arise from data integration and mining. This is only possible if data can be mapped onto a common framework such as the genome for genomic data. In neuroscience, the framework is intrinsically spatial and based on a number of paper atlases. This cannot meet today’s data-intensive analysis and integration challenges. A scalable and extensible software infrastructure that is standards-based but open to novel data and resources is required for integrating information such as signal distributions, gene expression, neuronal connectivity, electrophysiology, anatomy, and developmental processes. Therefore, the International Neuroinformatics Coordinating Facility (INCF) initiated the development of a spatial framework for neuroscience data integration with an associated Digital Atlasing Infrastructure (DAI). A prototype implementation of this infrastructure for the rodent brain is reported here. The infrastructure is based on a collection of reference spaces to which data is mapped at the required resolution, such as the Waxholm Space (WHS), a 3D reconstruction of the brain generated using high-resolution, multi-channel microMRI. The core standards of the digital atlasing service-oriented infrastructure include the Waxholm Markup Language (WaxML): an XML schema expressing a uniform information model for key elements such as coordinate systems, transformations, points of interest (POIs), labels, and annotations; and Atlas Web Services: interfaces for querying and updating atlas data. The services return WaxML-encoded documents with information about capabilities, spatial reference systems and structures, and execute coordinate transformations and POI-based requests. Key elements of the INCF-DAI cyberinfrastructure have been prototyped for both mouse and rat brain atlas sources, including the Allen Mouse Brain Atlas, UCSD Cell-Centered Database, and Edinburgh Mouse Atlas

  9. Integration and test of the ATLAS Semiconductor Tracker

    CERN Document Server

    Pernegger, H

    2007-01-01

    The ATLAS Semiconductor Tracker (SCT) will be a central part of the tracking system of the ATLAS experiment and is one of the major new silicon detector systems for LHC. The paper summarizes the system integration of the SCT from individual components to the completed tracker barrel and endcaps ready for installation in the pit. Particular attention will be given to the test results obtained during the different integration steps: from single barrels and disks to the final tests inside the ID before installation in the pit. The tests provided us with operational experience for a significant fraction of the full detector system and showed the very good performance of the final assembled detector.

  10. arXiv Production and Integration of the ATLAS Insertable B-Layer

    CERN Document Server

    Abbott, B.; Alberti, F.; Alex, M.; Alimonti, G.; Alkire, S.; Allport, P.; Altenheiner, S.; Ancu, L.S.; Anderssen, E.; Andreani, A.; Andreazza, A.; Axen, B.; Arguin, J.; Backhaus, M.; Balbi, G.; Ballansat, J.; Barbero, M.; Barbier, G.; Bassalat, A.; Bates, R.; Baudin, P.; Battaglia, M.; Beau, T.; Beccherle, R.; Bell, A.; Benoit, M.; Bermgan, A.; Bertsche, C.; Bertsche, D.; Bilbao de Mendizabal, J.; Bindi, F.; Bomben, M.; Borri, M.; Bortolin, C.; Bousson, N.; Boyd, R.G.; Breugnon, P.; Bruni, G.; Brossamer, J.; Bruschi, M.; Buchholz, P.; Budun, E.; Buttar, C.; Cadoux, F.; Calderini, G.; Caminada, L.; Capeans, M.; Carney, R.; Casse, G.; Catinaccio, A.; Cavalli-Sforza, M.; Červ, M.; Cervelli, A.; Chau, C.C.; Chauveau, J.; Chen, S.P.; Chu, M.; Ciapetti, M.; Cindro, V.; Citterio, M.; Clark, A.; Cobal, M.; Coelli, S.; Collot, J.; Crespo-Lopez, O.; Dalla Betta, G.F.; Daly, C.; D'Amen, G.; Dann, N.; Dao, V.; Darbo, G.; DaVia, C.; David, P.; Debieux, S.; Delebecque, P.; De Lorenzi, F.; de Oliveira, R.; Dette, K.; Dietsche, W.; Di Girolamo, B.; Dinu, N.; Dittus, F.; Diyakov, D.; Djama, F.; Dobos, D.; Dondero, P.; Doonan, K.; Dopke, J.; Dorholt, O.; Dube, S.; Dzahini, D.; Egorov, K.; Ehrmann, O.; Einsweiler, K.; Elles, S.; Elsing, M.; Eraud, L.; Ereditato, A.; Eyring, A.; Falchieri, D.; Falou, A.; Fausten, C.; Favareto, A.; Favre, Y.; Feigl, S.; Fernandez Perez, S.; Ferrere, D.; Fleury, J.; Flick, T.; Forshaw, D.; Fougeron, D.; Franconi, L.; Gabrielli, A.; Gaglione, R.; Gallrapp, C.; Gan, K.K.; Garcia-Sciveres, M.; Gariano, G.; Gastaldi, T.; Gavrilenko, I.; Gaudiello, A.; Geffroy, N.; Gemme, C.; Gensolen, F.; George, M.; Ghislain, P.; Giangiacomi, N.; Gibson, S.; Giordani, M.P.; Giugni, D.; Gjersdal, H.; Glitza, K.W.; Gnani, D.; Godlewski, J.; Gonella, L.; Gonzalez-Sevilla, S.; Gorelov, I.; Gorišek, A.; Gössling, C.; Grancagnolo, S.; Gray, H.; Gregor, I.; Grenier, P.; Grinstein, S.; Gris, A.; Gromov, V.; Grondin, D.; Grosse-Knetter, J.; Guescini, F.; Guido, E.; Gutierrez, P.; Hallewell, G.; Hartman, N.; Hauck, S.; Hasi, J.; Hasib, A.; Hegner, F.; Heidbrink, S.; Heim, T.; Heinemann, B.; Hemperek, T.; Hessey, N.P.; Hetmánek, M.; Hinman, R.R.; Hoeferkamp, M.; Holmes, T.; Hostachy, J.; Hsu, S.C.; Hügging, F.; Husi, C.; Iacobucci, G.; Ibragimov, I.; Idarraga, J.; Ikegami, Y.; Ince, T.; Ishmukhametov, R.; Izen, J.M.; Janoška, Z.; Janssen, J.; Jansen, L.; Jeanty, L.; Jensen, F.; Jentzsch, J.; Jezequel, S.; Joseph, J.; Kagan, H.; Kagan, M.; Karagounis, M.; Kass, R.; Kastanas, A.; Kenney, C.; Kersten, S.; Kind, P.; Klein, M.; Klingenberg, R.; Kluit, R.; Kocian, M.; Koffeman, E.; Korchak, O.; Korolkov, I.; Kostyukhina-Visoven, I.; Kovalenko, S.; Kretz, M.; Krieger, N.; Krüger, H.; Kruth, A.; Kugel, A.; Kuykendall, W.; La Rosa, A.; Lai, C.; Lantzsch, K.; Lapoire, C.; Laporte, D.; Lari, T.; Latorre, S.; Leyton, M.; Lindquist, B.; Looper, K.; Lopez, I.; Lounis, A.; Lu, Y.; Lubatti, H.J.; Maeland, S.; Maier, A.; Mallik, U.; Manca, F.; Mandelli, B.; Mandić, I.; Marchand, D.; Marchiori, G.; Marx, M.; Massol, N.; Mättig, P.; Mayer, J.; Mc Goldrick, G.; Mekkaoui, A.; Menouni, M.; Menu, J.; Meroni, C.; Mesa, J.; Michal, S.; Miglioranzi, S.; Mikuž, M.; Miucci, A.; Mochizuki, K.; Monti, M.; Moore, J.; Morettini, P.; Morley, A.; Moss, J.; Muenstermann, D.; Murray, P.; Nakamura, K.; Nellist, C.; Nelson, D.; Nessi, M.; Nisius, R.; Nordberg, M.; Nuiry, F.; Obermann, T.; Ockenfels, W.; Oide, H.; Oriunno, M.; Ould-Saada, F.; Padilla, C.; Pangaud, P.; Parker, S.; Pelleriti, G.; Pernegger, H.; Piacquadio, G.; Picazio, A.; 
Pohl, D.; Polini, A.; Pons, X.; Popule, J.; Portell Bueso, X.; Potamianos, K.; Povoli, M.; Puldon, D.; Pylypchenko, Y.; Quadt, A.; Quayle, B.; Rarbi, F.; Ragusa, F.; Rambure, T.; Richards, E.; Riegel, C.; Ristic, B.; Rivière, F.; Rizatdinova, F.; Røhne, O.; Rossi, C.; Rossi, L.P.; Rovani, A.; Rozanov, A.; Rubinskiy, I.; Rudolph, M.S.; Rummler, A.; Ruscino, E.; Sabatini, F.; Salek, D.; Salzburger, A.; Sandaker, H.; Sannino, M.; Sanny, B.; Scanlon, T.; Schipper, J.; Schmidt, U.; Schneider, B.; Schorlemmer, A.; Schroer, N.; Schwemling, P.; Sciuccati, A.; Seidel, S.; Seiden, A.; Šícho, P.; Skubic, P.; Sloboda, M.; Smith, D.S.; Smith, M.; Sood, A.; Spencer, E.; Stramaglia, M.; Strauss, M.; Stucci, S.; Stugu, B.; Stupak, J.; Styles, N.; Su, D.; Takubo, Y.; Tassan, J.; Teng, P.; Teixeira, A.; Terzo, S.; Therry, X.; Todorov, T.; Tomášek, M.; Toms, K.; Travaglini, R.; Trischuk, W.; Troncon, C.; Troska, G.; Tsiskaridze, S.; Tsurin, I.; Tsybychev, D.; Unno, Y.; Vacavant, L.; Verlaat, B.; Vigeolas, E.; Vogt, M.; Vrba, V.; Vuillermet, R.; Wagner, W.; Walkowiak, W.; Wang, R.; Watts, S.; Weber, M.S.; Weber, M.; Weingarten, J.; Welch, S.; Wenig, S.; Wensing, M.; Wermes, N.; Wittig, T.; Wittgen, M.; Yildizkaya, T.; Yang, Y.; Yao, W.; Yi, Y.; Zaman, A.; Zaidan, R.; Zeitnitz, C.; Ziolkowski, M.; Zivkovic, V.; Zoccoli, A.; Zwalinski, L.

    2018-05-16

    During the shutdown of the CERN Large Hadron Collider in 2013-2014, an additional pixel layer was installed between the existing Pixel detector of the ATLAS experiment and a new, smaller radius beam pipe. The motivation for this new pixel layer, the Insertable B-Layer (IBL), was to maintain or improve the robustness and performance of the ATLAS tracking system, given the higher instantaneous and integrated luminosities realised following the shutdown. Because of the extreme radiation and collision rate environment, several new radiation-tolerant sensor and electronic technologies were utilised for this layer. This paper reports on the IBL construction and integration prior to its operation in the ATLAS detector.

  11. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing users with coherent access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...
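
    The protocol translation performed by the bridge can be pictured as re-expressing a grid-style job description as a REST submission to the HPC front end; in the sketch below the SCEAPI endpoint, payload fields and token handling are hypothetical placeholders.

        # Sketch of the bridge idea: translate a grid job description into
        # a REST submission. Endpoint and payload fields are hypothetical.
        import requests

        SCEAPI_JOBS = "https://sceapi.example.cn/jobs"   # hypothetical front end

        def submit(grid_job, token):
            payload = {
                "name":  grid_job["jobname"],
                "cmd":   " ".join([grid_job["executable"]] + grid_job["args"]),
                "cores": grid_job.get("count", 1),
            }
            r = requests.post(SCEAPI_JOBS, json=payload,
                              headers={"Authorization": "Bearer " + token})
            r.raise_for_status()
            return r.json()["jobId"]   # hypothetical response field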

  12. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160

    2017-01-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing users with coherent access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  13. FIIND: Ferret Interactive Integrated Neurodevelopment Atlas

    Directory of Open Access Journals (Sweden)

    Roberto Toro

    2018-03-01

    Full Text Available The first days after birth in ferrets provide a privileged view of the development of a complex mammalian brain. Unlike mice, ferrets develop a rich pattern of deep neocortical folds and cortico-cortical connections. Unlike humans and other primates, whose brains are well differentiated and folded at birth, ferrets are born with a very immature and completely smooth neocortex: folds, neocortical regionalisation and cortico-cortical connectivity develop in ferrets during the first postnatal days. After a period of fast neocortical expansion, during which brain volume increases by up to a factor of 4 in 2 weeks, the ferret brain reaches its adult volume at about 6 weeks of age. Ferrets could thus become a major animal model to investigate the neurobiological correlates of the phenomena observed in human neuroimaging. Many of these phenomena, such as the relationship between brain folding, cortico-cortical connectivity and neocortical regionalisation, cannot be investigated in mice, but could be investigated in ferrets. Our aim is to provide the research community with a detailed description of the development of a complex brain, necessary to better understand the nature of human neuroimaging data, create models of brain development, or analyse the relationship between multiple spatial scales. We have already started a project to constitute an open, collaborative atlas of ferret brain development, integrating multi-modal and multi-scale data. We have acquired data for 28 ferrets (4 animals per time point from P0 to adults), using high-resolution MRI and diffusion tensor imaging (DTI). We have developed an open-source pipeline to segment and produce – online – 3D reconstructions of brain MRI data. We propose to process the brains of 16 of our specimens (from P0 to P16) using high-throughput 3D histology, staining for cytoarchitectonic landmarks, neuronal progenitors and neurogenesis. This would allow us to relate the MRI data that we have already

  14. Major Achievements and Prospect of the ATLAS Integral Effect Tests

    International Nuclear Information System (INIS)

    Choi, K.; Kim, Y.; Song, C.; Baek, W.

    2012-01-01

    A large-scale thermal-hydraulic integral effect test facility, ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been operated by KAERI. The reference plant of ATLAS is the APR1400 (Advanced Power Reactor, 1400 MWe). Since 2007, an extensive series of experiments has been successfully carried out, including large-break loss-of-coolant accident tests, small-break loss-of-coolant accident tests at various break locations, steam generator tube rupture tests, feed line break tests, and steam line break tests. These tests contributed toward an understanding of the unique thermal-hydraulic behavior, resolving safety-related concerns and providing validation data for the evaluation of the safety analysis codes and methodology for the advanced pressurized water reactor, APR1400. Major discoveries and lessons learned from the past integral effect tests are summarized in this paper. As the demand for integral effect tests is on the rise due to the active national nuclear R&D program in Korea, the future prospects for the application of the ATLAS facility are also discussed.

  15. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing users with coherent access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.

  16. Creation of computerized 3D MRI-integrated atlases of the human basal ganglia and thalamus

    Directory of Open Access Journals (Sweden)

    Abbas F. Sadikot

    2011-09-01

    Full Text Available Functional brain imaging and neurosurgery in subcortical areas often require visualization of brain nuclei beyond the resolution of current Magnetic Resonance Imaging (MRI) methods. We present techniques used to create: (1) a lower-resolution 3D atlas, based on the Schaltenbrand and Wahren print atlas, which was integrated into a stereotactic neurosurgery planning and visualization platform (VIPER); and (2) a higher-resolution 3D atlas derived from a single set of manually segmented histological slices containing nuclei of the basal ganglia, thalamus, basal forebrain and medial temporal lobe. Both atlases were integrated with a canonical MRI (Colin27) from a young male participant by manually identifying homologous landmarks. The lower-resolution atlas was then warped to fit the MRI based on the identified landmarks. A pseudo-MRI representation of the high-resolution atlas was created, and a nonlinear transformation was calculated in order to match the atlas to the template MRI. The atlas can then be warped to match the anatomy of Parkinson’s disease surgical candidates by using 3D automated nonlinear deformation methods. By way of functional validation of the atlas, the location of the sensory thalamus was correlated with stereotactic intraoperative physiological data. The positions of subthalamic electrodes in patients with Parkinson’s disease were also evaluated in the atlas-integrated MRI space. Finally, probabilistic maps of subthalamic stimulation electrodes were developed, in order to allow group analysis of the location of contacts associated with the best motor outcomes. We have therefore developed, and are continuing to validate, a high-resolution computerized MRI-integrated 3D histological atlas, which is useful in functional neurosurgery, and for functional and anatomical studies of the human basal ganglia, thalamus and basal forebrain.
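
    The landmark-based integration step can be illustrated with a standard least-squares affine fit between homologous landmark pairs; this is a generic sketch of the technique (using numpy), not the actual pipeline used for these atlases.

        # Estimate an affine atlas-to-MRI transform from homologous landmarks.
        import numpy as np

        def fit_affine(atlas_pts, mri_pts):
            """atlas_pts, mri_pts: (N, 3) arrays of paired landmarks, N >= 4."""
            n = atlas_pts.shape[0]
            A = np.hstack([atlas_pts, np.ones((n, 1))])      # homogeneous coordinates
            M, *_ = np.linalg.lstsq(A, mri_pts, rcond=None)  # (4, 3) affine matrix
            return M

        def apply_affine(M, pts):
            return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M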

  17. Application of direct discrete method (DDM) to multigroup neutron transport problems

    International Nuclear Information System (INIS)

    Vosoughi, Naser; Salehi, Ali Akbar; Shahriari, Majid

    2003-01-01

    The Direct Discrete Method (DDM), which produced excellent results for one-group neutron transport problems, has been developed for multigroup energy. A multigroup neutron transport discrete equation has been produced for a cylindrical fuel element, with and without associated coolant regions, under two boundary conditions. The calculations are illustrated for two energy groups by graphs showing the fast and thermal fluxes. The validity of the results is tested against those obtained with the ANISN code. (author)
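
    For orientation, the continuous equation that any multigroup discrete scheme such as the DDM starts from is the standard multigroup transport balance (the paper's own discrete equations are not reproduced in the abstract):

        \hat{\Omega}\cdot\nabla\psi_g(\mathbf{r},\hat{\Omega})
          + \Sigma_{t,g}(\mathbf{r})\,\psi_g(\mathbf{r},\hat{\Omega})
          = \sum_{g'=1}^{G}\int_{4\pi}\Sigma_{s,g'\to g}(\mathbf{r},\hat{\Omega}'\cdot\hat{\Omega})\,
            \psi_{g'}(\mathbf{r},\hat{\Omega}')\,d\Omega' + S_g(\mathbf{r},\hat{\Omega}),
          \qquad g = 1,\dots,G,

    where \psi_g is the group-g angular flux, \Sigma_{t,g} and \Sigma_{s,g'\to g} are the total and group-transfer scattering cross sections, and S_g is the source term.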

  18. Rucio - The next generation of large scale distributed system for ATLAS Data Management

    Science.gov (United States)

    Garonne, V.; Vigne, R.; Stewart, G.; Barisits, M.; Beermann, T.; Lassnig, M.; Serfon, C.; Goossens, L.; Nairz, A.; Atlas Collaboration

    2014-06-01

    Rucio is the next-generation Distributed Data Management (DDM) system benefiting from recent advances in cloud and "Big Data" computing to address HEP experiments scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 140 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will deal with these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization/representation and a model of how to manage central group and user activities. The Rucio design, and the technology it employs, is described, specifically looking at its RESTful architecture and the various software components it uses. We show also the performance of the system.

  19. A Fourth Order Formulation of DDM for Crack Analysis in Brittle Solids

    Directory of Open Access Journals (Sweden)

    Abolfazl Abdollahipour

    2017-01-01

    Full Text Available A fourth-order formulation of the displacement discontinuity method (DDM) is proposed for the crack analysis of brittle solids such as rocks, glasses, concretes and ceramics. A fourth-order boundary collocation scheme is used for the discretization of each boundary element (the source element). In this approach, the source boundary element is divided into five sub-elements, each recognized by a central node where the displacement discontinuity components are to be numerically evaluated. Three different formulation procedures are presented and their corresponding discretization schemes are discussed. A new discretization scheme is also proposed so that the fourth-order formulation can be used for the special crack tip elements, which may increase the accuracy of the stress and displacement fields near the crack ends. These crack tip discretization schemes are accordingly improved by using the proposed fourth-order displacement discontinuity formulation and the corresponding shape functions for a set of five special crack tip elements. Some example problems in brittle fracture mechanics are solved to estimate the Mode I and Mode II stress intensity factors near the crack ends. These semi-analytical results are compared to those cited in the fracture mechanics literature, demonstrating the high accuracy of the fourth-order DDM formulation.
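
    For orientation, in DDM crack analysis the stress intensity factors are commonly extracted from the computed displacement discontinuities just behind the tip; one standard plane-strain form (the exact constants depend on the element formulation, so this is indicative rather than the paper's expression) is

        K_{I}  = \frac{G}{4(1-\nu)} \sqrt{\frac{2\pi}{r}}\, D_n(r),
        \qquad
        K_{II} = \frac{G}{4(1-\nu)} \sqrt{\frac{2\pi}{r}}\, D_s(r),

    with G the shear modulus, \nu Poisson's ratio, and D_n, D_s the normal and shear displacement discontinuities evaluated at a small distance r behind the crack tip.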

  20. The ATLAS Liquid Argon Calorimeter: Construction, Integration, Commissioning

    International Nuclear Information System (INIS)

    Aleksa, Martin

    2006-01-01

    The ATLAS liquid argon (LAr) calorimeter system consists of an electromagnetic barrel calorimeter and two end caps with electromagnetic, hadronic and forward calorimeters. The liquid argon sampling technique, with an accordion geometry, was chosen for the barrel electromagnetic calorimeter (EMB) and adapted to the end cap (EMEC). The hadronic end cap calorimeter (HEC) uses a copper-liquid argon sampling technique with flat plate geometry and is subdivided in depth into two wheels per end cap. Finally, the forward calorimeter (FCAL) is composed of three modules employing cylindrical electrodes with thin liquid argon gaps. The construction of the full calorimeter system has been complete since mid-2004. Production modules constructed in the home institutes were integrated into wheels at CERN in 2003-2004, and inserted into the three cryostats. They passed their first complete cold test before being lowered into the ATLAS cavern. Results of quality checks (e.g. electrical, mechanical, ...) performed on all 190,304 read-out channels after cool-down will be reported. At the end of 2004 the ATLAS barrel electromagnetic (EM) calorimeter was installed in the ATLAS cavern, and since summer 2005 the front-end electronics have been connected and tested. Results of this first commissioning phase will be shown to demonstrate the high standards of quality control for our detectors.

  1. gLExec Integration with the ATLAS PanDA Workload Management System

    CERN Document Server

    Karavakis, Edward; Campana, Simone; De, Kaushik; Di Girolamo, Alessandro; Litmaath, Maarten; Maeno, Tadashi; Medrano Llamas, Ramon; Nilsson, Paul; Wenaus, Torre

    2015-01-01

    ATLAS user jobs are executed on Worker Nodes (WNs) by pilots sent to sites by pilot factories. This paradigm allows high job reliability and, although it has clear advantages such as making the working environment homogeneous, it presents security and traceability challenges. To address these challenges, gLExec can be used to let the payload of each user be executed under a different UNIX user id that uniquely identifies the ATLAS user. This paper describes the recent improvements and evolution of the security model within the ATLAS PanDA system, including improvements in the PanDA pilot and the PanDA server and their integration with MyProxy, a credential caching system that entitles a person or a service to act in the name of the issuer of the credential. Finally, it presents results from ATLAS user jobs running with gLExec and describes the deployment campaign within ATLAS.
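
    The identity-switching step can be sketched as the pilot wrapping the user payload in a gLExec call; the environment variable names follow common gLExec documentation, and the binary path and proxy locations are illustrative.

        # Sketch: the pilot hands the payload to gLExec so it runs under the
        # UNIX account mapped to the ATLAS user. Paths are illustrative.
        import os
        import subprocess

        def run_payload_with_glexec(user_proxy, payload_cmd):
            env = dict(os.environ)
            env["GLEXEC_CLIENT_CERT"] = user_proxy    # proxy identifying the user
            env["GLEXEC_SOURCE_PROXY"] = user_proxy   # proxy copied to target account
            return subprocess.call(["/usr/sbin/glexec"] + payload_cmd, env=env)

        # e.g. run_payload_with_glexec("/tmp/x509up_user", ["./run_payload.sh"])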

  2. Integrating Networking into ATLAS

    CERN Document Server

    Mc Kee, Shawn Patrick; The ATLAS collaboration

    2018-01-01

    Networking is foundational to the ATLAS distributed infrastructure and there are many ongoing activities related to networking both within and outside of ATLAS. We will report on the progress in a number of areas exploring ATLAS's use of networking and our ability to monitor the network, analyze metrics from the network, and tune and optimize application and end-host parameters to make the most effective use of the network. Specific topics will include work on Open vSwitch for production systems, network analytics, FTS testing and tuning, and network problem alerting and alarming.

  3. A roadmap to continuous integration for ATLAS software development

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00132984; The ATLAS collaboration; Elmsheuser, Johannes; Obreshkov, Emil; Krasznahorkay, Attila

    2017-01-01

    The ATLAS software infrastructure facilitates the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million C++ and 1.4 million Python lines. The ATLAS offline code management system is a powerful, flexible framework for processing requests for new package versions, probing code changes in the Nightly Build System, migrating to new platforms and compilers, deploying production releases for worldwide access and supporting physicists with tools and interfaces for efficient software use. It maintains a multi-stream, parallel development environment with about 70 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, verifying patches to existing software and migrating to new platforms and compilers. The system's evolution is currently aimed at the adoption of modern continuous integration (CI) practices focused on building nightly releases early and often, with rigorous unit and integration testing. This paper describes the CI ...

  4. A Roadmap to Continuous Integration for ATLAS Software Development

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration; Obreshkov, Emil; Undrus, Alexander

    2016-01-01

    The ATLAS software infrastructure facilitates the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million C++ and 1.4 million Python lines. The ATLAS offline code management system is a powerful, flexible framework for processing requests for new package versions, probing code changes in the Nightly Build System, migrating to new platforms and compilers, deploying production releases for worldwide access and supporting physicists with tools and interfaces for efficient software use. It maintains a multi-stream, parallel development environment with about 70 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, verifying patches to existing software and migrating to new platforms and compilers. The system's evolution is currently aimed at the adoption of modern continuous integration (CI) practices focused on building nightly releases early and often, with rigorous unit and integration testing. This presentation describes t...

  5. An integrated overview of metadata in ATLAS

    International Nuclear Information System (INIS)

    Gallas, E J; Malon, D; Hawkings, R J; Albrand, S; Torrence, E

    2010-01-01

    Metadata (data about data) arise in many contexts, from many diverse sources, and at many levels in ATLAS. Familiar examples include run-level, luminosity-block-level, and event-level metadata, and, related to processing and organization, dataset-level and file-level metadata, but these categories are neither exhaustive nor orthogonal. Some metadata are known a priori, in advance of data taking or simulation; other metadata are known only after processing, and occasionally, quite late (e.g., detector status or quality updates that may appear after initial reconstruction is complete). Metadata that may seem relevant only internally to the distributed computing infrastructure under ordinary conditions may become relevant to physics analysis under error conditions ('What can I discover about data I failed to process?'). This talk provides an overview of metadata and metadata handling in ATLAS, and describes ongoing work to deliver integrated metadata services in support of physics analysis.

  6. Integration of Globus Online with the ATLAS PanDA Workload Management System

    CERN Document Server

    Contreras, C; The ATLAS collaboration; Maeno, T; Nilsson, P; Potekhin, M

    2012-01-01

    The PanDA Workload Management System is the basis for distributed production and analysis for the ATLAS experiment at the LHC. In this role, it relies on sophisticated dynamic data movement facilities developed in ATLAS. In certain scenarios, such as small research teams in ATLAS Tier-3 sites and non-ATLAS Virtual Organizations, the overhead of installation and operation of these components makes their use not very cost effective. Globus Online is an emerging new tool from the Globus Alliance, which already proved popular within the research community. It provides the users with fast and robust file transfer capabilities that can also be managed from a Web interface, and in addition to grid sites, can have individual workstations and laptops serving as data transmission endpoints. We will describe the integration of the Globus Online functionality into the PanDA suite of software, in order to give more flexibility in choosing the method of data transfer to ATLAS Tier-3 and OSG users.

  7. Integration of Globus Online with the ATLAS PanDA Workload Management System

    International Nuclear Information System (INIS)

    Contreras, C; Deng, W; Maeno, T; Potekhin, M; Nilsson, P

    2012-01-01

    The PanDA Workload Management System is the basis for distributed production and analysis for the ATLAS experiment at the LHC. In this role, it relies on sophisticated dynamic data movement facilities developed in ATLAS. In certain scenarios, such as small research teams in ATLAS Tier-3 sites and non-ATLAS Virtual Organizations, the overhead of installation and operation of these components makes their use not very cost effective. Globus Online is an emerging new tool from the Globus Alliance, which already proved popular within the research community. It provides the users with fast and robust file transfer capabilities that can also be managed from a Web interface, and in addition to grid sites, can have individual workstations and laptops serving as data transmission endpoints. We will describe the integration of the Globus Online functionality into the PanDA suite of software, in order to give more flexibility in choosing the method of data transfer to ATLAS Tier-3 and Open Science Grid (OSG) users.
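
    For a flavour of what such endpoint-to-endpoint transfers look like programmatically, here is a sketch using the present-day globus-sdk Python package as a stand-in for the Globus Online service of that era; the endpoint UUIDs, paths and access token are placeholders.

        # Sketch of a managed third-party transfer with globus-sdk.
        # Token, endpoint UUIDs and paths are placeholders.
        import globus_sdk

        TOKEN = "transfer-access-token"
        SRC = "00000000-0000-0000-0000-000000000001"  # placeholder endpoint IDs
        DST = "00000000-0000-0000-0000-000000000002"

        tc = globus_sdk.TransferClient(
            authorizer=globus_sdk.AccessTokenAuthorizer(TOKEN))
        tdata = globus_sdk.TransferData(tc, SRC, DST, label="Tier-3 stage-in")
        tdata.add_item("/data/user.dataset/file.root", "/scratch/file.root")
        task = tc.submit_transfer(tdata)
        print("submitted task:", task["task_id"])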

  8. ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog

    Science.gov (United States)

    Gray, F. P., Jr. (Editor)

    1979-01-01

    A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules which output data matrices to corresponding random access files. A description of the matrices written on these files is contained herein.

  9. Rucio – The next generation of large scale distributed system for ATLAS data management

    International Nuclear Information System (INIS)

    Garonne, V; Vigne, R; Stewart, G; Barisits, M; Beermann, T; Lassnig, M; Serfon, C; Goossens, L; Nairz, A

    2014-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system benefiting from recent advances in cloud and 'Big Data' computing to address HEP experiments scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 140 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will deal with these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization/representation and a model of how to manage central group and user activities. The Rucio design, and the technology it employs, is described, specifically looking at its RESTful architecture and the various software components it uses. We show also the performance of the system.

  10. Data integration through brain atlasing: Human Brain Project tools and strategies.

    Science.gov (United States)

    Bjerke, Ingvild E; Øvsthus, Martin; Papp, Eszter A; Yates, Sharon C; Silvestri, Ludovico; Fiorilli, Julien; Pennartz, Cyriel M A; Pavone, Francesco S; Puchades, Maja A; Leergaard, Trygve B; Bjaalie, Jan G

    2018-04-01

    The Human Brain Project (HBP), an EU Flagship Initiative, is currently building an infrastructure that will allow integration of large amounts of heterogeneous neuroscience data. The ultimate goal of the project is to develop a unified multi-level understanding of the brain and its diseases, and beyond this to emulate the computational capabilities of the brain. Reference atlases of the brain are one of the key components in this infrastructure. Based on a new generation of three-dimensional (3D) reference atlases, new solutions for analyzing and integrating brain data are being developed. HBP will build services for spatial query and analysis of brain data comparable to current online services for geospatial data. The services will provide interactive access to a wide range of data types that have information about anatomical location tied to them. The 3D volumetric nature of the brain, however, introduces a new level of complexity that requires a range of tools for making use of and interacting with the atlases. With such new tools, neuroscience research groups will be able to connect their data to atlas space, share their data through online data systems, and search and find other relevant data through the same systems. This new approach partly replaces earlier attempts to organize research data based only on a set of semantic terminologies describing the brain and its subdivisions. Copyright © 2018 The Authors. Published by Elsevier Masson SAS. All rights reserved.

  11. Integration of genomic and medical data into a 3D atlas of human anatomy.

    Science.gov (United States)

    Turinsky, Andrei L; Fanea, Elena; Trinh, Quang; Dong, Xiaoli; Stromer, Julie N; Shu, Xueling; Wat, Stephen; Hallgrímsson, Benedikt; Hill, Jonathan W; Edwards, Carol; Grosenick, Brenda; Yajima, Masumi; Sensen, Christoph W

    2008-01-01

    We have developed a framework for the visual integration and exploration of multi-scale biomedical data, which includes anatomical and molecular components. We have also created a Java-based software system that integrates molecular information, such as gene expression data, into a three-dimensional digital atlas of the male adult human anatomy. Our atlas is structured according to the Terminologia Anatomica. The underlying data-indexing mechanism uses open standards and semantic ontology-processing tools to establish the associations between heterogeneous data types. The software system makes extensive use of virtual reality visualization.

  12. Beam tests of an integrated prototype of the ATLAS Forward Proton detector

    CERN Document Server

    INSPIRE-00397348

    2016-09-19

    The ATLAS Forward Proton (AFP) detector is intended to measure protons scattered at small angles from the ATLAS interaction point. To this end, a combination of 3D Silicon pixel tracking modules and Quartz-Cherenkov time-of-flight (ToF) detectors is installed 210 m away from the interaction point on both sides of ATLAS. Beam tests with an AFP prototype detector combining tracking and timing sub-detectors and a common readout have been performed at the CERN-SPS test-beam facility in November 2014 and September 2015 to complete the system integration and to study the detector performance. The successful tracking-timing integration was demonstrated. Good tracker hit efficiencies above 99.9% at a sensor tilt of 14°, as foreseen for AFP, were observed. Spatial resolutions in the short pixel direction with 50 μm pitch of 5.5 ± 0.5 μm per pixel plane and of 2.8 ± 0.5 μm for the full four-plane tracker at 14° were found, largely surpassing the AFP requirement of 10 μm. The timing detector...
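
    As a plausibility check on the quoted numbers: to first order, combining N independent plane measurements improves the single-plane resolution by 1/\sqrt{N} (ignoring plane-to-plane correlations and multiple scattering), which matches the measured four-plane value:

        \sigma_{4\text{-plane}} \approx \frac{\sigma_{\text{plane}}}{\sqrt{N}}
          = \frac{5.5\,\mu\mathrm{m}}{\sqrt{4}} \approx 2.8\,\mu\mathrm{m}.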

  13. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at the integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for jo...

  14. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at the integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  15. ATLAS Cloud R&D

    Science.gov (United States)

    Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration

    2014-06-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to transparently integrate various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss what ATLAS has learned from collaborating with leading commercial and academic cloud providers.

  16. In-situ observation of dislocation and analysis of residual stresses by FEM/DDM modeling in water cavitation peening of pure titanium

    International Nuclear Information System (INIS)

    Ju, D Y; Han, B

    2015-01-01

    In this paper, specimens of pure titanium were treated with water cavitation peening (WCP), and the subsequent changes in microstructure, residual stress, and surface morphologies were investigated as a function of WCP duration. A novel combined finite element and dislocation density method (FEM/DDM), proposed for predicting the macro- and micro-residual stresses induced in the material subsurface by water cavitation peening, is also presented. A bilinear elastic-plastic finite element method was used to predict macro-residual stresses, and a dislocation density method was used to predict micro-residual stresses. These approaches made it possible to predict the magnitude and depth of the residual stress fields in pure titanium. The effect of the applied impact pressures on the residual stresses is also presented. The results of the FEM/DDM modeling were in good agreement with those of the experimental measurements. (paper)

  17. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at the integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan’s batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan’s multi-core worker nodes. It provides for running standard ATLAS production jobs on unused Titan resources (backfill). The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan’s utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to
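
    The "lightweight MPI wrapper" idea amounts to one MPI job fanning out independent single-node payloads, one per rank; the sketch below (using mpi4py) illustrates the pattern, with the payload command and work-directory layout as placeholders.

        # Sketch: one MPI job occupies many nodes; each rank independently
        # runs an ordinary single-node payload. Command is a placeholder.
        import os
        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        workdir = f"rank_{rank:05d}"              # isolate each payload's files
        os.makedirs(workdir, exist_ok=True)
        ret = subprocess.call(["./run_payload.sh", str(rank)], cwd=workdir)

        codes = comm.gather(ret, root=0)          # collect exit codes on rank 0
        if rank == 0:
            print("payload exit codes:", codes)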

  18. Tests and final integration of the ATLAS semiconductor tracker

    CERN Document Server

    Mikulec, Bettina

    2005-01-01

    The Semiconductor Tracker (SCT) is part of the Inner Detector of the ATLAS experiment at CERN. Its basic building blocks are 5 different types of silicon strip modules. In total, more than 15000 p-on-n single-sided silicon strip sensors with a total area of about 61 m² were used to produce 4088 SCT modules. An overall module production yield of 92% was achieved, with the silicon modules complying with the tight electrical, thermal and mechanical specifications. The macro-assembly of 2112 barrel modules onto the four barrel support cylinders was successfully carried out. The nine disks of one endcap are fully populated with 988 modules, and for the second endcap more than 50% of the modules are already mounted. Test results from operating complete barrels will be presented, as well as a description of the test setup. The different integration steps of the SCT with the surrounding Transition Radiation Tracker (TRT) will be explained. The installation of the SCT and TRT into the ATLAS pit will happen during 2006.

  19. Upgrade and integration of the configuration and monitoring tools for the ATLAS Online farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Darlea, G L; Dumitru, I; Scannicchio, DA; Twomey, M S; Valsan, M L; Zaytsev, A

    2012-01-01

    The ATLAS Online farm is a non-homogeneous cluster of nearly 3000 PCs which run the data acquisition, trigger and control of the ATLAS detector. The systems are configured and monitored by a combination of open-source tools, such as Quattor and Nagios, and tools developed in-house, such as ConfDB. We report on the ongoing introduction of new provisioning and configuration tools, Puppet and ConfDB v2, which are more flexible and allow automation for previously uncovered needs, and on the upgrade and integration of the monitoring and alerting tools, including the interfacing of these with the TDAQ Shifter Assistant software and their integration with the configuration tools. We discuss the selection of the tools and the assessment of their functionality and performance, and how they enabled the introduction of virtualization for selected services.

  20. Upgrade and integration of the configuration and monitoring tools for the ATLAS Online farm

    International Nuclear Information System (INIS)

    Ballestrero, S; Darlea, G–L; Twomey, M S; Brasolin, F; Dumitru, I; Valsan, M L; Scannicchio, D A; Zaytsev, A

    2012-01-01

    The ATLAS Online farm is a non-homogeneous cluster of nearly 3000 systems which run the data acquisition, trigger and control of the ATLAS detector. The systems are configured and monitored by a combination of open-source tools, such as Quattor and Nagios, and tools developed in-house, such as ConfDB. We report on the ongoing introduction of new provisioning and configuration tools, Puppet and ConfDB v2, which are more flexible and allow automation for previously uncovered needs, and on the upgrade and integration of the monitoring and alerting tools, including the interfacing of these with the TDAQ Shifter Assistant software and their integration with configuration tools. We discuss the selection of the tools and the assessment of their functionality and performance, and how they enabled the introduction of virtualization for selected services.

  1. SLID-ICV Vertical Integration Technology for the ATLAS Pixel Upgrades

    CERN Document Server

    INSPIRE-00219560; Moser, H.G.; Nisius, R.; Richter, R.H.; Weigell, P.

    We present the results of the characterization of pixel modules composed of 75 μm thick n-in-p sensors and ATLAS FE-I3 chips, interconnected with the SLID (Solid Liquid Inter-Diffusion) technology. This technique, developed at Fraunhofer-EMFT, is explored as an alternative to the bump-bonding process. These modules have been designed to demonstrate the feasibility of a very compact detector to be employed in the future ATLAS pixel upgrades, making use of vertical integration technologies. This module concept also envisages Inter-Chip-Vias (ICV) to extract the signals from the backside of the chips, thereby achieving a higher fraction of active area with respect to the present pixel module design. In the case of the demonstrator module, ICVs are etched over the original wire bonding pads of the FE-I3 chip. In the modules with ICVs the FE-I3 chips will be thinned down to 50 μm. The status of the ICV preparation is presented.

  2. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, A; The ATLAS collaboration; Klimentov, A; Senchenko, A

    2012-01-01

    The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the whole ATLAS Grid, as needed by ATLAS Distributed Computing applications and services.
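
    In practice, client services and applications consume such information over HTTP as JSON. The sketch below shows the general pattern only; the endpoint URL and the JSON field names are illustrative assumptions, not the actual AGIS API.

    ```python
    # Sketch of a client-side topology query (the endpoint URL and JSON
    # layout are assumptions for illustration, not the real AGIS API).
    import json
    import urllib.request

    AGIS_URL = "https://agis.example.cern.ch/request/site/query/list/?json"

    with urllib.request.urlopen(AGIS_URL) as resp:
        sites = json.load(resp)

    # Group sites by cloud, the kind of topology view that ATLAS
    # Distributed Computing applications need.
    by_cloud = {}
    for site in sites:
        by_cloud.setdefault(site.get("cloud", "unknown"), []).append(site["name"])

    for cloud, names in sorted(by_cloud.items()):
        print(cloud, len(names), "sites")
    ```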

  3. ATLAS Cloud R&D

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Love, P; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  4. The integration and engineering of the ATLAS SemiConductor Tracker Barrel

    Energy Technology Data Exchange (ETDEWEB)

    Abdesselam, A; Barr, A J [Department of Physics, Oxford University, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Allport, P P; Austin, N [Oliver Lodge Laboratory, University of Liverpool, P.O. Box 147, Oxford Street, Liverpool L69 3BX (United Kingdom); Anastopoulos, C [University of Sheffield, Department of Physics and Astronomy, Hounsfield Road, Sheffield S3 7RH (United Kingdom); Anderson, B; Attree, D J [Department of Physics and Astronomy, University College London (United Kingdom); Andricek, L; Bangert, A [Max-Planck-Institut fuer Physik, (Werner-Heisenberg-Institut), Foehringer Ring 6, 80805 Muenchen (Germany); Anghinolfi, F [CERN, CH - 1211 Geneva 23 (Switzerland); Apsimon, R; Barclay, P; Batchelor, L E [Rutherford Appleton Laboratory, Science and Technology Facilities Council, Harwell Science and Innovation Campus, Didcot OX11 0QX (United Kingdom); Atkinson, T [School of Physics, University of Melbourne, Parkville, Victoria 3010 (Australia); Barbier, G [Universite de Geneve, Section de Physique, 24 rue Ernest Ansermet, CH - 1211 Geneve 4 (Switzerland); Bates, R L; Bell, W H [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Batley, J R [Cavendish Laboratory, University of Cambridge, J J Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Beck, G A [Department of Physics, Queen Mary, University of London, Mile End Road, London E1 4NS (United Kingdom); Bell, P J [School of Physics and Astronomy, University of Manchester, Manchester M13 9PL (United Kingdom)] (and others)

    2008-10-15

    The ATLAS SemiConductor Tracker (SCT) was built in three sections: a barrel and two end-caps. This paper describes the design, construction and final integration of the barrel section. The barrel is constructed around four nested cylinders that provide a stable and accurate support structure for the 2112 silicon modules and their associated services. The emphasis of this paper is directed at the aspects of engineering design that turned a concept into a fully-functioning detector, as well as the integration and testing of large sub-sections of the final SCT barrel detector. The paper follows the chronology of the construction. The main steps of the assembly are described with the results of intermediate tests. The barrel service components were developed and fabricated in parallel so that a flow of detector modules, cooling loops, opto-harnesses and Frequency-Scanning-Interferometry (FSI) alignment structures could be assembled onto the four cylinders. Once finished, each cylinder was conveyed to the next site for the mounting of modules to form a complete single barrel. Extensive electrical and thermal function tests were carried out on the completed single barrels. In the next stage, the four single barrels and thermal enclosures were combined into the complete SCT barrel detector so that it could be integrated with the Transition Radiation Tracker (TRT) barrel to form the central part of the ATLAS inner detector. Finally, the completed SCT barrel was tested together with the TRT barrel in noise tests and using cosmic rays.

  5. Integration of ROOT Notebooks as an ATLAS analysis web-based tool in outreach and public data release

    CERN Document Server

    Sanchez, Arturo; The ATLAS collaboration

    2016-01-01

    The integration of the ROOT data analysis framework with the Jupyter Notebook technology presents great potential for the enhancement and expansion of educational and training programs: from university students in their early years, through new ATLAS PhD students and postdoctoral researchers, to senior analysers and professors who want to renew their contact with data analysis or to include a friendly yet very powerful open-source tool in the classroom. Such tools have already been tested in several environments, and a fully web-based integration together with Open Access Data repositories makes it possible to go a step further in ATLAS's pursuit of integration between several CERN projects in the field of education and training, developing new computing solutions along the way.
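
    As a flavour of what such a notebook cell looks like, here is a minimal PyROOT example; it assumes a ROOT installation with Python bindings, and the toy histogram stands in for one filled from real open data.

    ```python
    # Minimal PyROOT example of the kind a notebook cell might contain
    # (assumes ROOT with Python bindings; the toy data stand in for
    # histograms filled from ATLAS open data).
    import ROOT

    h = ROOT.TH1F("h_toy", "Toy distribution;x;events", 100, -4.0, 4.0)
    h.FillRandom("gaus", 10000)

    c = ROOT.TCanvas("c", "c", 800, 600)
    h.Draw()
    c.SaveAs("toy.png")  # in a Jupyter cell, c.Draw() renders inline instead
    ```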

  6. ATLAS ITk short-strip stave prototype module with integrated DCDC powering and control

    CERN Document Server

    AUTHOR|(SzGeCERN)397167; The ATLAS collaboration

    2017-01-01

    During the Phase II upgrade, the ATLAS detector at the LHC will be equipped with a new Inner Tracker (ITk). The ITk prototype barrel module design has adopted an integrated low-mass assembly featuring single-sided flexible circuits, with readout ASICs, glued to the silicon strip sensor. Further integration has been achieved by attaching the module DCDC powering, an HV sensor-biasing switch, and autonomous monitoring and control to the sensor. This low-mass integrated module approach further benefits from a reduced-width stave structure to which the modules are attached. The results of preliminary electrical tests of such an integrated module are presented.

  7. ATLAS cloud R and D

    International Nuclear Information System (INIS)

    Panitkin, Sergey; Bejar, Jose Caballero; Hover, John; Zaytsev, Alexander; Megino, Fernando Barreiro; Girolamo, Alessandro Di; Kucharczyk, Katarzyna; Llamas, Ramon Medrano; Benjamin, Doug; Gable, Ian; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Hendrix, Val; Love, Peter; Ohman, Henrik; Walker, Rodney

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R and D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R and D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R and D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R and D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  8. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, Alexey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the whole ATLAS Grid, as needed by ATLAS Distributed Computing applications and services.

  9. A study of dynamic data placement for ATLAS distributed data management

    CERN Document Server

    Beermann, Thomas Alfons; The ATLAS collaboration; Maettig, Peter

    2015-01-01

    This contribution presents a study on the applicability and usefulness of dynamic data placement methods for data-intensive systems, such as ATLAS distributed data management (DDM). In this system the jobs are sent to the data, so a good distribution of data is important. Ways of forecasting workload patterns are examined, which are then used to redistribute data to achieve a better overall utilisation of computing resources and to reduce the waiting time for jobs before they can run on the grid. This method is based on a tracer infrastructure that is able to monitor and store historical data accesses, and which is used to create popularity reports. These reports provide detailed summaries about data accesses in the past, including information about the accessed files, the involved users and the sites. From this past data it is possible to then make near-term forecasts for data popularity in the future. This study evaluates simple prediction methods as well as more complex methods like neural networ...
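
    To make the idea concrete, the toy sketch below implements the simplest class of predictor mentioned: an exponentially smoothed forecast of weekly access counts derived from tracer records. The data layout, dataset names and smoothing constant are assumptions for illustration, not the study's actual model.

    ```python
    # Toy popularity forecast from per-dataset weekly access counts
    # (dataset names, counts and the smoothing constant are made up).
    history = {
        "data15_13TeV.AOD.x": [120, 90, 150, 200],
        "mc15_13TeV.EVNT.y": [5, 3, 0, 1],
    }

    def forecast(counts, alpha=0.5):
        """Exponentially smoothed prediction of next week's accesses."""
        level = counts[0]
        for c in counts[1:]:
            level = alpha * c + (1 - alpha) * level
        return level

    # Datasets predicted to stay popular are candidates for extra
    # replicas; cold ones are candidates for replica reduction.
    for ds, counts in history.items():
        print("%-24s predicted weekly accesses: %.1f" % (ds, forecast(counts)))
    ```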

  10. An integrated expression atlas of miRNAs and their promoters in human and mouse

    DEFF Research Database (Denmark)

    de Rie, Derek; Abugessaisa, Imad; Alam, Tanvir

    2017-01-01

    MicroRNAs (miRNAs) are short non-coding RNAs with key roles in cellular regulation. As part of the fifth edition of the Functional Annotation of Mammalian Genome (FANTOM5) project, we created an integrated expression atlas of miRNAs and their promoters by deep-sequencing 492 short RNA (sRNA) libr...

  11. AGIS: The ATLAS Grid Information System

    OpenAIRE

    Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS) designed to integrate configurat...

  12. AGIS: The ATLAS Grid Information System

    Science.gov (United States)

    Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration

    2014-06-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  13. Study of an automatic gain-switching integrated circuit for the signal-shaping circuit of the ATLAS electromagnetic calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Bussat, J.M. [Laboratoire d'Annecy-le-Vieux de Physique des Particules, 74 - Annecy-le-Vieux (France)]

    1996-12-01

    This paper describes the present state of development of an automatic gain-switching readout integrated circuit that can be used, connected to the four-gain shaper from LAL, in the ATLAS electromagnetic calorimeter.

  14. ATLAS Cloud Computing R&D project

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2013-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  15. Luminosity Monitoring in ATLAS with MPX Detectors

    CERN Document Server

    AUTHOR|(CDS)2086061

    2013-01-01

    The ATLAS-MPX detectors are based on the Medipix2 silicon devices designed by CERN for the detection of multiple types of radiation. Sixteen such detectors were successfully operated in the ATLAS detector at the LHC and collected data independently of the ATLAS data-recording chain from 2008 to 2013. Each ATLAS-MPX detector provides separate measurements of the bunch-integrated LHC luminosity. An internal consistency of about 2% was demonstrated for luminosity monitoring. In addition, the MPX devices close to the beam are sensitive enough to provide relative-luminosity measurements during van der Meer calibration scans, in a low-luminosity regime that lies below the sensitivity of the ATLAS calorimeter-based bunch-integrating luminometers. Preliminary results from these luminosity studies are presented for data recorded in 2012 in proton-proton collisions at $\sqrt{s}=8$ TeV.
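
    The underlying conversion from a counting rate to a luminosity is the standard one, L = R / sigma_vis, with the visible cross section sigma_vis calibrated in a van der Meer scan. A back-of-the-envelope sketch, with all numbers invented for illustration:

    ```python
    # Rate-to-luminosity conversion, L = R / sigma_vis, with sigma_vis
    # taken from a van der Meer calibration. All values are illustrative.
    hit_rate = 2.4e4        # background-corrected device counting rate [hits/s]
    sigma_vis = 1.2e-28     # visible cross section from a vdM scan [cm^2]

    lumi = hit_rate / sigma_vis
    print("L = %.2e cm^-2 s^-1" % lumi)   # ~2e32 for these toy numbers
    ```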

  16. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, A; The ATLAS collaboration; Klimentov, A; Oleynik, D; Petrosyan, A

    2014-01-01

    In this paper we describe the ATLAS Grid Information System (AGIS), the system designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  17. AGIS: The ATLAS Grid Information System

    OpenAIRE

    Anisenkov, A; Di Girolamo, A; Klimentov, A; Oleynik, D; Petrosyan, A

    2013-01-01

    In this paper we describe the ATLAS Grid Information System (AGIS), the system designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  18. Population-averaged macaque brain atlas with high-resolution ex vivo DTI integrated into in vivo space.

    Science.gov (United States)

    Feng, Lei; Jeon, Tina; Yu, Qiaowen; Ouyang, Minhui; Peng, Qinmu; Mishra, Virendra; Pletikos, Mihovil; Sestan, Nenad; Miller, Michael I; Mori, Susumu; Hsiao, Steven; Liu, Shuwei; Huang, Hao

    2017-12-01

    Animal models of the rhesus macaque (Macaca mulatta), the most widely used nonhuman primate, have been irreplaceable in neurobiological studies. However, a population-averaged macaque brain diffusion tensor imaging (DTI) atlas, including comprehensive gray and white matter labeling as well as bony and facial landmarks for guiding invasive experimental procedures, has not been available, and the macaque white matter tract pathways and microstructures have rarely been recorded. Here, we established a population-averaged macaque brain atlas with high-resolution ex vivo DTI integrated into in vivo space, incorporating bony and facial landmarks, and delineated the microstructures and three-dimensional pathways of major white matter tracts. In vivo MRI/DTI and ex vivo (postmortem) DTI of ten rhesus macaque brains were acquired. A single-subject macaque brain DTI template was obtained by transforming the postmortem high-resolution DTI data into in vivo space. Ex vivo DTI of the ten macaque brains was then averaged in the in vivo single-subject template space to generate the population-averaged macaque brain DTI atlas. The white matter tracts were traced with DTI-based tractography. One hundred and eighteen neural structures, including all cortical gyri, white matter tracts and subcortical nuclei, were labeled manually on the population-averaged DTI-derived maps. The in vivo microstructural metrics of fractional anisotropy, axial, radial and mean diffusivity of the traced white matter tracts were measured. The population-averaged digital atlas integrated into in vivo space can be used to label the experimental macaque brain automatically, and the bony and facial landmarks are available for guiding invasive procedures. The DTI metric measurements offer unique insights into the heterogeneous microstructural profiles of different white matter tracts.
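
    The four microstructural metrics named above are standard quantities computed from the three eigenvalues of the diffusion tensor; the short sketch below uses the textbook definitions with invented eigenvalues, not values from this study.

    ```python
    # Standard DTI metrics from the tensor eigenvalues (l1 >= l2 >= l3).
    # The formulas are the textbook definitions; the values are made up.
    import math

    l1, l2, l3 = 1.7e-3, 0.4e-3, 0.3e-3   # eigenvalues [mm^2/s]

    md = (l1 + l2 + l3) / 3.0             # mean diffusivity
    ad = l1                               # axial diffusivity
    rd = (l2 + l3) / 2.0                  # radial diffusivity
    fa = math.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                   / (l1**2 + l2**2 + l3**2))  # fractional anisotropy

    print("MD=%.2e AD=%.2e RD=%.2e FA=%.2f" % (md, ad, rd, fa))
    ```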

  19. System Description of the Electrical Power Supply System for the ATLAS Integral Test Loop

    International Nuclear Information System (INIS)

    Moon, S. K.; Park, J. K.; Kim, Y. S.; Song, C. H.; Baek, W. P.

    2007-02-01

    An integral effect test loop for pressurized water reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), was constructed by the Thermal-Hydraulics Safety Research Team at the Korea Atomic Energy Research Institute (KAERI). The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, the APR1400. This report describes the design and technical specifications of the electrical power supply system, which supplies electrical power to the core heater rods, other heaters, various pumps and other systems. The electrical power supply system acquired final approval for operation from the Korea Electrical Safety Corporation. During performance tests of its operation and control, the electrical power supply system showed fully acceptable performance.

  20. Off-line commissioning of EBIS and plans for its integration into ATLAS and CARIBU

    Energy Technology Data Exchange (ETDEWEB)

    Ostroumov, P. N., E-mail: ostroumov@anl.gov; Barcikowski, A.; Dickerson, C. A.; Mustapha, B.; Perry, A.; Sharamentov, S. I.; Vondrasek, R. C.; Zinkann, G. [Argonne National Laboratory, Argonne, Illinois 60439 (United States)

    2016-02-15

    An Electron Beam Ion Source Charge Breeder (EBIS-CB) has been developed at Argonne to breed radioactive beams from the CAlifornium Rare Isotope Breeder Upgrade (CARIBU) facility at Argonne Tandem Linac Accelerator System (ATLAS). The EBIS-CB will replace the existing ECR charge breeder to increase the intensity and significantly improve the purity of reaccelerated radioactive ion beams. The CARIBU EBIS-CB has been successfully commissioned offline with an external singly charged cesium ion source. The performance of the EBIS fully meets the specifications to breed rare isotope beams delivered from CARIBU. The EBIS is being relocated and integrated into ATLAS and CARIBU. A long electrostatic beam transport system including two 180° bends in the vertical plane has been designed. The commissioning of the EBIS and the beam transport system in their permanent location will start at the end of this year.

  1. Off-line commissioning of EBIS and plans for its integration into ATLAS and CARIBU

    Science.gov (United States)

    Ostroumov, P. N.; Barcikowski, A.; Dickerson, C. A.; Mustapha, B.; Perry, A.; Sharamentov, S. I.; Vondrasek, R. C.; Zinkann, G.

    2016-02-01

    An Electron Beam Ion Source Charge Breeder (EBIS-CB) has been developed at Argonne to breed radioactive beams from the CAlifornium Rare Isotope Breeder Upgrade (CARIBU) facility at Argonne Tandem Linac Accelerator System (ATLAS). The EBIS-CB will replace the existing ECR charge breeder to increase the intensity and significantly improve the purity of reaccelerated radioactive ion beams. The CARIBU EBIS-CB has been successfully commissioned offline with an external singly charged cesium ion source. The performance of the EBIS fully meets the specifications to breed rare isotope beams delivered from CARIBU. The EBIS is being relocated and integrated into ATLAS and CARIBU. A long electrostatic beam transport system including two 180° bends in the vertical plane has been designed. The commissioning of the EBIS and the beam transport system in their permanent location will start at the end of this year.

  2. Experience with Intel's many integrated core architecture in ATLAS software

    International Nuclear Information System (INIS)

    Fleischmann, S; Neumann, M; Kama, S; Lavrijsen, W; Vitillo, R

    2014-01-01

    Intel recently released the first commercial boards of its Many Integrated Core (MIC) Architecture. MIC is Intel's solution for the domain of throughput computing, currently dominated by general purpose programming on graphics processors (GPGPU). MIC allows the use of the more familiar x86 programming model and supports standard technologies such as OpenMP, MPI, and Intel's Threading Building Blocks (TBB). This should make it possible to develop for both throughput and latency devices using a single code base. In ATLAS Software, track reconstruction has been shown to be a good candidate for throughput computing on GPGPU devices. In addition, the newly proposed offline parallel event-processing framework, GaudiHive, uses TBB for task scheduling. The MIC is thus, in principle, a good fit for this domain. In this paper, we report our experiences of porting to and optimizing ATLAS tracking algorithms for the MIC, comparing the programmability and relative cost/performance of the MIC against those of current GPGPUs and latency-optimized CPUs.

  3. Development, validation and integration of the ATLAS Trigger System software in Run 2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00377077; The ATLAS collaboration

    2017-01-01

    The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated to various sub-detectors that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and work flow of the ongoing trigger software development, validation, and deployment. The goal of this development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking run as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high per...

  4. Development, Validation and Integration of the ATLAS Trigger System Software in Run 2

    Science.gov (United States)

    Keyes, Robert; ATLAS Collaboration

    2017-10-01

    The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated to various sub-detectors that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and work flow of the ongoing trigger software development, validation, and deployment. The goal of this development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking run as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high performance computing grid with high priority. Performance metrics ranging from low-level memory and CPU requirements, to distributions and efficiencies of high-level physics quantities are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.

  5. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility. Current approach utilizes modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on LCFs multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for ALICE and ATLAS experiments and it is in full production for the ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  6. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility. Current approach utilizes modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on LCFs multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for ALICE and ATLAS experiments and it is in full production for the ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  7. Luminosity Measurements with the ATLAS Detector

    CERN Document Server

    Maettig, Stefan; Pauly, T

    For almost all measurements performed at the Large Hadron Collider (LHC), one crucial ingredient is precise knowledge of the integrated luminosity. The determination of and precision on the integrated luminosity have direct implications for any cross-section measurement, and its instantaneous measurement gives important feedback on the conditions at the experimental insertions and on the accelerator performance. ATLAS is one of the main experiments at the LHC. In order to provide an accurate and reliable luminosity determination, ATLAS uses a variety of different sub-detectors and algorithms that measure the luminosity simultaneously. One of these sub-detectors is the Beam Condition Monitor (BCM), which was designed to protect the ATLAS detector from potentially dangerous beam losses. Due to its fast readout and very clean signals, this diamond detector has in addition provided the official ATLAS luminosity since May 2011. This thesis describes the calibration and performance of the BCM as a luminosity detec...

  8. Radiation damage monitoring in the ATLAS pixel detector

    International Nuclear Information System (INIS)

    Seidel, Sally

    2013-01-01

    We describe the implementation of radiation damage monitoring using measurement of leakage current in the ATLAS silicon pixel sensors. The dependence of the leakage current upon the integrated luminosity is presented. The measurement of the radiation damage corresponding to an integrated luminosity of 5.6 fb⁻¹ is presented along with a comparison to a model. -- Highlights: ► Radiation damage monitoring via silicon leakage current is implemented in the ATLAS (LHC) pixel detector. ► Leakage currents measured are consistent with the Hamburg/Dortmund model. ► This information can be used to validate the ATLAS simulation model.
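
    The comparison rests on the standard damage parametrization, in which the leakage-current increase is proportional to the 1 MeV-neutron-equivalent fluence, Delta I = alpha * Phi_eq * V, and the fluence scales with integrated luminosity. A back-of-the-envelope sketch with invented scale factors, not the paper's measured values:

    ```python
    # Hamburg-model estimate of the leakage-current increase,
    # Delta_I = alpha * Phi_eq * V, ignoring annealing corrections.
    # All numbers below are illustrative, not the paper's values.
    alpha = 4e-17              # current-related damage constant [A/cm] (~20 C)
    fluence_per_fb = 1e12      # assumed 1 MeV-neq fluence per fb^-1 [cm^-2]
    lumi = 5.6                 # integrated luminosity [fb^-1]
    volume = 0.25 * 0.25 * 0.025   # depleted silicon volume considered [cm^3]

    phi_eq = fluence_per_fb * lumi
    delta_i = alpha * phi_eq * volume
    print("Delta I ~ %.2e A" % delta_i)
    ```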

  9. The ATLAS Liquid Argon Calorimeters: integration, installation and commissioning

    International Nuclear Information System (INIS)

    Tikhonov, Yu.

    2008-01-01

    The ATLAS liquid argon calorimeter system consists of an electromagnetic barrel calorimeter and two end-caps with electromagnetic, hadronic and forward calorimeters positioned in three cryostats. Since May 2006 the LAr barrel calorimeter has recorded regular calibration runs and has taken cosmic muon data together with the tile hadronic calorimeter in the ATLAS cavern. The cosmic runs with the end-cap calorimeters started in April 2007. First results of these combined runs are presented.

  10. The Detector Control System of the ATLAS SemiCondutor Tracker during Macro-Assembly and Integration

    CERN Document Server

    Abdesselam, A; Basiladze, S; Bates, R L; Bell, P; Bingefors, N; Böhm, J; Brenner, R; Chamizo-Llatas, M; Clark, A; Codispoti, G; Colijn, A P; D'Auria, S; Dorholt, O; Doherty, F; Ferrari, P; Ferrère, D; Górnicki, E; Koperny, S; Lefèvre, R; Lindquist, L-E; Malecki, P; Mikulec, B; Mohn, B; Pater, J; Pernegger, H; Phillips, P; Robichaud-Véronneau, A; Robinson, D; Roe, S; Sandaker, H; Sfyrla, A; Stanecka, E; Stastny, J; Viehhauser, G; Vossebeld, J; Wells, P

    2008-01-01

    The ATLAS SemiConductor Tracker (SCT) is one of the largest existing semiconductor detectors. It is situated between the Pixel detector and the Transition Radiation Tracker at one of the four interaction points of the Large Hadron Collider (LHC). During 2006-2007 the detector was lowered into the ATLAS cavern and installed in its final position. For the assembly, integration and commissioning phase, a complete Detector Control System (DCS) was developed to ensure the safe operation of the tracker. This included control of the individual powering of the silicon modules, a bi-phase cooling system, and various types of sensors monitoring the SCT environment and the surrounding test enclosure. The DCS software architecture, performance and operational experience are presented in view of the validation of the DCS for the final SCT installation and operation phase.

  11. AGIS: Evolution of Distributed Computing Information system for ATLAS

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria; Karavakis, Edward

    2015-01-01

    The varied nature of the ATLAS computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data needed by the various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  12. gLExec Integration with the ATLAS PanDA Workload Management System

    CERN Document Server

    Edward Karavakis; The ATLAS collaboration; Campana, Simone; De, Kaushik; Di Girolamo, Alessandro; Maarten Litmaath; Maeno, Tadashi; Medrano Llamas, Ramon; Nilsson, Paul; Wenaus, Torre

    2015-01-01

    The ATLAS Experiment at the Large Hadron Collider has collected data during Run 1 and is ready to collect data in Run 2. The ATLAS data are distributed, processed and analysed at more than 130 grid and cloud sites across the world. At any given time, there are more than 150,000 concurrent jobs running and about a million jobs are submitted on a daily basis on behalf of thousands of physicists within the ATLAS collaboration. The Production and Distributed Analysis (PanDA) workload management system has proved to be a key component of ATLAS and plays a crucial role in the success of the large-scale distributed computing as it is the sole system for distributed processing of Grid jobs across the collaboration since October 2007. ATLAS user jobs are executed on worker nodes by pilots sent to the sites by pilot factories. This pilot architecture has greatly improved job reliability and although it has clear advantages, such as making the working environment homogeneous by hiding any potential heterogeneities, the ...

  13. The ATLAS IBL CO2 Cooling System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00237783; The ATLAS collaboration; Zwalinski, L.; Bortolin, C.; Vogt, S.; Godlewski, J.; Crespo-Lopez, O.; Van Overbeek, M.; Blaszcyk, T.

    2017-01-01

    The ATLAS Pixel detector has been equipped with an extra B-layer in the space obtained by a reduced beam pipe. This new pixel detector, called the ATLAS Insertable B-Layer (IBL), was installed in 2014 and is operational in the current ATLAS data taking. The IBL detector is cooled with evaporative CO2 and is the first of its kind in ATLAS. The ATLAS IBL CO2 cooling system is designed for lower-temperature operation (< -35 °C) than the previously developed CO2 cooling systems in High Energy Physics experiments. The cold temperatures are required to protect the pixel sensors against the high expected radiation dose, up to an integrated luminosity of 550 fb⁻¹.

  14. A Popularity Based Prediction and Data Redistribution Tool for ATLAS Distributed Data Management

    CERN Document Server

    Beermann, T; The ATLAS collaboration; Maettig, P

    2014-01-01

    This paper presents a system to predict future data popularity for data-intensive systems, such as ATLAS distributed data management (DDM). Using these predictions it is possible to achieve a better distribution of data, helping to reduce the waiting time for jobs using this data. The system is based on a tracer infrastructure that is able to monitor and store historical data accesses, and which is used to create popularity reports. These reports provide detailed summaries about data accesses in the past, including information about the accessed files, the involved users and the sites. From this past data it is possible to then make near-term forecasts for data popularity in the future. The prediction system introduced in this paper makes use of both simple prediction methods as well as predictions made by neural networks. The best prediction method depends on the type of data, and the data is carefully filtered for use in either system. The second part of the paper introduces a system that effectively ...

  15. Clean tracks for ATLAS

    CERN Multimedia

    2006-01-01

    First cosmic ray tracks in the integrated ATLAS barrel SCT and TRT tracking detectors. A snapshot of a cosmic ray event seen in the different layers of both the SCT and TRT detectors. The ATLAS Inner Detector Integration Team celebrated a major success recently, when clean tracks of cosmic rays were detected in the completed semiconductor tracker (SCT) and transition radiation tracker (TRT) barrels. These tracking tests come just months after the successful insertion of the SCT into the TRT (see Bulletin 09/2006). The cosmic ray test is important for the experiment because, after 15 years of hard work, it is the last test performed on the fully assembled barrel before it is lowered into the ATLAS cavern. The two trackers work together to provide millions of channels so that particles' tracks can be identified and measured with great accuracy. According to the team, the preliminary results were very encouraging. After first checks of noise levels in the final detectors, a critical goal was to study their re...

  16. The ATLAS detector simulation application

    International Nuclear Information System (INIS)

    Rimoldi, A.

    2007-01-01

    The simulation program for the ATLAS experiment at CERN is currently in full operational mode and integrated into the ATLAS common analysis framework, Athena. The object-oriented approach, based on GEANT4, has been interfaced to Athena and to GEANT4 using the LCG dictionaries and Python scripting. The robustness of the application has been proven in the test productions since 2004. The Python interface has added the flexibility, modularity and interactivity that the simulation tool requires in order to provide a common implementation of the different full ATLAS simulation setups, test beams and cosmic ray applications. The generation, simulation and digitization steps were exercised in performance and robustness tests. Comparison with real data has been possible in the context of the ATLAS Combined Test Beam (2004-2005) and cosmic ray studies (2006).
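
    The role of the Python layer can be illustrated with a deliberately generic sketch: one simulation chain, re-targeted to full-detector, test-beam or cosmic setups purely from Python. Every name below is hypothetical; this is not real Athena job-options code.

    ```python
    # Hypothetical sketch (not actual Athena job options) of steering one
    # generation -> simulation -> digitization chain from Python.
    def generate(n_events, generator):
        print("generating %d events with %s" % (n_events, generator))
        return ["evt%d" % i for i in range(n_events)]

    def simulate(events, geometry):
        print("GEANT4 transport through geometry %s" % geometry)
        return [e + ".hits" for e in events]

    def digitize(hits):
        print("converting hits to detector digits")
        return [h.replace(".hits", ".digits") for h in hits]

    # The same chain serves different setups by swapping parameters,
    # which is the flexibility the Python scripting layer provides.
    for setup, (gen, geo) in {
        "full_atlas": ("Pythia", "ATLAS-GEO"),
        "test_beam": ("ParticleGun", "CTB-2004"),
        "cosmics": ("CosmicGenerator", "ATLAS-GEO"),
    }.items():
        print("--", setup)
        digitize(simulate(generate(2, gen), geo))
    ```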

  17. Integrated System for Performance Monitoring of ATLAS TDAQ Network

    CERN Document Server

    Savu, D; The ATLAS collaboration; Martin, B; Sjoen, R; Batraneanu, S; Stancu, S

    2010-01-01

    The ATLAS TDAQ Network consists of three separate networks spanning four levels of the experimental building. Over 200 edge switches and 5 multi-blade chassis routers are used to interconnect 2000 processors, adding up to more than 7000 high-speed interfaces. In order to substantially speed up ad-hoc and post-mortem analysis, a scalable yet flexible integrated system for monitoring both network statistics and environmental conditions, processor parameters and data-taking characteristics was required. For successful up-to-the-minute monitoring, information from many SNMP-compliant devices, independent databases and custom APIs was gathered, stored and displayed in an optimal way. Easy navigation and compact aggregation of multiple data sources were the main requirements; characteristics not found in any of the tested products, either open-source or commercial. This paper describes how performance, scalability and display issues were addressed and what challenges the project faced during development and deplo...
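
    At the lowest level, gathering a counter from an SNMP-compliant device is straightforward; the sketch below polls one interface byte counter through the net-snmp command-line tools and derives a rate from two samples. The host name, community string and interface index are placeholders, and a production system would use a native SNMP library and poll thousands of interfaces on a schedule.

    ```python
    # Poll a 64-bit interface byte counter over SNMP and derive a rate.
    # Uses the net-snmp "snmpget" CLI; host/community/index are placeholders.
    import subprocess
    import time

    HOST = "switch-01.example.cern.ch"
    OID = "IF-MIB::ifHCInOctets.1"   # input byte counter of interface 1

    def snmp_counter(host, oid, community="public"):
        out = subprocess.check_output(
            ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid])
        return int(out.decode().split()[0])

    c0, t0 = snmp_counter(HOST, OID), time.time()
    time.sleep(5)
    c1, t1 = snmp_counter(HOST, OID), time.time()
    print("input rate: %.0f bytes/s" % ((c1 - c0) / (t1 - t0)))
    ```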

  18. Neuroinformatics of the Allen Mouse Brain Connectivity Atlas.

    Science.gov (United States)

    Kuan, Leonard; Li, Yang; Lau, Chris; Feng, David; Bernard, Amy; Sunkin, Susan M; Zeng, Hongkui; Dang, Chinh; Hawrylycz, Michael; Ng, Lydia

    2015-02-01

    The Allen Mouse Brain Connectivity Atlas is a mesoscale whole brain axonal projection atlas of the C57Bl/6J mouse brain. Anatomical trajectories throughout the brain were mapped into a common 3D space using a standardized platform to generate a comprehensive and quantitative database of inter-areal and cell-type-specific projections. This connectivity atlas has several desirable features, including brain-wide coverage, validated and versatile experimental techniques, a single standardized data format, a quantifiable and integrated neuroinformatics resource, and an open-access public online database (http://connectivity.brain-map.org/). Meaningful informatics data quantification and comparison is key to effective use and interpretation of connectome data. This relies on successful definition of a high fidelity atlas template and framework, mapping precision of raw data sets into the 3D reference framework, accurate signal detection and quantitative connection strength algorithms, and effective presentation in an integrated online application. Here we describe key informatics pipeline steps in the creation of the Allen Mouse Brain Connectivity Atlas and include basic application use cases.

  19. Analysis Streamlining in ATLAS

    CERN Document Server

    Heinrich, Lukas; The ATLAS collaboration

    2018-01-01

    We present recent work within the ATLAS collaboration to centrally provide tools that facilitate analysis management and highly automated container-based analysis execution, in order both to enable non-experts to benefit from these best practices and to allow the collaboration to track and re-execute analyses independently, e.g. during their review phase. Through integration with the ATLAS GLANCE system, users can request a pre-configured but customizable version control setup, including continuous integration for automated build and testing as well as continuous Linux container image building for software preservation purposes. As analyses typically require many individual steps, analysis workflow pipelines can then be defined using such images and the yadage workflow description language. The integration into the workflow execution service REANA allows the interactive or automated reproduction of the main analysis results by orchestrating a large number of container jobs using Kubernetes. For long-term archival,...

  20. ATLAS looks forward to having beams!

    CERN Multimedia

    Hans von der Schmitt

    Lyn Evans, head of the LHC project at CERN, brought very good news: all problems are now solved or understood and, barring a disaster, the LHC should see beams in July 2008. The ATLAS overview week (8-12 October) showed impressively that the experiment is getting ready for beams on all fronts. Perhaps that is best seen in the recent runs with cosmic events, which are integrating all ATLAS subsystems. The integration milestone M4 ended just a month ago (see the article in the September issue of ATLAS e-news), exercising for one week the complete chain from the detectors through trigger and data acquisition and reconstruction at Tier0 to the shipment of data worldwide to Tier1s. Event displays and histograms, available both online and offline, were shown throughout the overview week and are proof that the entire chain is actually working. The integration milestones give an enormous boost to the experiment; the next will be M5 at the end of October. During the week we learned about successes and remaining issues along this ent...

  1. ATLAS DAQ/HLT rack DCS

    International Nuclear Information System (INIS)

    Ermoline, Yuri; Burckhart, Helfried; Francis, David; Wickens, Frederick J.

    2007-01-01

    The ATLAS Detector Control System (DCS) group provides a set of standard tools used by the subsystems to implement their local control systems. The ATLAS Data Acquisition and High Level Trigger (DAQ/HLT) rack DCS provides monitoring of environmental parameters (air temperatures, humidity, etc.). The DAQ/HLT racks are located in the underground counting room (20 racks) and in the surface building (100 racks). The rack DCS is based on standard ATLAS tools and is integrated into the overall operation of the experiment. The implementation is based on a commercial control package and additional components developed by the CERN Joint Controls Project Framework. The prototype implementation and measurements are presented.

  2. The Latest from ATLAS

    CERN Multimedia

    2009-01-01

    Since November 2008, ATLAS has undertaken detailed maintenance, consolidation and repair work on the detector (see Bulletin of 20 July 2009). Today, the fraction of the detector that is operational has increased compared to last year: less than 1% of dead channels for most of the sub-systems. "We are going to start taking data this year with a detector which is even more efficient than it was last year," agrees ATLAS Spokesperson, Fabiola Gianotti. By mid-September the detector was fully closed again, and the cavern sealed. The magnet system has been operated at nominal current for extensive periods over recent months. Once the cavern was sealed, ATLAS began two weeks of combined running. Right now, subsystems are joining the run incrementally until the point where the whole detector is integrated and running as one. In the words of ATLAS Technical Coordinator, Marzio Nessi: "Now we really start physics." In parallel, the analysis ...

  3. Instrumentation and measurement method for the ATLAS test facility

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Byong Jo; Chu, In Chul; Eu, Dong Jin; Kang, Kyong Ho; Kim, Yeon Sik; Song, Chul Hwa; Baek, Won Pil

    2007-03-15

    An integral effect test loop for pressurized water reactors (PWRs), the ATLAS, was constructed by the Thermal-Hydraulic Safety Research Division at KAERI. The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, the APR1400, a Korean evolutionary-type nuclear reactor. A total of 1300 instruments are installed in the ATLAS test facility. In this report, the instrumentation of the ATLAS test facility and the related measurement methods are introduced.

  4. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  5. FATALIC: a fully integrated electronics readout for the ATLAS tile calorimeter at the HL-LHC

    CERN Document Server

    Angelidakis, Stylianos; The ATLAS collaboration

    2018-01-01

    The ATLAS Collaboration has started a vast program of upgrades in the context of the high-luminosity LHC (HL-LHC), foreseen for 2024. The current readout electronics of every sub-detector, including the Tile Calorimeter (TileCal), must be upgraded to comply with the extreme HL-LHC operating conditions. The ASIC described in this document, named Front-end ATlAs tiLe Integrated Circuit (FATALIC), has been developed to fulfill these requirements. FATALIC is based on a $130\,$nm CMOS technology and performs the complete processing of the signal, including amplification, shaping and digitization over a large dynamic range from $25\,$fC to $1.2\,$nC. The overall architecture of this current-reading ASIC is composed of current conveyors, shapers, 12-bit pipeline analog-to-digital converters operating at $40\,$MHz, and a digital block dealing with the three gains implemented in this electronics. A dedicated channel for low current is also designed in order to be able to perform absolute calibration with radioactive cesium so...
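
    The need for multiple gains follows from simple arithmetic: the quoted 25 fC to 1.2 nC range spans roughly 15.6 bits, beyond the reach of a single 12-bit converter. A quick check (the computation is generic, not taken from the paper):

    ```python
    # Why a single 12-bit ADC cannot cover the quoted charge range.
    import math

    q_min, q_max = 25e-15, 1.2e-9        # 25 fC to 1.2 nC, in coulombs
    span_bits = math.log2(q_max / q_min) # dynamic range expressed in bits
    print("required range: %.1f bits vs. 12 bits per gain" % span_bits)  # ~15.6
    ```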

  6. FATALIC: a fully integrated electronics readout for the ATLAS tile calorimeter at the HL-LHC

    CERN Document Server

    Angelidakis, Stylianos; The ATLAS collaboration

    2018-01-01

    The ATLAS Collaboration has started a vast program of upgrades in the context of the high-luminosity LHC (HL-LHC), foreseen for 2024. The current readout electronics of every sub-detector, including the Tile Calorimeter (TileCal), must be upgraded to comply with the extreme HL-LHC operating conditions. The ASIC described in this document, named Front-end ATlAs tiLe Integrated Circuit (FATALIC), has been developed to fulfill these requirements. FATALIC is based on a $130\,$nm CMOS technology and performs the complete processing of the signal, including amplification, shaping and digitization over a large dynamic range. A dedicated channel for low current is also designed in order to perform absolute calibration with a radioactive cesium source, producing a known but low signal with a typical frequency of $100\,$Hz. In this document, the design of FATALIC is described, and the measured performance as well as results of tests using particle beams at CERN are discussed.

  7. Atlas Basemaps in Web 2.0 Epoch

    Science.gov (United States)

    Chabaniuk, V.; Dyshlyk, O.

    2016-06-01

    The authors have analyzed their experience of the production of various Electronic Atlases (EA) and Atlas Information Systems (AtIS) of so-called "classical type". These EA/AtIS have been implemented in the past decade in the Web 1.0 architecture (e.g., National Atlas of Ukraine, Atlas of radioactive contamination of Ukraine, and others). One of the main distinguishing features of these atlases was their static nature - the end user could not change the content of EA/AtIS. Base maps are very important element of any EA/AtIS. In classical type EA/AtIS they were static datasets, which consisted of two parts: the topographic data of a fixed scale and data of the administrative-territorial division of Ukraine. It is important to note that the technique of topographic data production was based on the use of direct channels of topographic entity observation (such as aerial photography) for the selected scale. Changes in the information technology of the past half-decade are characterized by the advent of the "Web 2.0 epoch". Due to this, in cartography appeared such phenomena as, for example, "neo-cartography" and various mapping platforms like OpenStreetMap. These changes have forced developers of EA/AtIS to use new atlas basemaps. Our approach is described in the article. The phenomenon of neo-cartography and/or Web 2.0 cartography are analysed by authors using previously developed Conceptual framework of EA/AtIS. This framework logically explains the cartographic phenomena relations of three formations: Web 1.0, Web 1.0x1.0 and Web 2.0. Atlas basemaps of the Web 2.0 epoch are integrated information systems. We use several ways to integrate separate atlas basemaps into the information system - by building: weak integrated information system, structured system and meta-system. This integrated information system consists of several basemaps and falls under the definition of "big data". In real projects it is already used the basemaps of three strata: Conceptual

  8. ATLAS BASEMAPS IN WEB 2.0 EPOCH

    Directory of Open Access Journals (Sweden)

    V. Chabaniuk

    2016-06-01

    Full Text Available The authors have analyzed their experience of producing various Electronic Atlases (EA) and Atlas Information Systems (AtIS) of the so-called "classical type". These EA/AtIS have been implemented in the past decade in the Web 1.0 architecture (e.g., the National Atlas of Ukraine, the Atlas of radioactive contamination of Ukraine, and others). One of the main distinguishing features of these atlases was their static nature: the end user could not change the content of the EA/AtIS. Base maps are a very important element of any EA/AtIS. In classical-type EA/AtIS they were static datasets consisting of two parts: topographic data of a fixed scale and data on the administrative-territorial division of Ukraine. It is important to note that the technique of topographic data production was based on the use of direct channels of topographic entity observation (such as aerial photography) for the selected scale. Changes in information technology over the past half-decade are characterized by the advent of the "Web 2.0 epoch". As a result, phenomena such as "neo-cartography" and various mapping platforms like OpenStreetMap have appeared in cartography. These changes have forced developers of EA/AtIS to use new atlas basemaps. Our approach is described in the article. The phenomena of neo-cartography and/or Web 2.0 cartography are analysed by the authors using the previously developed Conceptual framework of EA/AtIS. This framework logically explains the relations of cartographic phenomena across three formations: Web 1.0, Web 1.0x1.0 and Web 2.0. Atlas basemaps of the Web 2.0 epoch are integrated information systems. We use several ways to integrate separate atlas basemaps into the information system - by building a weakly integrated information system, a structured system or a meta-system. This integrated information system consists of several basemaps and falls under the definition of "big data". In real projects, basemaps of three strata are already used

  9. ATLAS Facility Description Report

    International Nuclear Information System (INIS)

    Kang, Kyoung Ho; Moon, Sang Ki; Park, Hyun Sik; Cho, Seok; Choi, Ki Yong

    2009-04-01

    A thermal-hydraulic integral effect test facility, ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been constructed at KAERI (Korea Atomic Energy Research Institute). The ATLAS has the same two-loop features as the APR1400 and is designed according to the well-known scaling method suggested by Ishii and Kataoka to simulate the various test scenarios as realistically as possible. It is a half-height and 1/288-volume scaled test facility with respect to the APR1400. The fluid system of the ATLAS consists of a primary system, a secondary system, a safety injection system, a break simulating system, a containment simulating system, and auxiliary systems. The primary system includes a reactor vessel, two hot legs, four cold legs, a pressurizer, four reactor coolant pumps, and two steam generators. The secondary system of the ATLAS is simplified to a circulating loop type. Most of the safety injection features of the APR1400 and the OPR1000 are incorporated into the safety injection system of the ATLAS. In the ATLAS test facility, about 1300 instruments are installed to precisely investigate the thermal-hydraulic behavior during simulations of the various test scenarios. This report describes the scaling methodology, the geometric data of the individual components, and the specifications and locations of the instruments in detail.

  10. Status of the AFP Project in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224260; The ATLAS collaboration

    2017-01-01

    The status of the AFP project in the ATLAS experiment is given. In 2016 one arm of the AFP detector was installed and first data have been taken. In parallel with the integration of the AFP subdetector into the ATLAS TDAQ and DCS, beam tests and preparations for the installation of the second arm are being performed.

  11. Non-local statistical label fusion for multi-atlas segmentation.

    Science.gov (United States)

    Asman, Andrew J; Landman, Bennett A

    2013-02-01

    Multi-atlas segmentation provides a general purpose, fully-automated approach for transferring spatial information from an existing dataset ("atlases") to a previously unseen context ("target") through image registration. The method to resolve voxelwise label conflicts between the registered atlases ("label fusion") has a substantial impact on segmentation quality. Ideally, statistical fusion algorithms (e.g., STAPLE) would result in accurate segmentations as they provide a framework to elegantly integrate models of rater performance. The accuracy of statistical fusion hinges upon accurately modeling the underlying process of how raters err. Despite success on human raters, current approaches inaccurately model multi-atlas behavior as they fail to seamlessly incorporate exogenous intensity information into the estimation process. As a result, locally weighted voting algorithms represent the de facto standard fusion approach in clinical applications. Moreover, regardless of the approach, fusion algorithms are generally dependent upon large atlas sets and highly accurate registration as they implicitly assume that the registered atlases form a collectively unbiased representation of the target. Herein, we propose a novel statistical fusion algorithm, Non-Local STAPLE (NLS). NLS reformulates the STAPLE framework from a non-local means perspective in order to learn what label an atlas would have observed, given perfect correspondence. Through this reformulation, NLS (1) seamlessly integrates intensity into the estimation process, (2) provides a theoretically consistent model of multi-atlas observation error, and (3) largely diminishes the need for large atlas sets and very high-quality registrations. We assess the sensitivity and optimality of the approach and demonstrate significant improvement in two empirical multi-atlas experiments. Copyright © 2012 Elsevier B.V. All rights reserved.
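
    To make the locally weighted voting baseline mentioned in this record concrete, here is a minimal Python sketch of intensity-weighted label fusion. It is an illustration only, not the Non-Local STAPLE algorithm proposed in the paper; the array shapes, names and the Gaussian intensity weighting are assumptions.

        # Sketch of locally weighted voting for multi-atlas label fusion.
        # Shapes and the Gaussian intensity weighting are illustrative assumptions.
        import numpy as np

        def locally_weighted_vote(atlas_labels, atlas_intensities, target_intensity, sigma=10.0):
            # atlas_labels:      (n_atlases, n_voxels) integer labels from registered atlases
            # atlas_intensities: (n_atlases, n_voxels) registered atlas intensities
            # target_intensity:  (n_voxels,) target image intensities
            diff = atlas_intensities - target_intensity[None, :]
            weights = np.exp(-(diff ** 2) / (2 * sigma ** 2))
            labels = np.unique(atlas_labels)
            # Accumulate similarity-weighted votes per label, then argmax per voxel.
            votes = np.stack([(weights * (atlas_labels == l)).sum(axis=0) for l in labels])
            return labels[np.argmax(votes, axis=0)]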

  12. A Solar Atlas for Building-Integrated Photovoltaic Electricity Resource Assessment

    DEFF Research Database (Denmark)

    Möller, Bernd; Nielsen, Steffen; Sperling, Karl

    While photovoltaic energy gathers momentum as power costs increase and panel costs decrease, the total technical and economic potentials for building integrated solar energy in Denmark remain largely unidentified. The current net metering feed-in scheme is restricted to 6kW plant size, limiting...... large scale application. This paper presents a solar atlas based on a high-resolution digital elevation model (DEM) of all 2.9 million buildings in the country, combined with a building register. The 1.6 m resolution DEM has been processed into global radiation input, solar energy output and production....... The continuous assessment of solar electricity generation potentials by marginal costs, ownership and plant type presented in the paper may be used for defining long term policies for the development of photovoltaic energy, as well as political instruments such as a multi-tier feed-in tariff....

  13. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation
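
    The light-weight MPI wrapper idea described in this record can be sketched in a few lines with mpi4py: one rank per core, each running an independent single-threaded payload. This is a hedged illustration, not the actual PanDA pilot code; the payload script and file names are hypothetical placeholders.

        # Sketch of a light-weight MPI wrapper: each rank runs one single-threaded
        # payload, filling a multi-core worker node with independent jobs.
        # run_payload.sh and the file naming are hypothetical placeholders.
        from mpi4py import MPI
        import subprocess

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        cmd = ["./run_payload.sh", f"input_{rank:04d}.dat"]
        with open(f"payload_{rank:04d}.log", "w") as log:
            ret = subprocess.call(cmd, stdout=log, stderr=subprocess.STDOUT)

        # Collect return codes on rank 0 for simple bookkeeping.
        codes = comm.gather(ret, root=0)
        if rank == 0:
            print(f"{codes.count(0)}/{len(codes)} payloads succeeded")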

  14. Integrating a dynamic data federation into the ATLAS distributed data management system

    CERN Document Server

    Berghaus, Frank; The ATLAS collaboration

    2018-01-01

    Input data for applications that run in cloud computing centres can be stored at remote repositories, typically with multiple copies of the most popular data stored at many sites. Locating and retrieving the remote data can be challenging, and we believe that federating the storage can address this problem. In this approach, the closest copy of the data is used based on geographical or other information. Currently, we are using the dynamic data federation, Dynafed, a software solution developed by CERN IT. Dynafed supports several industry standards for connection protocols, such as Amazon S3, Microsoft Azure and HTTP with WebDAV extensions. Dynafed functions as an abstraction layer under which protocol-dependent authentication details are hidden from the user, requiring the user to only provide an X509 certificate. We have set up an instance of Dynafed and integrated it into the ATLAS distributed data management system, Rucio. We report on the challenges faced during the installation and integration.
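
    Because Dynafed exposes standard HTTP/WebDAV, a client-side read can be sketched with plain HTTP tooling. The endpoint URL, proxy path and CA directory below are hypothetical placeholders, not the actual configuration described in the record.

        # Sketch: fetching a file through an HTTP/WebDAV federation endpoint
        # such as Dynafed, authenticating with a grid X509 proxy certificate.
        # URL, proxy path and CA directory are hypothetical placeholders.
        import requests

        url = "https://dynafed.example.org/atlas/dataset/file.root"
        proxy = "/tmp/x509up_u1000"  # combined client certificate and key (PEM)

        resp = requests.get(url, cert=proxy,
                            verify="/etc/grid-security/certificates", stream=True)
        resp.raise_for_status()
        with open("file.root", "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                out.write(chunk)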

  15. Hierarchical Control of the ATLAS Experiment

    CERN Document Server

    Barriuso-Poy, Alex; Llobet-Valero, E

    2007-01-01

    Control systems at High Energy Physics (HEP) experiments are becoming increasingly complex, mainly due to the size, complexity and data volume associated with the front-end instrumentation. In particular, this becomes visible for the ATLAS experiment at the LHC accelerator at CERN. ATLAS will be the largest particle detector ever built, the result of an international collaboration of more than 150 institutes. The experiment is composed of 9 different specialized sub-detectors that perform different tasks and have different requirements for operation. The system in charge of the safe and coherent operation of the whole experiment is called the Detector Control System (DCS). This thesis presents the integration of the ATLAS DCS into a global control tree following the natural segmentation of the experiment into sub-detectors and smaller sub-systems. The integration of the many different systems composing the DCS includes issues such as: back-end organization, process model identification, fault detection, synchronization ...

  16. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    Energy Technology Data Exchange (ETDEWEB)

    Vandelli, Wainer, E-mail: wainer.vandelli@cern.c

    2010-04-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to the mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to efficiently transport event data with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Routing and streaming capabilities as well as monitoring and data accounting functionalities are also fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for the evaluation of the performance of the ATLAS DataFlow in terms of functionality, robustness and stability. In addition, operating the system far from its design specifications helped in exercising its flexibility and contributed to understanding its limitations. Moreover, the integration with the detector and the interfacing with the off-line data processing and management were also able to take advantage of this extended data-taking period. In this paper we report on the usage of the DataFlow infrastructure during the ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  17. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    International Nuclear Information System (INIS)

    Vandelli, Wainer

    2010-01-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to the mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to efficiently transport event data with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Routing and streaming capabilities as well as monitoring and data accounting functionalities are also fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for the evaluation of the performance of the ATLAS DataFlow in terms of functionality, robustness and stability. In addition, operating the system far from its design specifications helped in exercising its flexibility and contributed to understanding its limitations. Moreover, the integration with the detector and the interfacing with the off-line data processing and management were also able to take advantage of this extended data-taking period. In this paper we report on the usage of the DataFlow infrastructure during the ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  18. Data Federation Strategies for ATLAS using XRootD

    CERN Document Server

    Gardner, R; The ATLAS collaboration; Duckeck, G; Elmsheuser, J; Hanushevski, A; Hönig, F; Iven, J; Legger, F; Vukotic, I; Yang, W

    2014-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the w...

  19. Data Federation Strategies for ATLAS using XRootD

    CERN Document Server

    Gardner, R; The ATLAS collaboration; Duckeck, G; Elmsheuser, J; Hanushevski, A; Hönig, F; Iven, J; Legger, F; Vukotic, I; Yang, W

    2013-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the w...

  20. The new ATLAS Fast Calorimeter Simulation

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00223142; The ATLAS collaboration

    2017-01-01

    Current and future needs for large-scale simulated samples motivate the development of reliable fast simulation techniques. The new Fast Calorimeter Simulation is an improved parameterized response of single particles in the ATLAS calorimeter that aims to accurately emulate the key features of the detailed calorimeter response as simulated with Geant4, yet approximately ten times faster. Principal component analysis and machine learning techniques are used to improve the performance and decrease the memory needs compared to the current version of the ATLAS Fast Calorimeter Simulation. A prototype of this new Fast Calorimeter Simulation is in development and its integration into the ATLAS simulation infrastructure is ongoing.

  1. The new ATLAS Fast Calorimeter Simulation

    Science.gov (United States)

    Schaarschmidt, J.; ATLAS Collaboration

    2017-10-01

    Current and future needs for large-scale simulated samples motivate the development of reliable fast simulation techniques. The new Fast Calorimeter Simulation is an improved parameterized response of single particles in the ATLAS calorimeter that aims to accurately emulate the key features of the detailed calorimeter response as simulated with Geant4, yet approximately ten times faster. Principal component analysis and machine learning techniques are used to improve the performance and decrease the memory needs compared to the current version of the ATLAS Fast Calorimeter Simulation. A prototype of this new Fast Calorimeter Simulation is in development and its integration into the ATLAS simulation infrastructure is ongoing.
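
    The role of principal component analysis in such a parameterization can be illustrated with a toy example: compressing per-shower energy deposits in many calorimeter cells into a handful of components. The data shapes and random toy showers below are assumptions for illustration only, not the actual ATLAS parameterization.

        # Toy illustration of PCA-based compression of calorimeter showers.
        # 5000 synthetic showers over 200 cells; all numbers are assumptions.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        showers = rng.gamma(2.0, 1.0, size=(5000, 200))

        pca = PCA(n_components=10)
        coeffs = pca.fit_transform(showers)      # compact representation
        approx = pca.inverse_transform(coeffs)   # parameterized response

        print("explained variance:", pca.explained_variance_ratio_.sum())
        print("mean reconstruction error:", np.abs(approx - showers).mean())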

  2. Data federation strategies for ATLAS using XRootD

    Science.gov (United States)

    Gardner, Robert; Campana, Simone; Duckeck, Guenter; Elmsheuser, Johannes; Hanushevsky, Andrew; Hönig, Friedrich G.; Iven, Jan; Legger, Federica; Vukotic, Ilija; Yang, Wei; Atlas Collaboration

    2014-06-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the wide area network and staging of remote data files to local disk. To support job-brokering decisions, a time-dependent cost-of-data-access matrix is made taking into account network performance and key site performance factors. The system's response to production-scale physics analysis workloads, either from individual end-users or ATLAS analysis services, is discussed.
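
    As an illustration of the direct-access mode described in this record, a remote read through a federation redirector can be sketched with the XRootD Python bindings. The redirector host and file path are placeholders, and the bindings are assumed to be installed; this is not an official ATLAS tool.

        # Sketch: direct read of a remote file via an XRootD redirector.
        # Host and path are placeholders; assumes the xrootd bindings exist locally.
        from XRootD import client

        f = client.File()
        status, _ = f.open("root://redirector.example.org//atlas/data/file.root")
        if not status.ok:
            raise IOError(status.message)
        status, data = f.read(offset=0, size=1024)  # read the first kilobyte
        f.close()
        print(len(data), "bytes read")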

  3. ATLAS: Now under new management

    CERN Multimedia

    Katarina Anthony

    2013-01-01

    On 1 March, the ATLAS Collaboration welcomed a new spokesperson, Dave Charlton (University of Birmingham), and two new deputy spokespersons, Thorsten Wengler (CERN) and Beate Heinemann (University of California, Berkeley and LBNL). The Bulletin takes a look at what’s in store for one of the world’s largest scientific collaborations.   ATLAS members at the 2010 collaboration meeting in Copenhagen. Image: Rune Johansen and Troels Petersen. ATLAS spokesperson Dave Charlton has seen the collaboration through countless milestones: from construction to start-up to the 4 July 2012 announcement, he’s been an integral part of the team. Now, after twelve years with the collaboration, Dave is moving into the main office for the next two years. “2012 was a landmark year for ATLAS,” says Dave. “We spent a lot of time in the limelight and, in many ways, all eyes are still on us. But with the shutdown now under way, our focus is ...

  4. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

    PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. The future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful as they combine the resources of hundreds of thousands of CPUs. However, their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in a huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation and rigorous unit and integration testing. The presentation reports the application of this procedure at the Titan HPC and Summit PowerPC at Oak Ridge Computin...

  5. Evolution of the ATLAS Nightly Build System

    International Nuclear Information System (INIS)

    Undrus, A

    2012-01-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code that currently contains 2200 packages with 4 million C++ and 1.4 million python scripting lines written by about 1000 developers. Recent development was focused on the integration of the ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides a fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.

  6. Second ATLAS Domestic Standard Problem (DSP-02) For A Code Assessment

    International Nuclear Information System (INIS)

    Kim, Yeonsik; Choi, Kiyong; Cho, Seok; Park, Hyunsik; Kang, Kyungho; Song, Chulhwa; Baek, Wonpil

    2013-01-01

    KAERI (Korea Atomic Energy Research Institute) has been operating an integral effect test facility, the Advanced Thermal-Hydraulic Test Loop for Accident Simulation (ATLAS), for transient and accident simulations of advanced pressurized water reactors (PWRs). Using ATLAS, a high-quality integral effect test database has been established for major design basis accidents of the APR1400 plant. A Domestic Standard Problem (DSP) exercise using the ATLAS database was promoted to transfer the database to domestic nuclear industries and contribute to improving a safety analysis methodology for PWRs. This 2nd ATLAS DSP (DSP-02) exercise aims at an effective utilization of an integral effect database obtained from ATLAS, the establishment of a cooperation framework among the domestic nuclear industry, a better understanding of the thermal hydraulic phenomena, and an investigation into the possible limitation of the existing best-estimate safety analysis codes. A small break loss of coolant accident with a 6-inch break at the cold leg was determined as a target scenario by considering its technical importance and by incorporating interests from participants. This DSP exercise was performed in an open calculation environment where the integral effect test data was open to participants prior to the code calculations. This paper includes major information of the DSP-02 exercise as well as comparison results between the calculations and the experimental data.

  7. SECOND ATLAS DOMESTIC STANDARD PROBLEM (DSP-02 FOR A CODE ASSESSMENT

    Directory of Open Access Journals (Sweden)

    YEON-SIK KIM

    2013-12-01

    Full Text Available KAERI (Korea Atomic Energy Research Institute) has been operating an integral effect test facility, the Advanced Thermal-Hydraulic Test Loop for Accident Simulation (ATLAS), for transient and accident simulations of advanced pressurized water reactors (PWRs). Using ATLAS, a high-quality integral effect test database has been established for major design basis accidents of the APR1400 plant. A Domestic Standard Problem (DSP) exercise using the ATLAS database was promoted to transfer the database to domestic nuclear industries and contribute to improving a safety analysis methodology for PWRs. This 2nd ATLAS DSP (DSP-02) exercise aims at an effective utilization of an integral effect database obtained from ATLAS, the establishment of a cooperation framework among the domestic nuclear industry, a better understanding of the thermal hydraulic phenomena, and an investigation into the possible limitation of the existing best-estimate safety analysis codes. A small break loss of coolant accident with a 6-inch break at the cold leg was determined as a target scenario by considering its technical importance and by incorporating interests from participants. This DSP exercise was performed in an open calculation environment where the integral effect test data was open to participants prior to the code calculations. This paper includes major information of the DSP-02 exercise as well as comparison results between the calculations and the experimental data.

  8. The Next Generation ARC Middleware and ATLAS Computing Model

    CERN Document Server

    Filipcic, A; The ATLAS collaboration; Smirnova, O; Konstantinov, A; Karpenko, D

    2012-01-01

    The distributed NDGF Tier-1 and associated Nordugrid clusters are well integrated into the ATLAS computing model but follow a slightly different paradigm than other ATLAS resources. The current strategy does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware with its several new technologies provides new possibilities in development of the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between the sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for the computing sites to participate in ATLAS' global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new ...

  9. Comparison report of open calculations for ATLAS Domestic Standard Problem (DSP 02)

    International Nuclear Information System (INIS)

    Choi, Ki Yong; Kim, Y. S.; Kang, K. H.; Cho, S.; Park, H. S.; Choi, N. H.; Kim, B. D.; Min, K. H.; Park, J. K.; Chun, H. G.; Yu, Xin Guo; Kim, H. T.; Song, C. H.; Sim, S. K.; Jeon, S. S.; Kim, S. Y.; Kang, D. G.; Choi, T. S.; Kim, Y. M.; Lim, S. G.; Kim, H. S.; Kang, D. H.; Lee, G. H.; Jang, M. J.

    2012-09-01

    KAERI (Korea Atomic Energy Research Institute) has been operating an integral effect test facility, the Advanced Thermal Hydraulic Test Loop for Accident Simulation (ATLAS), for transient and accident simulations of advanced pressurized water reactors (PWRs). By using the ATLAS, a high-quality integral effect test database has been established for major design basis accidents of the APR1400. A Domestic Standard Problem (DSP) exercise using the ATLAS database was promoted in order to transfer the database to domestic nuclear industries and to contribute to improving the safety analysis methodology for PWRs. This 2nd ATLAS DSP exercise was led by KAERI in collaboration with KINS, following the successful completion of the 1st ATLAS DSP in 2009. The exercise aims at the effective utilization of the integral effect database obtained from the ATLAS, the establishment of a cooperation framework among the domestic nuclear industry, a better understanding of thermal hydraulic phenomena, and an investigation of the possible limitations of the existing best-estimate safety analysis codes. A small break loss of coolant accident with a 6-inch break at the cold leg was determined as the target scenario by considering its technical importance and by incorporating interests from participants. Twelve domestic organizations, including universities, government and nuclear industries, joined this DSP 02 exercise, and eleven of them submitted their calculation results. This DSP exercise was performed in an open calculation environment where the integral effect test data was open to participants prior to code calculations. This report includes all information of the 2nd ATLAS DSP (DSP 02) exercise as well as comparison results between the calculations and the experimental data.

  10. Two ATLAS suppliers honoured

    CERN Multimedia

    2007-01-01

    The ATLAS experiment has recognised the outstanding contribution of two firms to the pixel detector. [Photo caption: Recipients of the supplier award with Peter Jenni, ATLAS spokesperson, and Maximilian Metzger, CERN Secretary-General.] At a ceremony held at CERN on 28 November, the ATLAS collaboration presented awards to two of its suppliers that had produced sensor wafers for the pixel detector. The CiS Institut für Mikrosensorik of Erfurt in Germany has supplied 655 sensor wafers containing a total of 1652 sensor tiles and the firm ON Semiconductor has supplied 515 sensor wafers (1177 sensor tiles) from its foundry at Roznov in the Czech Republic. Both firms have successfully met the very demanding requirements. ATLAS's huge pixel detector is very complicated, requiring expertise in highly specialised integrated microelectronics and precision mechanics. Pixel detector project leader Kevin Einsweiler admits that when the project was first propo...

  11. The ATLAS IBL CO2 Cooling System

    CERN Document Server

    Verlaat, Bartholomeus; The ATLAS collaboration

    2016-01-01

    The ATLAS Pixel detector has been equipped with an extra B-layer in the space obtained by a reduced beam pipe. This new pixel detector, called the ATLAS Insertable B-Layer (IBL), was installed in 2014 and is operational in the current ATLAS data taking. The IBL detector is cooled with evaporative CO2 and is the first of its kind in ATLAS. The ATLAS IBL CO2 cooling system is designed for lower temperature operation (<-35°C) than the previously developed CO2 cooling systems in High Energy Physics experiments. The cold temperatures are required to protect the pixel sensors from the high expected radiation dose, up to 550 fb^-1 of integrated luminosity. This paper describes the design, development, construction and commissioning of the IBL CO2 cooling system. It describes the challenges overcome and the important lessons learned for the development of future systems which are now under design for the Phase-II upgrade detectors.

  12. Quantitative Evaluation of Atlas-based Attenuation Correction for Brain PET in an Integrated Time-of-Flight PET/MR Imaging System.

    Science.gov (United States)

    Yang, Jaewon; Jian, Yiqiang; Jenkins, Nathaniel; Behr, Spencer C; Hope, Thomas A; Larson, Peder E Z; Vigneron, Daniel; Seo, Youngho

    2017-07-01

    Purpose: To assess the patient-dependent accuracy of atlas-based attenuation correction (ATAC) for brain positron emission tomography (PET) in an integrated time-of-flight (TOF) PET/magnetic resonance (MR) imaging system. Materials and Methods: Thirty recruited patients provided informed consent in this institutional review board-approved study. All patients underwent whole-body fluorodeoxyglucose PET/computed tomography (CT) followed by TOF PET/MR imaging. With use of the TOF PET data, PET images were reconstructed with four different attenuation correction (AC) methods: PET with patient CT-based AC (CTAC), PET with ATAC (air and bone from an atlas), PET with ATAC patientBone (air and tissue from the atlas with patient bone), and PET with ATAC boneless (air and tissue from the atlas without bone). For quantitative evaluation, PET mean activity concentration values were measured in 14 1-mL volumes of interest (VOIs) distributed throughout the brain and statistical significance was tested with a paired t test. Results: The mean overall difference (±standard deviation) of PET with ATAC compared with PET with CTAC was -0.69 kBq/mL ± 0.60 (-4.0% ± 3.2). The accuracy of PET with ATAC boneless (-9.4% ± 3.7) was significantly worse than that of PET with ATAC (-4.0% ± 3.2), while the accuracy of PET with ATAC patientBone (-1.5% ± 1.5) improved over that of PET with ATAC (-4.0% ± 3.2). Conclusion: PET/MR imaging achieves similar quantification accuracy to that from CTAC by means of atlas-based bone compensation. However, patient-specific anatomic differences from the atlas cause bone attenuation differences and misclassified sinuses, which result in patient-dependent performance variation of ATAC. © RSNA, 2017 Online supplemental material is available for this article.
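
    The quantitative comparison described in this record (mean activity in 14 matched VOIs, compared with a paired t test) can be reproduced in outline with scipy; the VOI values below are synthetic placeholders, not the study data.

        # Sketch of the VOI comparison: paired t-test between two AC methods.
        # The 14 VOI values are synthetic placeholders, not the study data.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        ctac = rng.normal(17.0, 2.0, size=14)         # kBq/mL, CT-based AC (reference)
        atac = ctac + rng.normal(-0.7, 0.6, size=14)  # atlas-based AC, small bias

        diff_pct = 100.0 * (atac - ctac) / ctac
        t, p = stats.ttest_rel(atac, ctac)
        print(f"mean difference {diff_pct.mean():.1f}% ± {diff_pct.std():.1f}%, p = {p:.3g}")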

  13. LUCID: the ATLAS Luminosity Detector

    CERN Document Server

    Fabbri, Laura; The ATLAS collaboration

    2018-01-01

    A precise measurement of the luminosity is a key component of the ATLAS program: its uncertainty is a systematic uncertainty for all cross-section measurements, from Standard Model processes to new discoveries, and for some precision measurements it can be dominant. To be predictive, a precision compatible with the PDF uncertainty (1-2%) is desired. LUCID (LUminosity Cherenkov Integrating Detector) is sensitive to charged particles generated by the pp collisions. It is the only ATLAS detector dedicated to this purpose and the reference one during the second run of LHC data taking.

  14. Integration of omic networks in a developmental atlas of maize.

    Science.gov (United States)

    Walley, Justin W; Sartor, Ryan C; Shen, Zhouxin; Schmitz, Robert J; Wu, Kevin J; Urich, Mark A; Nery, Joseph R; Smith, Laurie G; Schnable, James C; Ecker, Joseph R; Briggs, Steven P

    2016-08-19

    Coexpression networks and gene regulatory networks (GRNs) are emerging as important tools for predicting functional roles of individual genes at a system-wide scale. To enable network reconstructions, we built a large-scale gene expression atlas composed of 62,547 messenger RNAs (mRNAs), 17,862 nonmodified proteins, and 6227 phosphoproteins harboring 31,595 phosphorylation sites quantified across maize development. Networks in which nodes are genes connected on the basis of highly correlated expression patterns of mRNAs were very different from networks that were based on coexpression of proteins. Roughly 85% of highly interconnected hubs were not conserved in expression between RNA and protein networks. However, networks from either data type were enriched in similar ontological categories and were effective in predicting known regulatory relationships. Integration of mRNA, protein, and phosphoprotein data sets greatly improved the predictive power of GRNs. Copyright © 2016, American Association for the Advancement of Science.
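
    The coexpression networks described in this record connect genes whose expression patterns are highly correlated; the construction can be sketched in a few lines of numpy. The expression matrix and the |r| cutoff below are synthetic assumptions for illustration.

        # Sketch of coexpression-network construction: genes are nodes, edges
        # join pairs correlated above a cutoff. Data and cutoff are synthetic.
        import numpy as np

        rng = np.random.default_rng(2)
        expr = rng.normal(size=(100, 30))   # 100 genes x 30 developmental samples

        corr = np.corrcoef(expr)            # gene-gene Pearson correlation
        np.fill_diagonal(corr, 0.0)
        edges = np.argwhere(np.abs(corr) > 0.8)
        edges = edges[edges[:, 0] < edges[:, 1]]  # keep each undirected edge once
        print(f"{len(edges)} edges among {expr.shape[0]} genes")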

  15. Integration of the trigger and data acquisition systems in ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Abolins, M [Michigan State University, Department of Physics and Astronomy, East Lansing, Michigan (United States); Adragna, P [Department of Physics, Queen Mary and Westfield College, University of London, London (United Kingdom); Aleksandrov, E; Aleksandrov, I [Joint Institute for Nuclear Research, Dubna (Russian Federation); Amorim, A [Laboratorio de Instrumentacao e Fisica Experimental, Lisboa (Portugal); Anderson, K [University of Chicago, Enrico Fermi Institute, Chicago, Illinois (United States); Anduaga, X [National University of La Plata, La Plata (United States); Aracena, I; Bartoldus, R [Stanford Linear Accelerator Center (SLAC), Stanford (United States); Asquith, L [Department of Physics and Astronomy, University College London, London (United Kingdom); Avolio, G; Backlund, S [European Laboratory for Particle Physics (CERN), Geneva (Switzerland); Badescu, E [National Institute for Physics and Nuclear Engineering, Institute of Atomic Physics, Bucharest (Romania); Baines, J [Rutherford Appleton Laboratory, Chilton, Didcot (United Kingdom); Beck, H P [Laboratory for High Energy Physics, University of Bern, Bern (Switzerland); Bee, C [Centre de Physique des Particules de Marseille, IN2P3-CNRS, Marseille (France); Bell, P [Department of Physics and Astronomy, University of Manchester, Manchester (United Kingdom); Bell, W H [Department of Physics and Astronomy, University of Glasgow, Glasgow (United Kingdom); Barria, P; Batreanu, S [and others

    2008-07-01

    During 2006 and the first half of 2007, the installation, integration and commissioning of trigger and data acquisition (TDAQ) equipment in the ATLAS experimental area have progressed. There have been a series of technical runs using the final components of the system already installed in the experimental area. Various tests have been run including ones where level 1 preselected simulated proton-proton events have been processed in a loop mode through the trigger and dataflow chains. The system included the readout buffers containing the events, event building, level 2 and event filter trigger algorithms. The scalability of the system with respect to the number of event building nodes used has been studied and quantities critical for the final system, such as trigger rates and event processing times, have been measured using different trigger algorithms as well as different TDAQ components. This paper presents the TDAQ architecture, the current status of the installation and commissioning and highlights the main test results that validate the system.

  16. Integration of the trigger and data acquisition systems in ATLAS

    International Nuclear Information System (INIS)

    Abolins, M; Adragna, P; Aleksandrov, E; Aleksandrov, I; Amorim, A; Anderson, K; Anduaga, X; Aracena, I; Bartoldus, R; Asquith, L; Avolio, G; Backlund, S; Badescu, E; Baines, J; Beck, H P; Bee, C; Bell, P; Bell, W H; Barria, P; Batreanu, S

    2008-01-01

    During 2006 and the first half of 2007, the installation, integration and commissioning of trigger and data acquisition (TDAQ) equipment in the ATLAS experimental area have progressed. There have been a series of technical runs using the final components of the system already installed in the experimental area. Various tests have been run including ones where level 1 preselected simulated proton-proton events have been processed in a loop mode through the trigger and dataflow chains. The system included the readout buffers containing the events, event building, level 2 and event filter trigger algorithms. The scalability of the system with respect to the number of event building nodes used has been studied and quantities critical for the final system, such as trigger rates and event processing times, have been measured using different trigger algorithms as well as different TDAQ components. This paper presents the TDAQ architecture, the current status of the installation and commissioning and highlights the main test results that validate the system

  17. Integration of the Trigger and Data Acquisition Systems in ATLAS

    International Nuclear Information System (INIS)

    Abolins, M.; Adragna, P.; Aleksandrov, E.; Aleksandrov, I.; Amorim, A.; Anderson, K.; Anduaga, X.; Aracena, I.; Asquith, L.; Avolio, G.; Backlund, S.; Badescu, E.; Baines, J.; Barria, P.; Bartoldus, R.; Batreanu, S.; Beck, H.P.; Bee, C.; Bell, P.; Bell, W.H.; Bellomo, M.

    2011-01-01

    During 2006 and the first half of 2007, the installation, integration and commissioning of trigger and data acquisition (TDAQ) equipment in the ATLAS experimental area have progressed. There have been a series of technical runs using the final components of the system already installed in the experimental area. Various tests have been run including ones where level 1 preselected simulated proton-proton events have been processed in a loop mode through the trigger and dataflow chains. The system included the readout buffers containing the events, event building, level 2 and event filter trigger algorithms. The scalability of the system with respect to the number of event building nodes used has been studied and quantities critical for the final system, such as trigger rates and event processing times, have been measured using different trigger algorithms as well as different TDAQ components. This paper presents the TDAQ architecture, the current status of the installation and commissioning and highlights the main test results that validate the system.

  18. Development, Validation and Integration of the ATLAS Trigger System Software in Run 2

    CERN Document Server

    Keyes, Robert; The ATLAS collaboration

    2016-01-01

    The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware and software, associated with various sub-detectors that must seamlessly cooperate in order to select 1 collision of interest out of every 40,000 delivered by the LHC every millisecond. This talk will discuss the challenges, workflow and organization of the ongoing trigger software development, validation and deployment. This development, from the top-level integration and configuration to the individual components responsible for each subsystem, is done to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. This optimization hinges on the reliability and predictability of the software performance, which is why validation is of the utmost importance. The software adheres to a hierarchical release structure, with newly validated releases propagating upwards. Integration tests are carried out on a daily basis to ensure that the releases deployed to the online trigger farm duri...

  19. Data federation strategies for ATLAS using XRootD

    International Nuclear Information System (INIS)

    Gardner, Robert; Vukotic, Ilija; Campana, Simone; Iven, Jan; Duckeck, Guenter; Elmsheuser, Johannes; Hönig, Friedrich G; Legger, Federica; Hanushevsky, Andrew; Yang, Wei

    2014-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the wide area network and staging of remote data files to local disk. To support job-brokering decisions, a time-dependent cost-of-data-access matrix is made taking into account network performance and key site performance factors. The system's response to production-scale physics analysis workloads, either from individual end-users or ATLAS analysis services, is discussed.

  20. Probabilistic atlas-based segmentation of combined T1-weighted and DUTE MRI for calculation of head attenuation maps in integrated PET/MRI scanners.

    Science.gov (United States)

    Poynton, Clare B; Chen, Kevin T; Chonde, Daniel B; Izquierdo-Garcia, David; Gollub, Randy L; Gerstner, Elizabeth R; Batchelor, Tracy T; Catana, Ciprian

    2014-01-01

    We present a new MRI-based attenuation correction (AC) approach for integrated PET/MRI systems that combines both segmentation- and atlas-based methods by incorporating dual-echo ultra-short echo-time (DUTE) and T1-weighted (T1w) MRI data and a probabilistic atlas. Segmented atlases were constructed from CT training data using a leave-one-out framework and combined with T1w, DUTE, and CT data to train a classifier that computes the probability of air/soft tissue/bone at each voxel. This classifier was applied to segment the MRI of the subject of interest, and attenuation maps (μ-maps) were generated by assigning specific linear attenuation coefficients (LACs) to each tissue class. The μ-maps generated with this "Atlas-T1w-DUTE" approach were compared to those obtained from DUTE data using a previously proposed method. For validation of the segmentation results, segmented CT μ-maps were considered to be the "silver standard"; the segmentation accuracy was assessed qualitatively and quantitatively through calculation of the Dice similarity coefficient (DSC). Relative change (RC) maps between the CT- and MRI-based attenuation-corrected PET volumes were also calculated for a global voxel-wise assessment of the reconstruction results. The μ-maps obtained using the Atlas-T1w-DUTE classifier agreed well with those derived from CT; the mean DSCs for the Atlas-T1w-DUTE-based μ-maps across all subjects were higher than those for the DUTE-based μ-maps; the atlas-based μ-maps also showed a lower percentage of misclassified voxels across all subjects. RC maps from the atlas-based technique also demonstrated improvement in the PET data compared to the DUTE method, both globally and regionally.
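
    The Dice similarity coefficient used in this record to score segmentations is DSC = 2|A∩B|/(|A|+|B|) for binary masks A and B; a minimal numpy version with toy masks follows (the masks are synthetic illustrations).

        # Dice similarity coefficient for two binary masks; toy data for illustration.
        import numpy as np

        def dice(a, b):
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        seg_ct = np.zeros((32, 32, 32), dtype=bool);  seg_ct[8:24, 8:24, 8:24] = True
        seg_mr = np.zeros((32, 32, 32), dtype=bool);  seg_mr[9:25, 8:24, 8:24] = True
        print(f"DSC = {dice(seg_ct, seg_mr):.3f}")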

  1. ATLAS Facility and Instrumentation Description Report

    International Nuclear Information System (INIS)

    Kang, Kyoung Ho; Moon, Sang Ki; Park, Hyun Sik

    2009-06-01

    A thermal-hydraulic integral effect test facility, ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been constructed at KAERI (Korea Atomic Energy Research Institute). The ATLAS is a half-height and 1/288-volume scaled test facility with respect to the APR1400. The fluid system of the ATLAS consists of a primary system, a secondary system, a safety injection system, a break simulating system, a containment simulating system, and auxiliary systems. The primary system includes a reactor vessel, two hot legs, four cold legs, a pressurizer, four reactor coolant pumps, and two steam generators. The secondary system of the ATLAS is simplified to a circulating loop type. Most of the safety injection features of the APR1400 and the OPR1000 are incorporated into the safety injection system of the ATLAS. In the ATLAS test facility, about 1300 instruments are installed to precisely investigate the thermal-hydraulic behavior during simulations of the various test scenarios. This report describes the scaling methodology, the geometric data of the individual components, and the specifications and locations of the instruments which are specific to the simulation of the 50% DVI line break accident of the APR1400, supporting the 50th OECD/NEA International Standard Problem Exercise (ISP-50).

  2. ATLAS Upgrade Plans

    CERN Document Server

    Hopkins, W; The ATLAS collaboration

    2014-01-01

    After the successful LHC operation at center-of-mass energies of 7 and 8 TeV in 2010-2012, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering on the order of five times the LHC nominal instantaneous luminosity along with luminosity leveling. The final goal is to extend the dataset from the few hundred fb−1 expected for LHC running to 3000 fb−1 by around 2035 for ATLAS and CMS. In parallel, the experiments need to keep in lockstep with the accelerator to accommodate running beyond the nominal luminosity this decade. Current planning in ATLAS envisions significant upgrades to the detector during the consolidation of the LHC to reach full LHC energy, and further upgrades thereafter. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new...

  3. Advanced Technology Lifecycle Analysis System (ATLAS)

    Science.gov (United States)

    O'Neil, Daniel A.; Mankins, John C.

    2004-01-01

    Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline-oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts in system-oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst-specified parameters to the models and data among the models. An integrator workbook calls a history-based parametric analysis cost model to determine the costs. Also, the integrator estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is

  4. AGIS: Evolution of Distributed Computing information system for ATLAS

    Science.gov (United States)

    Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.

    2015-12-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and the topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  5. Prospects and Results from the AFP Detector in ATLAS

    CERN Document Server

    Gach, Grzegorz; The ATLAS collaboration

    2016-01-01

    The status of the AFP project in the ATLAS experiment is given. In 2016 one arm of the AFP detector was installed and first data have been taken. In parallel with the integration of the AFP subdetector into the ATLAS TDAQ and DCS, beam tests and preparations for the installation of the second arm are being performed.

  6. Event filter monitoring with the ATLAS tile calorimeter

    CERN Document Server

    Fiorini, L

    2008-01-01

    The ATLAS Tile Calorimeter detector is presently involved in an intense phase of subsystems integration and commissioning with muons of cosmic origin. Various monitoring programs have been developed at different levels of the data flow to tune the set-up of the detector running conditions and to provide a fast and reliable assessment of the data quality already during data taking. This paper focuses on the monitoring system integrated in the highest level of the ATLAS trigger system, the Event Filter, and its deployment during the Tile Calorimeter commissioning with cosmic ray muons. The key feature of Event Filter monitoring is the capability of performing detector and data quality control on complete physics events at the trigger level, hence before events are stored on disk. In ATLAS' online data flow, this is the only monitoring system capable of giving a comprehensive event quality feedback.

  7. Physics with Tau Lepton Final States in ATLAS

    Directory of Open Access Journals (Sweden)

    Pingel Almut M.

    2013-05-01

    The ATLAS detector records collisions from two high-energy proton beams circulating in the LHC. Analyses with tau leptons in the final state are an integral part of the ATLAS physics program. Here an overview is given of the studies done in ATLAS with hadronically-decaying final-state tau leptons: Standard Model cross-section measurements of Z → ττ, W → τν and tt̅ → bb̅ e/μν τhadν; τ polarization measurements in W → τν decays; Higgs searches and various searches for physics beyond the Standard Model.

  8. The zero degree calorimeter for the ATLAS experiment

    International Nuclear Information System (INIS)

    Leite, Marco

    2009-01-01

    The Zero Degree Calorimeter (ZDC) of the ATLAS experiment at the LHC will measure neutral particles (photons and neutrons) produced at very forward directions in heavy-ion and low-luminosity p+p collisions. While its main application will be the determination of the centrality of heavy-ion collisions and trigger integration in ATLAS, the design of the ZDC also opens up many other interesting heavy-ion physics possibilities, such as measurements of directed flow (by directly measuring the reaction plane formed by the spectator neutrons' transverse momenta) and ultra-peripheral quarkonia photo-production. During low-luminosity p+p runs, the ZDC will give valuable information about forward neutron and neutral meson production cross-sections at LHC energies. The ZDC will also be used in independent luminosity measurements during the early stages of LHC operation, helping to achieve a better understanding of the standard ATLAS luminosity monitor system (LUCID). The ZDC comprises two sampling calorimeter modules, located symmetrically along the beam line, each one 140 m from the ATLAS interaction point. This is the region where the accelerator neutral beam absorbers are installed, and the ZDC is strategically inserted inside a slot in these absorbers, extending the ATLAS calorimeter pseudo-rapidity coverage to |η| > 8. Each ZDC module is divided in 4 sections: one electromagnetic followed by three hadronic sections. Built using tungsten absorber blocks interspersed with quartz fibers for sampling of the shower, each of these modules provides energy measurements of the incident particles. The electromagnetic and the first hadronic section can also perform position measurements perpendicular to the projected beam direction due to their segmentation. Instrumenting this region presents several challenges due to the extremely high radiation levels. To account for the large energy dynamic range (14 bits equivalent), a combination

  9. Integrating Network Awareness in ATLAS Distributed Computing Using the ANSE Project

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Petrosyan, Artem; Batista, Jorge Horacio; Mc Kee, Shawn Patrick

    2015-01-01

    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking technologies now becoming available, such as software defined networking, hold the potential of further leveraging the network to optimize workflows and dataflows, through proactive control of the network fabric on the part of high-level applications such as experiment workload management and data management systems. End-to-end monitoring of networks using perfSONAR, combined with data flow performance metrics, further allows applications to adapt based on real-time conditions. We will describe efforts underway in ATLAS on integrating network awareness at the application level, particularly in workload management, building upon ...

  10. The geosystems of complex geographical atlases

    Directory of Open Access Journals (Sweden)

    Jovanović Jasmina

    2012-01-01

    Complex geographical atlases represent geosystems of different hierarchical rank, complexity and diversity, scale and connection. They bring together a large number of different pieces of information about geospace, presented in a systematized, correlated and readily apparent form. The degree of information conveyed by an atlas is determined by its content structure and form of presentation. The quality of an atlas depends on the method of visualization and on the quality of the geodata. Cartographic visualization is a cognitive process: the analysis converts geospatial data into knowledge. A complex geographical atlas represents an information complex of spatially and temporally coordinated databases on geosystems of different complexity and territorial scope. Each geographical atlas defines a concrete geosystem; its systemic organization (structural and contextual) determines its complexity and concreteness. In complex atlases, the attributes of geosystems are modeled and information is given in a systematized, graphically unified form. The atlas can be considered a database, and in composing a database, semantic analysis of the data is important. The result of semantic modeling is expressed in the structuring of information, in emphasizing logical connections between phenomena and processes, and in defining their classes according to degree of similarity. This, in turn, makes searching for the needed information efficient when the database is used. An atlas map has a special power to integrate sets of geodata and present information in a user-friendly, understandable visual and tactile way. Composing an atlas by systemic cartography requires information on concretely defined geosystems of different hierarchical levels, the application of scientific methods and the making of an adequate number of analytical, synthetic

  11. New ATLAS Higgs physics results

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    New Higgs physics results from the ATLAS experiment using the full Run-1 LHC dataset, corresponding to an integrated luminosity of approximately 25 fb-1 of proton-proton collisions at 7 TeV and 8 TeV, will be presented.

  12. ATLAS computing on CSCS HPC

    Science.gov (United States)

    Filipcic, A.; Haug, S.; Hostettler, M.; Walker, R.; Weber, M.

    2015-12-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was the highest ranked European system on TOP500 in 2014, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, a partial GPU acceleration of the Geant4 detector simulations has been implemented.

  13. Semiconductor tracker final integration and commissioning in the ATLAS detector

    International Nuclear Information System (INIS)

    Robichaud-Veronneau, Andree

    2008-01-01

    The SemiConductor Tracker (SCT) is part of the Inner Detector of the ATLAS experiment at the LHC. It is located between the Pixel detector and the Transition Radiation Tracker (TRT). During 2006 and 2007, the SCT was installed in its final position inside the ATLAS detector. The SCT barrel was lowered in 2006 and was tested for connectivity and noise. Common tests with the TRT to look for pick-up noise and grounding issues were also performed. The SCT end-caps were installed during summer 2007 and will undergo similar checks. The results from the various tests done before and after installation will be presented here.

  14. Large Scale Software Building with CMake in ATLAS

    Science.gov (United States)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.

  15. ATLAS & Google — "Data Ocean" R&D Project

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    ATLAS is facing several challenges with respect to its computing requirements for LHC Run-3 (2020-2023) and the HL-LHC runs (2025-2034). The challenges are not specific to ATLAS and/or the LHC, but are common to the HENP computing community. Most importantly, storage continues to be the driving cost factor and at the current growth rate cannot absorb the increased physics output of the experiment. Novel computing models with a more dynamic use of storage and computing resources need to be considered. This project aims to start an R&D effort for evaluating and adopting novel IT technologies for HENP computing. ATLAS and Google plan to launch an R&D project to integrate Google cloud resources (Storage and Compute) into the ATLAS distributed computing environment. After a series of teleconferences, a face-to-face brainstorming meeting in Denver, CO at the Supercomputing 2017 conference resulted in this proposal for a first prototype of the "Data Ocean" project. The idea is threefold: (a) to allow ATLAS to explore the...

  16. A module concept for the upgrades of the ATLAS pixel system using the novel SLID-ICV vertical integration technology

    Energy Technology Data Exchange (ETDEWEB)

    Beimforde, M; Andricek, L; Macchiolo, A; Moser, H-G; Nisius, R; Richter, R H; Weigell, P, E-mail: Michael.Beimforde@mpp.mpg.de [Max-Planck-Institut fuer Physik, Foehringer Ring 6, D-80805, Muenchen (Germany)

    2010-12-15

    The presented R and D activity is focused on the development of a new pixel module concept for the foreseen upgrades of the ATLAS detector towards the Super LHC employing thin n-in-p silicon sensors together with a novel vertical integration technology. A first set of pixel sensors with active thicknesses of 75 μm and 150 μm has been produced using a thinning technique developed at the Max-Planck-Institut fuer Physik (MPP) and the MPI Semiconductor Laboratory (HLL). Charge Collection Efficiency (CCE) measurements of these sensors irradiated with 26 MeV protons up to a particle fluence of 10^16 n_eq cm^-2 have been performed, yielding higher values than expected from the present radiation damage models. The novel integration technology, developed by the Fraunhofer Institut EMFT, consists of the Solid-Liquid InterDiffusion (SLID) interconnection, being an alternative to the standard solder bump-bonding, and Inter-Chip Vias (ICVs) for routing signals vertically through electronics. This allows for extracting the digitized signals from the back side of the readout chips, avoiding wire-bonding cantilevers at the edge of the devices and thus increases the active area fraction. First interconnections have been performed with wafers containing daisy chains to investigate the efficiency of SLID at wafer-to-wafer and chip-to-wafer level. In a second interconnection process the present ATLAS FE-I3 readout chips were connected to dummy sensor wafers at chip-to-wafer level. Preparations of ICV within the ATLAS readout chips for back side contacting and the future steps towards a full demonstrator module will be presented.

  17. A module concept for the upgrades of the ATLAS pixel system using the novel SLID-ICV vertical integration technology

    International Nuclear Information System (INIS)

    Beimforde, M; Andricek, L; Macchiolo, A; Moser, H-G; Nisius, R; Richter, R H; Weigell, P

    2010-01-01

    The presented R and D activity is focused on the development of a new pixel module concept for the foreseen upgrades of the ATLAS detector towards the Super LHC employing thin n-in-p silicon sensors together with a novel vertical integration technology. A first set of pixel sensors with active thicknesses of 75 μm and 150 μm has been produced using a thinning technique developed at the Max-Planck-Institut fuer Physik (MPP) and the MPI Semiconductor Laboratory (HLL). Charge Collection Efficiency (CCE) measurements of these sensors irradiated with 26 MeV protons up to a particle fluence of 10^16 n_eq cm^-2 have been performed, yielding higher values than expected from the present radiation damage models. The novel integration technology, developed by the Fraunhofer Institut EMFT, consists of the Solid-Liquid InterDiffusion (SLID) interconnection, being an alternative to the standard solder bump-bonding, and Inter-Chip Vias (ICVs) for routing signals vertically through electronics. This allows for extracting the digitized signals from the back side of the readout chips, avoiding wire-bonding cantilevers at the edge of the devices and thus increases the active area fraction. First interconnections have been performed with wafers containing daisy chains to investigate the efficiency of SLID at wafer-to-wafer and chip-to-wafer level. In a second interconnection process the present ATLAS FE-I3 readout chips were connected to dummy sensor wafers at chip-to-wafer level. Preparations of ICV within the ATLAS readout chips for back side contacting and the future steps towards a full demonstrator module will be presented.

  18. ATLAS RPC performance on a dedicated cosmic ray test-stand

    International Nuclear Information System (INIS)

    Liberti, B.; Aielli, G.; Camarri, P.; Cardarelli, R.; Corradi, M.; Di Ciaccio, A.; Di Stante, L.; Palummo, L.; Pastori, E.; Salamon, A.; Santonico, R.; Solfaroli, E.

    2008-01-01

    596 RPC chambers have been assembled in the ATLAS Muon Spectrometer, covering a 7300 m^2 sensitive area with 355,000 readout channels. 1116 RPC units were produced and tested before integration and installation on the experiment [A. Aloisio et al., 'The trigger chambers of the ATLAS muon spectrometer: production and tests', Nuclear Instruments and Methods A535 (2004) 265-271]. 192 ATLAS RPCs, the Barrel Outer Large (BOL) units, were tested in the INFN Roma Tor Vergata test stand

  19. Distributed analysis in ATLAS using GANGA

    International Nuclear Information System (INIS)

    Elmsheuser, Johannes; Brochu, Frederic; Egede, Ulrik; Reece, Will; Williams, Michael; Gaidioz, Benjamin; Maier, Andrew; Moscicki, Jakub; Vanderster, Daniel; Lee, Hurng-Chun; Pajchel, Katarina; Samset, Bjorn; Slater, Mark; Soroko, Alexander; Cowan, Greig

    2010-01-01

    Distributed data analysis using Grid resources is one of the fundamental applications in high energy physics to be addressed and realized before the start of LHC data taking. The demands on resource management are very high: in every experiment up to a thousand physicists will be submitting analysis jobs to the Grid. Appropriate user interfaces and helper applications have to be made available to assure that all users can use the Grid without expertise in Grid technology. These tools enlarge the number of Grid users from a few production administrators to potentially all participating physicists. The GANGA job management system (http://cern.ch/ganga), developed as a common project between the ATLAS and LHCb experiments, provides and integrates these kinds of tools. GANGA provides a simple and consistent way of preparing, organizing and executing analysis tasks within the experiment analysis framework, implemented through a plug-in system. It allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid, hiding Grid technicalities. We report on the plug-ins and our experiences of distributed data analysis using GANGA within the ATLAS experiment. Support for all Grids presently used by ATLAS, namely the LCG/EGEE, NDGF/NorduGrid, and OSG/PanDA, is provided. The integration of and interaction with the ATLAS data management system DQ2 is a key functionality of GANGA. Intelligent job brokering is set up by using the job-splitting mechanism together with data-set and file location knowledge. The brokering is aided by an automated system that regularly processes test analysis jobs at all ATLAS DQ2-supported sites. Large numbers of analysis jobs can be sent to the locations of data following the ATLAS computing model. GANGA supports, amongst other things, tasks of user analysis with reconstructed data and small-scale production of Monte Carlo data.
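
    For readers unfamiliar with GANGA, a job definition of the kind described above looks roughly like the following, as typed at the ganga interactive prompt where classes such as Job and Executable are preloaded; backend and splitter choices depend on the local configuration and GANGA version, so treat this as a sketch rather than a canonical recipe.

        # A minimal GANGA-style job, entered at the ganga prompt; names like
        # Job, Executable and Local are preloaded there by GANGA itself.
        j = Job()
        j.name = 'test-analysis'
        j.application = Executable(exe='echo', args=['hello'])
        j.backend = Local()   # switch to a Grid backend for large-scale runs
        j.submit()

    The point of the design is visible even in this toy example: changing only the backend attribute moves the same job from a local test run to a Grid submission.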

  20. Commissioning of ATLAS

    CERN Document Server

    Thomas, J

    2008-01-01

    The status of the commissioning of the ATLAS experiment as of May 2008 is presented. The subdetector integration in recent milestone weeks is described, especially the cosmic commissioning in milestone week M6, focussing on combined running and track analysis of the muon detector and inner detector. The liquid argon and tile calorimeters have achieved near-full operation, and are integrated with the calorimeter trigger. The High-Level-Trigger infrastructure is installed and algorithms tested in technical runs. Problems with the inner detector cooling compressors are being fixed.

  1. New ATLAS Software & Computing Organization

    CERN Multimedia

    Barberis, D

    Following the election by the ATLAS Collaboration Board of Dario Barberis (Genoa University/INFN) as Computing Coordinator and David Quarrie (LBNL) as Software Project Leader, it was considered necessary to modify the organization of the ATLAS Software & Computing ("S&C") project. The new organization is based upon the following principles: separation of the responsibilities for computing management from those of software development, with the appointment of a Computing Coordinator and a Software Project Leader who are both members of the Executive Board; hierarchical structure of responsibilities and reporting lines; coordination at all levels between TDAQ, S&C and Physics working groups; integration of the subdetector software development groups with the central S&C organization. A schematic diagram of the new organization can be seen in Fig.1. Figure 1: new ATLAS Software & Computing organization. Two Management Boards will help the Computing Coordinator and the Software Project...

  2. An Integration Framework Tool for ATCA Chassis in the ATLAS Detector Control System

    International Nuclear Information System (INIS)

    Reed, Robert Graham

    2015-01-01

    The Large Hadron Collider at CERN is scheduled to undergo a major upgrade in 2022. The ATLAS collaboration will make major modifications to the detector to account for the increased luminosity. More specifically, a large proportion of the current front-end electronics on the Tile Calorimeter sub-detector will be upgraded and relocated to the back-end. A Demonstrator program has been established as a proof of principle. A new system will be required to house, manage and connect this new hardware. The proposed solution is an Advanced Telecommunications Computing Architecture (ATCA) chassis, which will not only house the hardware but also allow advanced management features and control at the hardware level by integrating the ATCA chassis into the Detector Control System. (paper)

  3. Prospects and Results from the AFP Detector in ATLAS

    CERN Document Server

    Gach, Grzegorz; The ATLAS collaboration

    2017-01-01

    In 2016 one arm of the AFP detector was installed and first data have been taken. In parallel with the integration of the AFP subdetector into the ATLAS TDAQ and DCS systems, beam tests and preparations for the installation of the 2nd arm are being performed. In this report, the status of the AFP project in the ATLAS experiment is discussed.

  4. Connecting imaging mass spectrometry and magnetic resonance imaging-based anatomical atlases for automated anatomical interpretation and differential analysis.

    Science.gov (United States)

    Verbeeck, Nico; Spraggins, Jeffrey M; Murphy, Monika J M; Wang, Hui-Dong; Deutch, Ariel Y; Caprioli, Richard M; Van de Plas, Raf

    2017-07-01

    Imaging mass spectrometry (IMS) is a molecular imaging technology that can measure thousands of biomolecules concurrently without prior tagging, making it particularly suitable for exploratory research. However, the data size and dimensionality often makes thorough extraction of relevant information impractical. To help guide and accelerate IMS data analysis, we recently developed a framework that integrates IMS measurements with anatomical atlases, opening up opportunities for anatomy-driven exploration of IMS data. One example is the automated anatomical interpretation of ion images, where empirically measured ion distributions are automatically decomposed into their underlying anatomical structures. While offering significant potential, IMS-atlas integration has thus far been restricted to the Allen Mouse Brain Atlas (AMBA) and mouse brain samples. Here, we expand the applicability of this framework by extending towards new animal species and a new set of anatomical atlases retrieved from the Scalable Brain Atlas (SBA). Furthermore, as many SBA atlases are based on magnetic resonance imaging (MRI) data, a new registration pipeline was developed that enables direct non-rigid IMS-to-MRI registration. These developments are demonstrated on protein-focused FTICR IMS measurements from coronal brain sections of a Parkinson's disease (PD) rat model. The measurements are integrated with an MRI-based rat brain atlas from the SBA. The new rat-focused IMS-atlas integration is used to perform automated anatomical interpretation and to find differential ions between healthy and diseased tissue. IMS-atlas integration can serve as an important accelerator in IMS data exploration, and with these new developments it can now be applied to a wider variety of animal species and modalities. This article is part of a Special Issue entitled: MALDI Imaging, edited by Dr. Corinna Henkel and Prof. Peter Hoffmann.

  5. Electronics calibration board for the ATLAS liquid argon calorimeters

    International Nuclear Information System (INIS)

    Colas, J.; Dumont-Dayot, N.; Marchand, J.F.; Massol, N.; Perrodo, P.; Wingerter-Seez, I.; De La Taille, C.; Imbert, P.; Richer, J.P.; Seguin Moreau, N.; Serin, L.

    2008-01-01

    To calibrate the energy response of the ATLAS liquid argon calorimeter, an electronics calibration board has been designed; it delivers a signal whose shape is close to the calorimeter ionization current signal, with an amplitude of up to 100 mA in 50 Ω and a 16-bit dynamic range. The amplitude of this signal is designed to be uniform over all calorimeter channels, stable in time, and with an integral linearity much better than that of the electronics readout. The various R&D phases and most of the difficulties met are discussed and illustrated by many measurements. The custom-designed circuits are described and the layout of the ATLAS calibration board presented. The procedure used to qualify the boards is explained and the performance obtained is illustrated: a dynamic range up to 3 TeV in three energy scales with an integral linearity better than 0.1% in each of them, a response uniformity better than 0.2%, and a stability better than 0.1%. The performance of the board is well within the ATLAS requirements. Finally, in situ measurements done on the ATLAS calorimeter are shown to validate this performance.

  6. Mechanical Commissioning of the ATLAS Barrel Toroid Magnet

    CERN Document Server

    Foussat, A; Dudarev, A; Bajas, H; Védrine, P; Berriaud, C; Sun, Z; Sorbi, M

    2008-01-01

    ATLAS is a general-purpose detector designed to run at the highest luminosity at the CERN Large Hadron Collider. Its features include the 4 T Barrel Toroid magnet, the largest superconducting magnet (25 m long, 20 m in diameter), which provides the magnetic field for the ATLAS muon spectrometer. The coils, integrated at CERN, were tested individually at a maximum current of 22 kA in 2005. Following the mechanical assembly of the Barrel Toroid in the ATLAS underground cavern, the test of the full Barrel Toroid was performed in October 2006. Further tests are foreseen at the end of 2007, when the system will include the two End Cap Toroids (ECT). The paper gives an overview of the good mechanical test results achieved in comparison with model predictions, and the experience gained in the mechanical behavior of the ATLAS toroidal coils is discussed.

  7. ATLAS Detector Upgrade Prospects

    CERN Document Server

    Dobre, Monica; The ATLAS collaboration

    2016-01-01

    After the successful operation at the center-of-mass energies of 7 and 8 TeV in 2010-2012, the LHC was ramped up and successfully took data at a center-of-mass energy of 13 TeV in 2015. Meanwhile, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering of the order of five times the LHC nominal instantaneous luminosity along with luminosity leveling. The ultimate goal is to extend the dataset from the few hundred fb−1 expected for LHC running to 3000 fb−1 by around 2035 for ATLAS and CMS. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new all-silicon tracker, significant upgrades of the calorimeter and muon systems, as well as improved triggers and data acquisition. ATLAS is also examining potential benefits of extens...

  8. ATLAS detector upgrade prospects

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00184940; The ATLAS collaboration

    2017-01-01

    After the successful operation at the centre-of-mass energies of 7 and 8 TeV in 2010-2012, the LHC was ramped up and successfully took data at a centre-of-mass energy of 13 TeV in 2015. Meanwhile, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering of the order of five times the LHC nominal instantaneous luminosity along with luminosity levelling. The ultimate goal is to extend the dataset from the few hundred fb$^{-1}$ expected for LHC running to 3000 fb$^{-1}$ by around 2035 for ATLAS and CMS. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new all-silicon tracker, significant upgrades of the calorimeter and muon systems, as well as improved triggers and data acquisition. ATLAS is also examining potential benefits of ...

  9. Evaluation of ATLAS 100% DVI Line Break Using TRACE Code

    International Nuclear Information System (INIS)

    Huh, Byung Gil; Bang, Young Seok; Cheong, Ae Ju; Woo, Sweng Woong

    2011-01-01

    ATLAS (Advanced Thermal-Hydraulic Test Loop for Accident Simulation) is an integral effect test facility at KAERI. Its installation was completed in 2005 to simulate accidents for the OPR1000 and the APR1400. Since then, several tests for LBLOCA and DVI line break scenarios have been performed successfully to resolve the safety issues of the APR1400. In particular, a DVI line break is considered as a separate spectrum among the SBLOCAs in the APR1400, because the DVI line is directly connected to the reactor vessel and the thermal-hydraulic behaviors are expected to differ from those for cold leg injection. However, there are not enough experimental data for the DVI line break. Therefore, integral effect data for the DVI line break from ATLAS are very useful and available for the improvement and validation of safety codes. For the DVI line break in ATLAS, several analyses using the MARS and RELAP codes were performed in the ATLAS DSP (Domestic Standard Problem) meetings. However, the TRACE code has not yet been used to simulate a DVI line break. TRACE has been developed as the unified code for reactor thermal-hydraulic analyses at the USNRC. In this study, the 100% DVI line break in ATLAS was evaluated with the TRACE code. The objectives of this study are to identify the prediction capability of TRACE for the major thermal-hydraulic phenomena of a DVI line break in ATLAS.

  10. ATLAS computing on CSCS HPC

    CERN Document Server

    Hostettler, Michael Artur; The ATLAS collaboration; Haug, Sigve; Walker, Rodney; Weber, Michele

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, some GPU acceleration of the Geant4 detector simulations has been implemented to justify the allocation request for this machine.

  11. ATLAS computing on CSCS HPC

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration; Weber, Michele; Walker, Rodney; Hostettler, Michael Artur

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Further, some GPU acceleration of the Geant4 detector simulations was implemented to justify the allocation request for this machine.

  12. ATLAS diamond Beam Condition Monitor

    CERN Document Server

    Gorišek, A; Dolenc, I; Frais-Kölbl, H; Griesmayer, E; Kagan, H; Korpar, S; Kramberger, G; Mandic, I; Meyer, M; Mikuz, M; Pernegger, H; Smith, S; Trischuk, W; Weilhammer, P; Zavrtanik, M

    2007-01-01

    The ATLAS experiment has chosen to use diamond for its Beam Condition Monitor (BCM) given its radiation hardness, low capacitance and short charge collection time. In addition, due to their low leakage current, diamonds do not require cooling. The ATLAS Beam Condition Monitoring system is based on single beam bunch crossing measurements rather than integrating the accumulated particle flux. Its fast electronics will allow separation of LHC collisions from background events such as beam gas interactions or beam accidents. There will be two stations placed symmetrically about the interaction point along the beam axis. Timing of signals from the two stations will provide almost ideal separation of beam–beam interactions and background events. The ATLAS BCM module consists of diamond pad detectors coupled to a two-stage RF current amplifier. The production of the final detector modules is almost done. A S/N ratio of 10:1 has been achieved with minimum ionizing particles (MIPs) in the test beam...

  13. Two ATLAS trackers become one

    CERN Multimedia

    2006-01-01

    The ATLAS inner detector barrel comes one step closer to completion as the semiconductor tracker is merged with the transition radiation tracker. ATLAS collaborators prepare for the insertion of the semiconductor tracker (SCT, behind) into the transition radiation tracker (TRT, in front). Some had hoped it would fall on Valentine's Day. But despite the slight delay, Friday 17 February was lovingly embraced as 'Conception Day,' when dozens of physicists and engineers from the international collaboration gathered to witness the insertion of the ATLAS semiconductor tracker into the transition radiation tracker, a major milestone in the assembly of the experiment's inner detector. With just millimeters of room for error, the cylindrical trackers were slid into each other as inner detector integration coordinator Heinz Pernegger issued commands and scientists held out flashlights, lay on their backs and stood on ladders to take careful measurements. Each tracker is the result of about 10 years of international ...

  14. The new ATLAS Fast Calorimeter Simulation

    CERN Document Server

    Jacka, Petr; The ATLAS collaboration

    2018-01-01

    With the huge amount of data collected with ATLAS, there is a need to produce a large number of simulated events. Such productions are very CPU- and time-consuming when using the full GEANT4 simulation. FastCaloSim is a program that quickly simulates the ATLAS calorimeter response, based on a parameterization of the GEANT4 energy deposits of several kinds of particles in a grid of energy and eta. A new version of FastCaloSim is under development and its integration into the ATLAS simulation infrastructure is ongoing. The use of machine learning techniques improves the performance and decreases the memory usage. Dedicated parameterizations for the forward calorimeters are being studied. First results of the new FastCaloSim show substantial improvements in the description of energy and shower shape variables, including variables for jet substructure.
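
    The parameterization idea can be illustrated with a small Python sketch: the response is binned in (energy, eta) and a stored distribution is sampled instead of running full GEANT4. The bin edges and the Gaussian response model below are hypothetical placeholders, not FastCaloSim's actual parameterization.

        # Illustrative grid-based calorimeter response parameterization.
        import numpy as np

        E_EDGES = np.array([1, 10, 100, 1000])       # GeV, hypothetical bins
        ETA_EDGES = np.array([0.0, 1.5, 3.2, 4.9])   # hypothetical bins
        MEAN = np.full((3, 3), 0.95)    # mean response fraction per bin
        SIGMA = np.full((3, 3), 0.05)   # response width per bin

        def fast_sim_energy(true_e, eta, rng=None):
            """Sample a reconstructed energy from the stored (E, eta) grid."""
            rng = rng or np.random.default_rng()
            i = np.clip(np.digitize(true_e, E_EDGES) - 1, 0, 2)
            j = np.clip(np.digitize(abs(eta), ETA_EDGES) - 1, 0, 2)
            return true_e * rng.normal(MEAN[i, j], SIGMA[i, j])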

  15. Multiple brain atlas database and atlas-based neuroimaging system.

    Science.gov (United States)

    Nowinski, W L; Fang, A; Nguyen, B T; Raphel, J K; Jagannathan, L; Raghavan, R; Bryan, R N; Miller, G A

    1997-01-01

    For the purpose of developing multiple, complementary, fully labeled electronic brain atlases and an atlas-based neuroimaging system for analysis, quantification, and real-time manipulation of cerebral structures in two and three dimensions, we have digitized, enhanced, segmented, and labeled the following print brain atlases: Co-Planar Stereotaxic Atlas of the Human Brain by Talairach and Tournoux, Atlas for Stereotaxy of the Human Brain by Schaltenbrand and Wahren, Referentially Oriented Cerebral MRI Anatomy by Talairach and Tournoux, and Atlas of the Cerebral Sulci by Ono, Kubik, and Abernathey. Three-dimensional extensions of these atlases have been developed as well. All two- and three-dimensional atlases are mutually preregistered and may be interactively registered with an actual patient's data. An atlas-based neuroimaging system has been developed that provides support for reformatting, registration, visualization, navigation, image processing, and quantification of clinical data. The anatomical index contains about 1,000 structures and over 400 sulcal patterns. Several new applications of the brain atlas database also have been developed, supported by various technologies such as virtual reality, the Internet, and electronic publishing. Fusion of information from multiple atlases assists the user in comprehensively understanding brain structures and identifying and quantifying anatomical regions in clinical data. The multiple brain atlas database and atlas-based neuroimaging system have substantial potential impact in stereotactic neurosurgery and radiotherapy by assisting in visualization and real-time manipulation in three dimensions of anatomical structures, in quantitative neuroradiology by allowing interactive analysis of clinical data, in three-dimensional neuroeducation, and in brain function studies.

  16. The ATLAS tracker strip detector for HL-LHC

    CERN Document Server

    Cormier, Kyle James Read; The ATLAS collaboration

    2016-01-01

    As part of the ATLAS upgrades for the High Luminosity LHC (HL-LHC), the current ATLAS Inner Detector (ID) will be replaced by a new Inner Tracker (ITk). The ITk will consist of two main components: semiconductor pixels at the innermost radii, and silicon strips covering larger radii out as far as the ATLAS solenoid magnet, including the volume currently occupied by the ATLAS Transition Radiation Tracker (TRT). The primary challenges faced by the ITk are the higher planned readout rate of ATLAS, the high density of charged particles in HL-LHC conditions for which tracks need to be resolved, and the corresponding high radiation doses that the detector and electronics will receive. The ITk strips community is currently working on designing and testing all aspects of the sensors, readout, mechanics, cooling and integration to meet these goals, and a Technical Design Report is being prepared. This talk is an overview of the strip detector component of the ITk, highlighting the current status and the road ahead.

  17. The ATLAS tracker strip detector for HL-LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00512833; The ATLAS collaboration

    2017-01-01

    As part of the ATLAS upgrades for the High Luminosity LHC (HL-LHC), the current ATLAS Inner Detector (ID) will be replaced by a new Inner Tracker (ITk). The ITk will consist of two main components: semiconductor pixels at the innermost radii, and silicon strips covering larger radii out as far as the ATLAS solenoid magnet, including the volume currently occupied by the ATLAS Transition Radiation Tracker (TRT). The primary challenges faced by the ITk are the higher planned readout rate of ATLAS, the high density of charged particles in HL-LHC conditions for which tracks need to be resolved, and the corresponding high radiation doses that the detector and electronics will receive. The ITk strips community is currently working on designing and testing all aspects of the sensors, readout, mechanics, cooling and integration to meet these goals, and a Technical Design Report is being prepared. This talk is an overview of the strip detector component of the ITk, highlighting the current status and the road ahead.

  18. Automating ATLAS Computing Operations using the Site Status Board

    CERN Document Server

    Andreeva, J; The ATLAS collaboration; Campana, S; Di Girolamo, A; Espinal Curull, X; Gayazov, S; Magradze, E; Nowotka, MM; Rinaldi, L; Saiz, P; Schovancova, J; Stewart, GA; Wright, M

    2012-01-01

    The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment intensively uses the SSB for the distributed computing shifts, for estimating data processing and data transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities in case of potential problems. The ATLAS SSB provides a real-time aggregated monitoring view and keeps the history of the monitoring metrics. Based on this history, the usability of a site from the perspective of ATLAS is calculated. The presentation will describe how the SSB is integrated in the ATLAS operations and computing infrastructure and will cover implementation details of the ATLAS SSB sensors and alarm system, based on the information in the SSB. It will demonstrate the positive impact of the use of SS...
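
    A minimal sketch of the usability-from-history calculation and automatic exclusion described above might look as follows; the metric encoding and the 80% threshold are illustrative assumptions, not the actual SSB implementation.

        # Hypothetical site-usability calculation from a history of
        # pass/fail monitoring checks, with threshold-based exclusion.
        def usability(history):
            """Fraction of passed checks in the recent metric history."""
            return sum(history) / len(history) if history else 0.0

        def sites_to_exclude(metric_history, threshold=0.8):
            return [site for site, hist in metric_history.items()
                    if usability(hist) < threshold]

        history = {"SITE_A": [1, 1, 1, 0], "SITE_B": [0, 0, 1, 0]}
        print(sites_to_exclude(history))   # -> ['SITE_B']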

  19. Design and Implementation of the ATLAS Detector Control System

    CERN Document Server

    Boterenbrood, H; Cook, J; Filimonov, V; Hallgren, B I; Heubers, W P J; Khomoutnikov, V; Ryabov, Yu; Varela, F

    2004-01-01

    The overall dimensions of the ATLAS experiment and its harsh environment, due to radiation and magnetic field, represent new challenges for the implementation of the Detector Control System. It supervises all hardware of the ATLAS detector, monitors the infrastructure of the experiment, and provides information exchange with the LHC accelerator. The system must allow for the operation of the different ATLAS sub-detectors in stand-alone mode, as required for calibration and debugging, as well as the coherent and integrated operation of all sub-detectors for physics data taking. For this reason, the Detector Control System is logically arranged to map the hierarchical organization of the ATLAS detector. Special requirements are placed onto the ATLAS Detector Control System because of the large number of distributed I/O channels and of the inaccessibility of the equipment during operation. Standardization is a crucial issue for the design and implementation of the control system because of the large variety of e...

  20. The Next Generation ARC Middleware and ATLAS Computing Model

    International Nuclear Information System (INIS)

    Filipčič, Andrej; Cameron, David; Konstantinov, Aleksandr; Karpenko, Dmytro; Smirnova, Oxana

    2012-01-01

    The distributed NDGF Tier-1 and associated NorduGrid clusters are well integrated into the ATLAS computing environment but follow a slightly different paradigm than other ATLAS resources. The current paradigm does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware with its several new technologies provides new possibilities in development of the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between the sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for the computing sites to participate in ATLAS’ global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new services for job control and data transfer. Integration of the ARC core into the EMI middleware provides a natural way to implement the new services using the ARC components
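
    The kind of job-control service that such an API enables can be sketched as follows; `arc_client` and its methods are placeholders chosen for illustration, not the real ARC Python bindings, which expose a richer job- and data-management interface.

        # Sketch of a pilot-style submission with pre-cached input files,
        # built on a hypothetical middleware client object.
        import time

        def run_with_precached_inputs(arc_client, job_description, input_files):
            """Stage inputs to the site cache, then submit and poll the job."""
            for url in input_files:
                arc_client.cache(url)            # site-level pre-caching
            job_id = arc_client.submit(job_description)
            while arc_client.status(job_id) not in ("FINISHED", "FAILED"):
                time.sleep(60)                   # poll until completion
            return arc_client.status(job_id)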

  1. Pre-test analysis of a LBLOCA using the design data of the ATLAS facility, a reduced-height integral effect test loop for PWRs

    International Nuclear Information System (INIS)

    Hyun-Sik Park; Ki-Yong Choi; Dong-Jin Euh; Tae-Soon Kwon; Won-Pil Baek

    2005-01-01

    The simulation capability of the KAERI integral effect test facility ATLAS (Advanced Thermal-Hydraulic Test Loop for Accident Simulation) has been assessed for a large-break loss-of-coolant accident (LBLOCA) transient. The ATLAS facility is a 1/2 height-scaled, 1/144 area-scaled (1/288 in volume scale), full-pressure test loop based on the design features of the APR1400, an evolutionary pressurized water reactor developed by Korean industry. The APR1400 has four mechanically separated hydraulic trains for the emergency core cooling system (ECCS) with direct vessel injection (DVI). The APR1400 design features have brought about several new safety issues related to the LBLOCA, including steam-water interaction, ECC bypass, and boiling in the reactor vessel downcomer. The ATLAS facility will be used to investigate the multiple responses between systems or between components during various anticipated transients. The ATLAS facility has been designed according to a scaling method mainly based on the model suggested by Ishii and Kataoka. The ATLAS facility is being evaluated against the prototype plant APR1400 with the same control logics and accident scenarios using the best-estimate code MARS. This paper briefly introduces the basic design features of the ATLAS facility and presents the results of a pre-test analysis for a postulated LBLOCA of a cold leg. The LBLOCA analysis has been conducted to assess the validity of the applied scaling law and the similarity between the ATLAS facility and the APR1400. As the core simulator of the ATLAS facility has only 10% of the scaled full-power capability, the blowdown phase cannot be simulated, and the starting point of the accident scenario is around the end of blowdown. Finding the correct initial conditions is therefore an important problem. For the analyzed LBLOCA scenario, the ATLAS facility showed very similar thermal-hydraulic characteristics to the APR1400.
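
    As a quick consistency check on the quoted scale factors (a sketch of the reduced-height scaling arithmetic, not taken from the paper), the volume scale is the product of the height and flow-area scales:

        \frac{V_m}{V_p} \;=\; \frac{l_m}{l_p} \cdot \frac{a_m}{a_p}
                        \;=\; \frac{1}{2} \cdot \frac{1}{144} \;=\; \frac{1}{288}

    which matches the 1/288 volume scale stated for the facility.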

  2. Agenesis of the posterior arch of the atlas

    Directory of Open Access Journals (Sweden)

    Torriani Martin

    2002-01-01

    PURPOSE: To illustrate the radiological findings and review the current literature concerning a rare congenital abnormality of the posterior arch of the atlas. CASE REPORT: An adult female without neurological symptoms presented with an absent posterior arch of the atlas, examined with plain films and helical computerized tomography. Complete agenesis of the posterior arch of the atlas is a rare entity that can be easily identified by means of plain films. Although it is generally asymptomatic, atlantoaxial instability and neurological deficits may occur because of structural instability. Computerized tomography provides a means of assessing the extent of this abnormality and can help evaluate the integrity of neural structures. Although considered to be rare entities, defects of the posterior arch of the atlas may be discovered as incidental asymptomatic findings in routine cervical radiographs. Familiarity with this abnormality may aid medical professionals in the correct management of these cases.

  3. Readout and trigger for the AFP detector at the ATLAS experiment at LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00097773; The ATLAS collaboration; Kocian, Martin; Lopez Paz, Ivan; Avoni, Giulio

    2017-01-01

    The ATLAS Forward Proton (AFP) is a new detector system in ATLAS that allows the study of events with protons scattered at very small angles. The final design foresees four stations at distances of 205 and 217 m from the ATLAS interaction point, on both sides of the detector, exploiting the Roman Pot technology. In 2016 two stations in one arm were installed; installation of the other two is planned for 2017. This article describes details of the installed hardware, firmware and software leading to the full integration with the ATLAS central trigger and data acquisition systems.

  4. Beyond Standard Model searches in B decays with ATLAS

    CERN Document Server

    Turchikhin, Semen; The ATLAS collaboration

    2018-01-01

    This proceedings contribution presents recent results of the ATLAS experiment at the LHC on heavy flavour measurements sensitive to possible contributions of new physics. Two measurements are reviewed: the angular analysis of the $B^0\to\mu^+\mu^- K^{*0}$ decay and the measurement of the relative width difference of the $B^0$-$\bar{B}^0$ system. The first uses a data sample with an integrated luminosity of 20.3 fb$^{-1}$ collected by ATLAS at a centre-of-mass energy $\sqrt{s} = 8$ TeV, and the second benefits from the full ATLAS Run-1 dataset with an additional 4.9 fb$^{-1}$ collected at $\sqrt{s} = 7$ TeV.

  5. The ATLAS multi-user upgrade and potential applications

    Energy Technology Data Exchange (ETDEWEB)

    Mustapha, B.; Nolen, J. A.; Savard, G.; Ostroumov, P. N.

    2017-12-01

    With the recent integration of the CARIBU-EBIS charge breeder into the ATLAS accelerator system to provide more pure and efficient charge breeding of radioactive beams, a multi-user upgrade of the ATLAS facility is being proposed to serve multiple users simultaneously. ATLAS was the first superconducting ion linac in the world and is the US DOE low-energy Nuclear Physics National User Facility. The proposed upgrade will take advantage of the continuous-wave nature of ATLAS and the pulsed nature of the EBIS charge breeder in order to simultaneously accelerate two beams with very close mass-to-charge ratios: one stable, from the existing ECR ion source, and one radioactive, from the newly commissioned EBIS charge breeder. In addition to enhancing the nuclear physics program, beam extraction at different points along the linac will open up the opportunity for other potential applications; for instance, material irradiation studies at ~1 MeV/u and isotope production at ~6 MeV/u or at the full ATLAS energy of ~15 MeV/u. The concept and proposed implementation of the ATLAS multi-user upgrade will be presented, along with future plans to enhance the flexibility of this upgrade.

  6. Atlas of Skeletal SPECT/CT Clinical Images

    International Nuclear Information System (INIS)

    2016-01-01

    The atlas focuses specifically on single photon emission computed tomography/computed tomography (SPECT/CT) in musculoskeletal imaging, and thus illustrates the inherent advantages of the combination of the metabolic and anatomical component in a single procedure. In addition, the atlas provides information on the usefulness of several sets of specific indications. The publication, which serves more as a training tool rather than a textbook, will help to further integrate the SPECT and CT experience in clinical practice by presenting a series of typical cases with many different patterns of SPECT/CT seen in bone scintigraphy

  7. Class Generation for Numerical Wind Atlases

    DEFF Research Database (Denmark)

    Cutler, N.J.; Jørgensen, B.H.; Ersbøll, Bjarne Kjær

    2006-01-01

    A new optimised clustering method is presented for generating wind classes for mesoscale modelling to produce numerical wind atlases. It is compared with the existing method of dividing the data into 12 to 16 sectors and 3 to 7 wind-speed bins, and dividing again according to the stability of the atmosphere. Wind atlases are typically produced using many years of on-site wind observations at many locations. Numerical wind atlases are the result of mesoscale model integrations based on synoptic-scale wind climates and can be produced in a number of hours of computation. 40 years of twice-daily NCEP... adapting to the local topography. The purpose of forming classes is to minimise the computational time for the mesoscale model while still representing the synoptic climate features. Only tried briefly in the past, clustering has traits that can be used to improve the existing class generation method...
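
    The contrast between fixed binning and clustering can be sketched in a few lines of Python; the feature choice (two wind components) and the class count are illustrative assumptions, using scikit-learn's KMeans for the clustering step.

        # Fixed sector/speed binning versus data-driven clustering of wind
        # records into classes for mesoscale model runs (illustrative only).
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        u, v = rng.normal(0, 8, 2000), rng.normal(0, 8, 2000)  # wind components, m/s
        X = np.column_stack([u, v])

        # Existing method: 12 direction sectors x 4 speed bins
        sector = (np.degrees(np.arctan2(v, u)) % 360 // 30).astype(int)
        speed_bin = np.digitize(np.hypot(u, v), [3, 6, 10])

        # Clustering method: let the data place ~48 class centres itself
        classes = KMeans(n_clusters=48, n_init=10, random_state=0).fit_predict(X)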

  8. ATLAS program for advanced thermal-hydraulic safety research

    International Nuclear Information System (INIS)

    Song, Chul-Hwa; Choi, Ki-Yong; Kang, Kyoung-Ho

    2015-01-01

    Highlights: • Major achievements of the ATLAS program are highlighted in conjunction with both developing advanced light water reactor technologies and enhancing nuclear safety. • The ATLAS data were shown to be useful for the development and licensing of new reactors and safety analysis codes, and for nuclear safety enhancement through domestic and international cooperative programs. • A future plan for ATLAS testing is introduced, covering recently emerging safety issues and some generic thermal-hydraulic concerns. - Abstract: This paper highlights the major achievements of the ATLAS program, an integral effect test program for both developing advanced light water reactor technologies and contributing to enhanced nuclear safety. The ATLAS program is closely related to the development of the APR1400 and APR+ reactors and of the SPACE code, a best-estimate system-scale code for the safety analysis of nuclear reactors. The multiple roles of ATLAS testing are emphasized in very close conjunction with the development, licensing, and commercial deployment of these reactors and their safety analysis codes. The role of ATLAS in nuclear safety enhancement is also introduced through examples of its contributions to multi-body cooperative programs such as domestic and international standard problems. Finally, a future plan for the utilization of ATLAS testing is introduced, aimed at tackling recently emerging safety issues such as a prolonged station blackout accident and a medium-size break LOCA, as well as some generic thermal-hydraulic concerns, such as how to resolve multi-dimensional phenomena and the scaling issue.

  9. A Customizable MR Brain Imaging Atlas of Structure and Function for Decision Support.

    Science.gov (United States)

    Sinha, U.; El-Saden, S.; Duckwiler, G.; Thompson, L.; Ardekani, S.; Kangarloo, H.

    2003-01-01

    We present an MR brain atlas of structure and function (diffusion-weighted images). The atlas is customizable in contrast and orientation to match the current patient's images. In addition, the atlas provides normative values of MR parameters. The atlas is designed on informatics principles to provide context-sensitive decision support at the time of primary image interpretation. Additional support for diagnostic interpretation is provided by a list of expert-created, most relevant 'Image Finding Descriptors' that serve as cues to the user. The architecture of the atlas module is integrated into the image workflow of a radiology department to provide support at the time of primary diagnosis. PMID:14728244

  10. New results on Higgs boson physics from the ATLAS experiment

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    The ATLAS Collaboration has recently released several results that shed more light on the nature of the Higgs boson particle and the BEH mechanism. A selection of these Higgs boson results, including some results based on up to 80 fb-1 of integrated luminosity collected at a centre-of-mass energy of 13 TeV with the ATLAS experiment at the LHC, will be presented.

  11. Atlas event production on the EGEE infrastructure

    CERN Document Server

    Espinal, X; Perini, L; Rod, W

    2007-01-01

    ATLAS, one of the four LHC (Large Hadron Collider) experiments at CERN, is devoted to the study of proton-proton and ion-ion collisions at 14 TeV. The ATLAS collaboration is composed of about 2000 scientists spread around the world. The experiment's requirements for next year amount to about 300 TB of storage and a CPU power of about 13 MSI2k, and rely on the GRID philosophy and the EGEE infrastructure. Simulated events are distributed over EGEE by the ATLAS production system. Data have to be processed and must be accessible by a huge number of scientists for analysis. The data throughput for the ATLAS experiment is expected to be 320 MB/s, with an integrated amount of data per year of ~10 PB. The processing and storage need a distributed share of resources, spread worldwide and interconnected with GRID technologies, as the LHC requirements are so demanding. In that sense, event production is the way to produce, process and store data for analysis before the experiment startup, and is performed in a distr...

  12. ATLAS Transition Region Upgrade at Phase-1

    CERN Document Server

    Song, H; The ATLAS collaboration

    2014-01-01

    This report presents the Phase-1 upgrade of the ATLAS L1 Muon trigger in the transition region (1.0<|η|<1.3). The high fake trigger rate in the endcap region 1.0<|η|<2.4 would become a serious problem for the ATLAS L1 Muon trigger system at high luminosity. For the region 1.3<|η|<2.4, covered by the Small Wheel, ATLAS is enhancing the present muon trigger by adding local fake rejection and track angle measurement capabilities. To reduce the rate in the remaining η interval, a similar enhancement has been proposed, adding at the edge of the inner barrel a structure of 3 layers of new-generation RPCs. These RPCs will be based on a thinner gas gap and electrodes with respect to the ATLAS standards, a new high-performance front end integrating fast TDC capabilities, and a new low-profile, light mechanical structure allowing installation in the tiny space available. This design effectively suppresses fake triggers by requiring a coincidence with both the end-cap and the interaction point...

  13. Validation Tools for ATLAS Muon Spectrometer Commissioning

    International Nuclear Information System (INIS)

    Benekos, N.Chr.; Dedes, G.; Laporte, J.F.; Nicolaidou, R.; Ouraou, A.

    2008-01-01

    The ATLAS Muon Spectrometer (MS), currently being installed at CERN, is designed to measure final-state muons of 14 TeV proton-proton interactions at the Large Hadron Collider (LHC) with a good momentum resolution of 2-3% at 10-100 GeV/c and 10% at 1 TeV, taking into account the high-level background environment, the inhomogeneous magnetic field, and the large size of the apparatus (24 m diameter by 44 m length). The MS layout of the ATLAS detector is made of a large toroidal magnet, arrays of high-pressure drift tubes for precise tracking and dedicated fast detectors for the first-level trigger, and is organized in eight Large and eight Small sectors. All the detectors of the barrel toroid have been installed and the commissioning has started with cosmic rays. In order to validate the MS performance using cosmic events, a Muon Commissioning Validation package has been developed and its results are presented in this paper. Integration with the rest of the ATLAS sub-detectors is now being done in the ATLAS cavern.

  14. ATLAS database application enhancements using Oracle 11g

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at the LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post data-taking analysis, file management over the grid, job submission and management, and condition data replication to remote sites. The Oracle Relational Database Management System (RDBMS) has been addressing the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the 260 hosted database schemas (in the most common case each schema is related to a dedicated client application with its own requirements). At the beginning of 2012 all ATLAS databases at CERN have...

  15. Prospects for SUSY discovery based on inclusive searches with the ATLAS detector

    International Nuclear Information System (INIS)

    Ventura, Andrea

    2009-01-01

    The search for Supersymmetry (SUSY) among the possible scenarios of new physics is one of the most relevant goals of the ATLAS experiment running at CERN's Large Hadron Collider. In the present work the expected prospects for discovering SUSY with the ATLAS detector are reviewed, in particular for the first fb-1 of collected integrated luminosity. All studies and results reported here are based on inclusive search analyses realized with Monte Carlo signal and background data simulated through the ATLAS apparatus.

  16. ATLAS Detector Upgrade Prospects

    International Nuclear Information System (INIS)

    Dobre, M

    2017-01-01

    After the successful operation at the centre-of-mass energies of 7 and 8 TeV in 2010-2012, the LHC was ramped up and successfully took data at the centre-of-mass energy of 13 TeV in 2015 and 2016. Meanwhile, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, which will deliver of the order of five times the LHC nominal instantaneous luminosity along with luminosity levelling. The ultimate goal is to extend the dataset from the few hundred fb −1 expected for LHC running by the end of 2018 to 3000 fb −1 by around 2035 for ATLAS and CMS. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new all-silicon tracker, significant upgrades of the calorimeter and muon systems, as well as improved triggers and data acquisition. ATLAS is also examining potential benefits of extensions to larger pseudorapidity, particularly in tracking and muon systems. This report summarizes various improvements to the ATLAS detector required to cope with the anticipated evolution of the LHC luminosity during this decade and the next. A brief overview is also given of physics prospects with a pp centre-of-mass energy of 14 TeV. (paper)

  17. PanDA: distributed production and distributed analysis system for ATLAS

    International Nuclear Information System (INIS)

    Maeno, T

    2008-01-01

    A new distributed software system was developed in the fall of 2005 for the ATLAS experiment at the LHC. This system, called PanDA, provides an integrated service architecture with late binding of jobs, maximal automation through layered services, tight binding with the ATLAS Distributed Data Management system [1], advanced error discovery and recovery procedures, and other features. In this talk, we describe the PanDA software system. Special emphasis is placed on the evolution of PanDA based on one and a half years of real experience in carrying out Computer System Commissioning data production [2] for ATLAS. The architecture of PanDA is well suited for the computing needs of the ATLAS experiment, which is expected to be one of the first HEP experiments to operate at the petabyte scale.

  18. Bd/s -> mu+ mu- in ATLAS

    CERN Document Server

    Guenther, Jaroslav; The ATLAS collaboration

    2016-01-01

    The ATLAS Experiment has conducted a search for the rare decays of Bs and Bd into mu+mu-. A dataset of 25 fb−1 of integrated luminosity of proton-proton collisions, collected during LHC Run 1, was studied to provide the new results presented in this talk. An upper limit is set on the branching ratio BR(Bd to mu+mu-) < 4.2×10−10 at 95% confidence level. For Bs, the ATLAS measurement yields the branching ratio BR(Bs to mu+mu-) = (0.9+1.1−0.8)×10−9. The result is consistent with the Standard Model expectation and with other available measurements.

  19. Progress in ATLAS central solenoid magnet

    CERN Document Server

    Yamamoto, A; Makida, Y; Tanaka, K; Haruyama, T; Yamaoka, H; Kondo, T; Mizumaki, S; Mine, S; Wada, K; Meguro, S; Sotoki, T; Kikuchi, K; ten Kate, H H J

    2000-01-01

    The ATLAS central solenoid magnet is being developed to provide a magnetic field of 2 Tesla in the central tracking volume of the ATLAS detector under construction at the CERN/LHC project. The solenoid coil design features a high-strength aluminum-stabilized superconductor, to make the coil as thin as possible while maintaining its stability, and the pure-aluminum strip technique for quench protection and safety. The solenoid coil is installed in a common cryostat with the LAr calorimeter in order to minimize the cryostat wall material. A transparency of 0.66 radiation lengths is achieved with these integrated efforts. The progress in the solenoid coil fabrication is reported. (8 refs).

  20. MARS input data for steady-state calculation of ATLAS

    International Nuclear Information System (INIS)

    Park, Hyun Sik; Euh, D. J.; Choi, K. Y.; Kwon, T. S.; Jeong, J. J.; Baek, W. P.

    2004-12-01

    An integral effect test loop for Pressurized Water Reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), is under construction by Thermal-Hydraulics Safety Research Division in Korea Atomic Energy Research Institute (KAERI). This report includes calculation sheets of the input for the best-estimate system analysis code, the MARS code, based on the ongoing design features of ATLAS. The ATLAS facility has been designed to have the length scale of 1/2 and area scale of 1/144 compared with the reference plant, APR1400. The contents of this report are divided into three parts: (1) core and reactor vessel, (2) steam generator and steam line, and (3) primary piping, pressurizer and reactor coolant pump. The steady-state analysis for the ATLAS facility will be performed based on these calculation sheets, and its results will be applied to the detailed design of ATLAS. Additionally, the calculation results will contribute to getting optimum test conditions and preliminary operational test conditions for the steady-state and transient experiments

  1. Grid production with the ATLAS Event Service

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration

    2018-01-01

    ATLAS has developed and previously presented a new computing architecture, the Event Service, that allows real-time delivery of fine-grained workloads which process dispatched events (or event ranges) and immediately stream outputs. The principal aim was to profit from opportunistic resources such as commercial cloud, supercomputing, and volunteer computing, and otherwise unused cycles on clusters and grids. During the development and deployment phase, its utility on the grid and on conventional clusters for the exploitation of otherwise unused cycles also became apparent. Here we describe our experience commissioning the Event Service on the grid in the ATLAS production system. We study the performance compared with standard simulation production. We describe the integration with the ATLAS data management system to ensure scalability and compatibility with object stores. Finally, we outline the remaining steps towards a fully commissioned system.

  2. Large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Alexandrov; Kotov, V.; Mineev, M.; Roumiantsev, V.; Wolters, H.; Amorim, A.; Pedro, L.; Ribeiro, A.; Badescu, E.; Caprini, M.; Burckhart-Chromek, D.; Dobson, M.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Nassiakou, M.; Schweiger, D.; Soloviev, I.; Hart, R.; Ryabov, Y.; Moneta, L.

    2001-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system. Feedback is received and returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs in a configuration approaching the final size. Large-scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from other Trigger/DAQ sub-systems was emulated. A brief overview of the online system structure, its components, and the large-scale integration tests and their results is presented.

  3. Radiation Damage Monitoring in the ATLAS Pixel Detector

    CERN Document Server

    Seidel, S

    2013-01-01

    We describe the implementation of radiation damage monitoring using measurement of leakage current in the ATLAS silicon pixel sensors. The dependence of the leakage current upon the integrated luminosity is presented. The measurement of the radiation damage corresponding to integrated luminosity 5.6 fb$^{-1}$ is presented along with a comparison to the theoretical model.

  4. ATLAS upgrades for the next decades

    CERN Document Server

    Hopkins, Walter; The ATLAS collaboration

    2014-01-01

    After the successful LHC operation at the centre-of-mass energies of 7 and 8 TeV in 2010-2012, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering of the order of five times the LHC nominal instantaneous luminosity along with luminosity leveling. The final goal is to extend the dataset from the few hundred fb$^{-1}$ expected for LHC running to 3000 fb$^{-1}$ by around 2035 for ATLAS and CMS. In parallel, the experiments need to keep in lockstep with the accelerator to accommodate running beyond the nominal luminosity this decade. Current planning in ATLAS envisions significant upgrades to the detector during the consolidation of the LHC to reach full LHC energy, and further upgrades beyond that. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for...

  5. A new sub-detector for ATLAS

    CERN Multimedia

    Marco Bruschi

    Since last August, the ATLAS detector family has been joined by a new little member named LUCID, from the acronym "LUminosity Cerenkov Integrating Detector". This may well surprise you if you are already aware that LUCID construction started only in February after its approval by an ATLAS-management mandated review committee. The rapid progress from approval to installation is the result of the close collaboration between groups from Alberta (Canada), INFN Bologna (Italy), Lund (Sweden) and CERN. LUCID is primarily intended to measure the luminosity delivered by the LHC to ATLAS with a systematic uncertainty in the range of a few percent. To achieve such a precision and still meet the demanding installation schedule, the LUCID developers prized simplicity and robustness above all. [Figure captions: one of the LUCID vessels under construction, showing the aluminum Cerenkov tubes and the photomultiplier mount (plugged into the upper flange); the two fully assembled LUCID vessels seen from the front end elect...]

  6. The Third ATLAS ROD Workshop

    CERN Multimedia

    Poggioli, L.

    A new-style Workshop After two successful ATLAS ROD Workshops dedicated to the ROD hardware and held at the Geneva University in 1998 and in 2000, a new style Workshop took place at LAPP in Annecy on November 14-15, 2002. This time the Workshop was fully dedicated to the ROD-TDAQ integration and software in view of the near future integration activities of the final RODs for the detector assembly and commissioning. More precisely, the aim of this workshop was to get from the sub-detectors the parameters needed for T-DAQ, as well as status and plans from ROD builders. On the other hand, what was decided and assumed had to be stated (like EB decisions and URDs), and also support plans. The Workshop gathered about 70 participants from all ATLAS sub-detectors and the T-DAQ community. The quite dense agenda allowed nevertheless for many lively discussions, and for a dinner in the old town of Annecy. The Sessions The Workshop was organized in five main sessions: Assumptions and recommendations Sub-de...

  7. TransAtlasDB: an integrated database connecting expression data, metadata and variants

    Science.gov (United States)

    Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J

    2018-01-01

    Abstract High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The difficulty of accessing such data and interpreting results can be a major impediment to postulating suitable hypotheses, so an innovative storage solution that addresses these limitations, such as hard-disk storage requirements, efficiency and reproducibility, is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of both relational and NoSQL databases for fast and efficient data storage, processing and querying of large datasets from transcript expression analysis with corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amount of data derived from RNAseq analysis, as well as methods of interacting with the database, either via command-line data management workflows, written in Perl, with useful functionalities that simplify the complexity of data storage and manipulation of the massive amounts of data generated from RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results data files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361
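
    To make the hybrid architecture tangible, here is a minimal sketch of the pattern (relational tables for structured metadata, a document store for bulky per-gene results). The schema, field names and values are illustrative assumptions, not TransAtlasDB's actual data model:

      # Sketch of a hybrid relational/NoSQL storage pattern; all names are
      # hypothetical, not TransAtlasDB's real schema.
      import json
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE samples (id TEXT PRIMARY KEY, species TEXT, tissue TEXT)")
      conn.execute("INSERT INTO samples VALUES ('S1', 'chicken', 'liver')")

      # Stand-in for a NoSQL collection keyed by sample id, holding bulky documents.
      document_store = {}
      document_store["S1"] = json.dumps({"GAPDH": 1523.4, "ACTB": 980.1})  # toy FPKM values

      # Query pattern: relational filter first, then fetch the bulky document.
      row = conn.execute("SELECT id FROM samples WHERE tissue = 'liver'").fetchone()
      expression = json.loads(document_store[row[0]])
      print(expression["GAPDH"])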

  8. Pre-Test Analysis of Major Scenarios for ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Euh, Dong-Jin; Choi, Ki-Yong; Park, Hyun-Sik; Kwon, Tae-Soon

    2007-02-15

    A thermal-hydraulic integral effect test facility, ATLAS was constructed at the Korea Atomic Energy Research Institute (KAERI). The ATLAS is a 1/2 reduced height and 1/288 volume scaled test facility based on the design features of the APR1400. The simulation capability of the ATLAS for major design basis accidents (DBAs), including a large-break loss-of-coolant (LBLOCA), DVI line break and main steam line break (MSLB) accidents, is evaluated by the best-estimate system code, MARS, with the same control logics, transient scenarios and nodalization scheme. The validity of the applied scaling law and the thermal-hydraulic similarity between the ATLAS and the APR1400 for the major design basis accidents are assessed. It is confirmed that the ATLAS has a capability of maintaining an overall similarity with the reference plant APR1400 for the major design basis accidents considered in the present study. However, depending on the accident scenarios, there are some inconsistencies in certain thermal hydraulic parameters. It is found that the inconsistencies are mainly due to the reduced power effect and the increased stored energy in the structure. The present similarity analysis was successful in obtaining a greater insight into the unique design features of the ATLAS and would be used for developing the optimized experimental procedures and control logics.

  9. Pre-Test Analysis of Major Scenarios for ATLAS

    International Nuclear Information System (INIS)

    Euh, Dong-Jin; Choi, Ki-Yong; Park, Hyun-Sik; Kwon, Tae-Soon

    2007-02-01

    A thermal-hydraulic integral effect test facility, ATLAS was constructed at the Korea Atomic Energy Research Institute (KAERI). The ATLAS is a 1/2 reduced height and 1/288 volume scaled test facility based on the design features of the APR1400. The simulation capability of the ATLAS for major design basis accidents (DBAs), including a large-break loss-of-coolant (LBLOCA), DVI line break and main steam line break (MSLB) accidents, is evaluated by the best-estimate system code, MARS, with the same control logics, transient scenarios and nodalization scheme. The validity of the applied scaling law and the thermal-hydraulic similarity between the ATLAS and the APR1400 for the major design basis accidents are assessed. It is confirmed that the ATLAS has a capability of maintaining an overall similarity with the reference plant APR1400 for the major design basis accidents considered in the present study. However, depending on the accident scenarios, there are some inconsistencies in certain thermal hydraulic parameters. It is found that the inconsistencies are mainly due to the reduced power effect and the increased stored energy in the structure. The present similarity analysis was successful in obtaining a greater insight into the unique design features of the ATLAS and would be used for developing the optimized experimental procedures and control logics

  10. Fusion set selection with surrogate metric in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2016-01-01

    Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation. (paper)
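
    The following toy sketch illustrates the surrogate-versus-oracle idea (our construction, not the paper's inference model): candidate atlases are ranked by an image-similarity surrogate, here normalized cross-correlation, and the ordering is checked against the oracle geometric agreement, here the Dice coefficient of the label masks:

      # Toy surrogate-vs-oracle atlas ranking; images and labels are synthetic.
      import numpy as np

      def dice(a, b):
          """Oracle metric: geometric agreement of two binary label masks."""
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def ncc(x, y):
          """Surrogate metric: normalized cross-correlation of intensity images."""
          x, y = x - x.mean(), y - y.mean()
          return float((x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum()))

      rng = np.random.default_rng(0)
      target_img = rng.normal(size=(32, 32))
      target_lab = target_img > 0.5
      atlases = [(target_img + rng.normal(scale=s, size=(32, 32)), s) for s in (0.2, 0.8, 1.5)]

      # Order atlases by the surrogate; label fusion would use only the top-k.
      ranked = sorted(atlases, key=lambda a: ncc(target_img, a[0]), reverse=True)
      for img, noise in ranked:
          print(f"noise={noise}: surrogate={ncc(target_img, img):.3f}, "
                f"oracle Dice={dice(target_lab, img > 0.5):.3f}")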

  11. A dynamic system for ATLAS software installation on OSG grid sites

    International Nuclear Information System (INIS)

    Zhao, X; Maeno, T; Wenaus, T; Leuhring, F; Youssef, S; Brunelle, J; De Salvo, A; Thompson, A S

    2010-01-01

    A dynamic and reliable system for installing the ATLAS software releases on Grid sites is crucial to guarantee the timely and smooth start of ATLAS production and reduce its failure rate. In this paper, we discuss the issues encountered in the previous software installation system, and introduce the new approach, which is built upon the new development in the areas of the ATLAS workload management system (PanDA), and software package management system (pacman). It is also designed to integrate with the EGEE ATLAS software installation framework. In the new system, ATLAS software releases are packaged as pacball, a uniquely identifiable and reproducible self-installing data file. The distribution of pacballs to remote sites is managed by ATLAS data management system (DQ2) and PanDA server. The installation on remote sites is automatically triggered by the PanDA pilot jobs. The installation job payload connects to a central ATLAS software installation portal, making the information of installation status easily accessible across OSG and EGEE Grids. The issues encountered in running the new system in production, and our future plan for improvement, will also be discussed.
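
    A schematic of the pilot-triggered installation flow might look as follows; the function names, release string and portal URL are placeholders of our own, not the actual PanDA, pacman or DQ2 interfaces:

      # Schematic pilot-payload flow: verify a pacball, install it, report status.
      # Everything here is a placeholder sketch, not the real ATLAS tooling.
      import hashlib
      import os
      import tempfile

      def verify_pacball(path, expected_md5):
          # A pacball is uniquely identifiable: check its digest before installing.
          with open(path, "rb") as f:
              return hashlib.md5(f.read()).hexdigest() == expected_md5

      def install(path):
          print(f"running self-installing pacball {path}")

      def report_status(portal, release, status):
          print(f"POST {portal}: {release} -> {status}")

      def pilot_payload(release, pacball_path, expected_md5,
                        portal="https://example.org/install-portal"):  # hypothetical
          if not verify_pacball(pacball_path, expected_md5):
              report_status(portal, release, "corrupted pacball")
              return
          install(pacball_path)
          report_status(portal, release, "installed")

      # Demo with a dummy pacball file and an invented release name:
      with tempfile.NamedTemporaryFile(delete=False) as f:
          f.write(b"fake release payload")
      pilot_payload("AtlasProduction-X.Y.Z", f.name,
                    hashlib.md5(b"fake release payload").hexdigest())
      os.remove(f.name)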

  12. ATLAS experience with HEP software at the Argonne leadership computing facility

    International Nuclear Information System (INIS)

    Uram, Thomas D; LeCompte, Thomas J; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  13. ATLAS Experience with HEP Software at the Argonne Leadership Computing Facility

    CERN Document Server

    LeCompte, T; The ATLAS collaboration; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  14. Readout and Trigger for the AFP Detector at the ATLAS Experiment

    CERN Document Server

    Kocian, Martin; The ATLAS collaboration

    2018-01-01

    AFP, the ATLAS Forward Proton detector, consists of silicon detectors at 205 m and 217 m on each side of ATLAS. In 2016 two detectors on one side were installed. The FEI4 chips are read out at 160 Mbps over optical fibers. The DAQ system uses an FPGA board with an Artix chip and a mezzanine card with an RCE data processing module based on a Zynq chip with an ARM processor running Linux. In this contribution we give an overview of the AFP detector and the commissioning steps taken to integrate it with the ATLAS TDAQ. Furthermore, first performance results are presented.

  15. A high resolution global wind atlas - improving estimation of world wind resources

    DEFF Research Database (Denmark)

    Badger, Jake; Ejsing Jørgensen, Hans

    2011-01-01

    to population centres, electrical transmission grids, terrain types, and protected land areas are important parts of the resource assessment downstream of the generation of wind climate statistics. Related to these issues of integration are the temporal characteristics and spatial correlation of the wind...... resources. These aspects will also be addressed by the Global Wind Atlas. The Global Wind Atlas, through a transparent methodology, will provide a unified, high resolution, and public domain dataset of wind energy resources for the whole world. The wind atlas data will be the most appropriate wind resource...

  16. Integration of ROOT notebook as an ATLAS analysis web-based tool in outreach and public data release projects

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00237353; The ATLAS collaboration

    2017-01-01

    Integration of the ROOT data analysis framework with the Jupyter Notebook technology offers potential for enhancing and expanding educational and training programs. It can be beneficial for university students in their early years, new PhD students and post-doctoral researchers, as well as for senior researchers and teachers who want to refresh their data analysis skills or to introduce a friendlier and yet very powerful open-source tool in the classroom. Such tools have already been tested in several environments. A fully web-based integration of the tools and the Open Access Data repositories brings the possibility to go a step further in the ATLAS quest to make use of several CERN projects in the field of education and training, developing new computing solutions along the way.
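
    As a hedged taste of what a notebook cell in such a tutorial might contain (assuming a ROOT installation with Python bindings; the histogram is toy data, not an ATLAS open dataset):

      # Minimal PyROOT cell: build, fill and draw a histogram inside a notebook.
      import ROOT

      h = ROOT.TH1F("h_mass", "Toy distribution;m [GeV];events", 100, 0.0, 200.0)
      h.FillRandom("gaus", 10000)   # fill with 10k samples from ROOT's built-in Gaussian
      c = ROOT.TCanvas("c")
      h.Draw()
      c.Draw()                      # in a Jupyter notebook, renders the plot inline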

  17. Thermal Testing and Model Correlation for Advanced Topographic Laser Altimeter Instrument (ATLAS)

    Science.gov (United States)

    Patel, Deepak

    2016-01-01

    The Advanced Topographic Laser Altimeter System (ATLAS), part of the Ice Cloud and Land Elevation Satellite 2 (ICESat-2), is an upcoming Earth Science mission focusing on the effects of climate change. The flight instrument passed all environmental testing at GSFC (Goddard Space Flight Center) and is now ready to be shipped to the spacecraft vendor for integration and testing. This paper covers the analysis leading up to the test setup for ATLAS thermal testing, as well as model correlation to flight predictions. The test setup analysis section includes areas where ATLAS could not meet flight-like conditions and what the limitations were. The model correlation section walks through the changes that had to be made to the thermal model in order to match test results. The correlated model will then be integrated with the spacecraft model for on-orbit predictions.

  18. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; Berghaus, Frank; Brasolin, Franco; Cordeiro, Cristovao; Desmarais, Ron; Field, Laurence; Gable, Ian; Giordano, Domenico; Di Girolamo, Alessandro; Hover, John; Leblanc, Matthew Edgar; Love, Peter; Paterson, Michael; Sobie, Randall; Zaytsev, Alexandr

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing infrastructure as a service (IaaS) resources are discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for ma...

  19. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Berghaus, Frank; Love, Peter; Leblanc, Matthew Edgar; Di Girolamo, Alessandro; Paterson, Michael; Gable, Ian; Sobie, Randall; Field, Laurence

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This work will describe the overall evolution of cloud computing in ATLAS. The current status of the VM management systems used for harnessing IAAS resources will be discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, ...

  20. Multilevel Workflow System in the ATLAS Experiment

    International Nuclear Information System (INIS)

    Borodin, M; De, K; Navarro, J Garcia; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs are executed across more than a hundred distributed computing sites by PanDA - the ATLAS job-level workload management system. On the outer level, the Database Engine for Tasks (DEfT) empowers production managers with templated workflow definitions. On the next level, the Job Execution and Definition Interface (JEDI) is integrated with PanDA to provide dynamic job definition tailored to the sites capabilities. We report on scaling up the production system to accommodate a growing number of requirements from main ATLAS areas: Trigger, Physics and Data Preparation. (paper)
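
    The two workflow levels can be illustrated with a toy expansion in the spirit of DEfT (templated task definition) and JEDI (dynamic job definition); the names and numbers below are invented for illustration, not the ProdSys2 interfaces:

      # Toy two-level workflow expansion: template -> tasks -> jobs.
      MC_TEMPLATE = ["generate", "simulate", "digitize", "reconstruct", "make_ntuples"]

      def define_tasks(sample, template=MC_TEMPLATE):
          """Outer level: one task per processing step of a physics sample."""
          return [f"{sample}.{step}" for step in template]

      def define_jobs(task, n_events, events_per_job):
          """Inner level: split a task into jobs sized to a site's capability."""
          return [(task, first, min(first + events_per_job, n_events))
                  for first in range(0, n_events, events_per_job)]

      for task in define_tasks("ttbar_13TeV"):
          print(task, define_jobs(task, n_events=10000, events_per_job=4000))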

  1. Dual deep modeling: multi-level modeling with dual potencies and its formalization in F-Logic.

    Science.gov (United States)

    Neumayr, Bernd; Schuetz, Christoph G; Jeusfeld, Manfred A; Schrefl, Michael

    2018-01-01

    An enterprise database contains a global, integrated, and consistent representation of a company's data. Multi-level modeling facilitates the definition and maintenance of such an integrated conceptual data model in a dynamic environment of changing data requirements of diverse applications. Multi-level models transcend the traditional separation of class and object with clabjects as the central modeling primitive, which allows for a more flexible and natural representation of many real-world use cases. In deep instantiation, the number of instantiation levels of a clabject or property is indicated by a single potency. Dual deep modeling (DDM) differentiates between source potency and target potency of a property or association and supports the flexible instantiation and refinement of the property by statements connecting clabjects at different modeling levels. DDM comes with multiple generalization of clabjects, subsetting/specialization of properties, and multi-level cardinality constraints. Examples are presented using a UML-style notation for DDM together with UML class and object diagrams for the representation of two-level user views derived from the multi-level model. Syntax and semantics of DDM are formalized and implemented in F-Logic, supporting the modeler with integrity checks and rich query facilities.
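
    A rough Python analogue of the clabject idea is sketched below; the paper's formalization is in F-Logic, so this merely illustrates a clabject carrying a property with separate source and target potencies:

      # Illustrative data structures only; not the paper's F-Logic formalization.
      from dataclasses import dataclass, field

      @dataclass
      class Property:
          name: str
          source_potency: int   # levels below the source where it is instantiated
          target_potency: int   # levels below the target where values live

      @dataclass
      class Clabject:
          """Both class and object: may instantiate others and be instantiated."""
          name: str
          level: int                     # modeling level the clabject lives on
          properties: dict = field(default_factory=dict)

      product = Clabject("Product", level=2)
      product.properties["price"] = Property("price", source_potency=2, target_potency=0)
      book = Clabject("MobyDick", level=0)   # two levels down, where price gets a value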

  2. High-performance scalable Information Service for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Hauser, R

    2012-01-01

    The ATLAS experiment is operated by a highly distributed computing system which constantly produces a lot of status information, used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data the Information Service (IS) facility has been developed in the scope of the ATLAS TDAQ project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data taking session the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. IS ...
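
    As a rough sketch of the information-exchange pattern such a service provides (publishers updating named information objects, subscribers notified on change rather than polling), consider the following toy implementation; it illustrates the idea only and is unrelated to the real IS code:

      # Minimal publish/subscribe information service; illustrative only.
      from collections import defaultdict

      class InfoService:
          def __init__(self):
              self._values = {}
              self._subscribers = defaultdict(list)

          def publish(self, name, value):
              """Providers update named information objects at their own rate."""
              self._values[name] = value
              for callback in self._subscribers[name]:
                  callback(name, value)

          def subscribe(self, name, callback):
              """Consumers are notified on every update instead of polling."""
              self._subscribers[name].append(callback)

      is_server = InfoService()
      is_server.subscribe("HLT.node42.histograms", lambda n, v: print(n, "->", v))
      is_server.publish("HLT.node42.histograms", {"entries": 1234})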

  3. Body of evidence: integrating Eduard Pernkopf's Atlas into a librarian-led medical humanities seminar.

    Science.gov (United States)

    Mages, Keith C; Lohr, Linda A

    2017-04-01

    Anatomical subjects depicted in Eduard Pernkopf's richly illustrated Topographische Anatomie des Menschen may be victims of the Nazi regime. Special collections librarians in the history of medicine can use this primary resource to initiate dialogs about ethics with medical students. Reported here is the authors' use of Pernkopf's Atlas in an interactive medical humanities seminar designed for third-year medical students. Topical articles, illustrations, and interviews introduced students to Pernkopf, his Atlas, and the surrounding controversies. We aimed to illustrate how this controversial historical publication can successfully foster student discussion and ethical reflection. Pernkopf's Atlas and our mix of contextual resources facilitated thoughtful discussions about history and ethics amongst the group. Anonymous course evaluations showed student interest in the subject matter, relevance to their studies, and appreciation of our special collection's space and contents.

  4. ATLAS presents award to a Russian manufacturer within an ISTC project

    CERN Multimedia

    2004-01-01

    On 28 January the Russian machine building plant Molniya was awarded a prize for best ATLAS suppliers, for excellence in the construction of 29 modules for the Hadronic End-Cap Calorimeter of ATLAS. An ATLAS supplier award ceremony was held on Wednesday 28th January. The award for the most exceptional contribution to construction of the future detector was presented to the Russian company Molniya, a former weapons manufacturer based near Moscow. The Molniya machine building plant constructed a total of 29 modules for the LAr Hadronic End-Cap Calorimeter (HEC) of ATLAS. Thirteen are series modules which have already been integrated into the four wheels of the detector. The remaining 16 are calibration modules, designed for the ATLAS beam tests. To manufacture the unique copper plates and module structures required, the company set up a dedicated production process and developed stringent quality control criteria. The task was completed on time, within budget and the completed modules surpassed required qua...

  5. The LUCID detector ATLAS luminosity monitor and its electronic system

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00378808; The ATLAS collaboration

    2016-01-01

    Starting in 2015, the LHC has been performing a new run at higher centre-of-mass energy (13 TeV) and with 25 ns bunch spacing. The ATLAS luminosity monitor LUCID has been completely renewed, both in detector design and in electronics, in order to cope with the new running conditions. The new detector electronics is presented, featuring a new read-out board (LUCROD) for signal acquisition and digitization, PMT-charge integration and single-side luminosity measurements, and the revisited LUMAT board for side-A/side-C combination. The contribution covers the new board designs, the firmware and software developments, the implementation of luminosity algorithms, the optical communication between boards and the integration into the ATLAS TDAQ system.
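
    As a hedged illustration of what a "luminosity algorithm" can look like, the sketch below implements the standard event-counting ("zero counting") unfolding under a Poisson assumption; the revolution frequency is the nominal LHC value, while the visible cross-section is an invented placeholder, not a LUCID calibration:

      # Zero-counting luminosity unfolding; numbers are illustrative only.
      import math

      def mu_visible(n_bunch_crossings, n_empty):
          """Assuming Poisson statistics, unfold the average number of visible
          interactions per bunch crossing from the fraction of empty crossings."""
          return -math.log(n_empty / n_bunch_crossings)

      F_REV = 11245.5        # LHC revolution frequency [Hz]
      SIGMA_VIS = 30e-27     # visible cross-section from a calibration [cm^2] (made up)

      mu = mu_visible(n_bunch_crossings=1_000_000, n_empty=367_879)
      lumi_per_bunch = mu * F_REV / SIGMA_VIS   # [cm^-2 s^-1], per colliding bunch pair
      print(f"mu = {mu:.3f}, L/bunch = {lumi_per_bunch:.3e} cm^-2 s^-1")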

  6. Commissioning of the ATLAS Experiment

    CERN Document Server

    AUTHOR|(CDS)2069446

    2008-01-01

    The status of the commissioning of the ATLAS experiment as of May 2008 is presented. The subdetector integration in recent milestone weeks is described, especially the cosmic commissioning in milestone week M6, focusing on simultaneous running and combined track analysis of the muon detector and inner detector. The liquid argon and tile calorimeters have achieved near-full operation, and are integrated with the calorimeter trigger. The High-Level-Trigger infrastructure is installed and algorithms tested in technical runs. Problems with the inner detector cooling compressors are being fixed.

  7. Advances in ATLAS@Home towards a major ATLAS computing resource

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2018-01-01

    The volunteer computing project ATLAS@Home has been providing a stable computing resource for the ATLAS experiment since 2013. It has recently undergone some significant developments and as a result has become one of the largest resources contributing to ATLAS computing, by expanding its scope beyond traditional volunteers and into exploitation of idle computing power in ATLAS data centres. Removing the need for virtualization on Linux and instead using container technology has significantly lowered the entry barrier for data centre participation, and in this paper we describe the implementation and results of this change. We also present other recent changes and improvements in the project. In early 2017 the ATLAS@Home project was merged into a combined LHC@Home platform, providing a unified gateway to all CERN-related volunteer computing projects. The ATLAS Event Service shifts data processing from file-level to event-level and we describe how ATLAS@Home was incorporated into this new paradigm. The finishing...

  8. BEM-DDM modelling of rock damage and its implications on rock laboratory strength and in-situ stresses

    International Nuclear Information System (INIS)

    Matsui, Hiroya

    2008-03-01

    Within the framework of JAEA's Research and Development on deep geological environments for assessing the safety and reliability of the disposal technology for nuclear waste, this study was conducted to determine the effects of sample damage on the strength obtained from laboratory results (uniaxial compression and Brazilian test). Results of testing on samples of Toki granite taken at Shobasama and at the construction site for the Mizunami Underground Research Laboratory (MIU) at Mizunami, Gifu Pref., Japan, were analysed. Some spatial variation of the results along the boreholes suggested the presence of a correlation between the laboratory strength and the in-situ stresses measured by means of the hydro-fracturing method. To confirm this, numerical analyses of the drilling process in brittle rock by means of a BEM-DDM program (FRACOD2D) were carried out to study the induced fracture patterns. These fracture patterns were compared with similar results reported by other published studies and were found to be realistic. The correlation between strength and in-situ stresses could then be exploited to estimate the stresses and the location of core discing observed in boreholes where stress measurements were not available. A correction of the laboratory strength results was also proposed to take into account sample damage during drilling. Modelling of Brazilian tests shows that the calculated fracture patterns determine the strength of the models. This is different from the common assumption that failure occurs when the uniform tensile stress in the sample reaches the tensile strength of the rock material. Based on the modelling results, new Brazilian tests were carried out on samples from borehole MIZ-1 that confirmed the failure mechanism numerically observed. A numerical study of the fracture patterns induced by removal of the overburden on a large scale produces fracture patterns and stress distributions corresponding to observations in crystalline hard rock in

  9. Preparation of Northern Mid-Continent Petroleum Atlas

    Energy Technology Data Exchange (ETDEWEB)

    Lee C. Gerhard; Timothy R. Carr; W. Lynn Watney

    1998-05-01

    As proposed, the third year program will continue and expand upon the Kansas elements of the original program, and provide improved on-line access to the prototype atlas. The third year of the program will result in a digital atlas sufficient to provide a permanent improvement in data access to Kansas operators. The ultimate goal of providing an interactive history-matching interface with a regional database will be demonstrated as the program covers more geographic territory and the database expands. The atlas will expand to include significant reservoirs representing the major plays in Kansas, and North Dakota. Primary products of the third year prototype atlas will be on-line accessible digital databases and technical publications covering two additional petroleum plays in Kansas and one in North Dakota. Regional databases will be supplemented with geological field studies of selected fields in each play. Digital imagery, digital mapping, relational data queries, and geographical information systems will be integral to the field studies and regional data sets. Data sets will have relational links to provide opportunity for history-matching, feasibility, and risk analysis tests on contemplated exploration and development projects. The flexible "web-like" design of the atlas provides ready access to data, and technology at a variety of scales from regional, to field, to lease, and finally to the individual well bore. The digital structure of the atlas permits the operator to access comprehensive reservoir data and customize the interpretative products (e.g., maps and cross-sections) to their needs. The atlas will be accessible in digital form on-line using a World-Wide-Web browser as the graphical user interface. Regional data sets and field studies will be freestanding entities that will be made available on-line through the Internet to users as they are completed. Technology transfer activities will be ongoing from the earliest part of this project, providing

  10. The ATLAS Track Extrapolation Package

    CERN Document Server

    Salzburger, A

    2007-01-01

    The extrapolation of track parameters and their associated covariances to destination surfaces of different types is a very frequent process in the event reconstruction of high-energy physics experiments. This is, amongst other reasons, due to the fact that most track and vertex fitting techniques are based on the first and second moments of the underlying probability density distribution. The correct stochastic or deterministic treatment of interactions with the traversed detector material is hereby crucial for high-quality track reconstruction throughout the entire momentum range of final-state particles that are produced in high-energy physics collision experiments. This document presents the main concepts, the algorithms and the implementation of the newly developed, powerful ATLAS track extrapolation engine. It also covers validation procedures, timing measurements and the integration into the ATLAS offline reconstruction software.
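
    To make the covariance part concrete, here is a minimal numpy sketch of transporting a covariance matrix with the transport Jacobian and optionally adding material-induced process noise; the 5x5 parameterisation and all numbers are toy values, and this is not the API of the ATLAS engine:

      # Covariance transport C' = J C J^T, with optional material noise term.
      import numpy as np

      def transport_covariance(cov, jacobian, material_noise=None):
          """Propagate a track covariance to the destination surface; the optional
          noise term models stochastic material effects (e.g. multiple scattering)."""
          cov_out = jacobian @ cov @ jacobian.T
          if material_noise is not None:
              cov_out += material_noise
          return cov_out

      cov = np.diag([0.1, 0.1, 1e-4, 1e-4, 1e-6])   # toy 5x5 track covariance
      jac = np.eye(5)
      jac[0, 2] = 100.0                             # position picks up angle * path length
      print(transport_covariance(cov, jac).round(6))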

  11. Initial Measurements On Pixel Detector Modules For The ATLAS Upgrades

    CERN Document Server

    Gallrapp, C; The ATLAS collaboration

    2011-01-01

    Sophisticated conditions in terms of peak and integrated luminosity in the Large Hadron Collider (LHC) will push the ATLAS Pixel detector to its performance limits. Silicon planar, silicon 3D and diamond pixel sensors are three possible sensor technologies which could be implemented in the upcoming pixel detector upgrades of the ATLAS experiment. Measurements of the IV behavior and measurements with radioactive americium-241 and strontium-90 sources are used to characterize the sensor properties and to understand the interaction between the ATLAS FE-I4 front-end chip and the sensor. Comparisons of results from before and after irradiation, which give a first impression of the charge collection properties of the different sensor technologies, are presented.

  12. ATLAS Grid Data Processing: system evolution and scalability

    CERN Document Server

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from the main ATLAS areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, the next-generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users provi...

  13. Jet substructure in ATLAS

    CERN Document Server

    Miller, David W

    2011-01-01

    Measurements are presented of the jet invariant mass and substructure in proton-proton collisions at $\\sqrt{s} = 7$ TeV with the ATLAS detector using an integrated luminosity of 37 pb$^{-1}$. These results exercise the tools for distinguishing the signatures of new boosted massive particles in the hadronic final state. Two "fat" jet algorithms are used, along with the filtering jet grooming technique that was pioneered in ATLAS. New jet substructure observables are compared for the first time to data at the LHC. Finally, a sample of candidate boosted top quark events collected in the 2010 data is analyzed in detail for the jet substructure properties of hadronic "top-jets" in the final state. These measurements demonstrate not only our excellent understanding of QCD in a new energy regime but open the path to using complex jet substructure observables in the search for new physics.

  14. ATLAS diamond Beam Condition Monitor

    Energy Technology Data Exchange (ETDEWEB)

    Gorisek, A. [CERN (Switzerland)]. E-mail: andrej.gorisek@cern.ch; Cindro, V. [J. Stefan Institute (Slovenia); Dolenc, I. [J. Stefan Institute (Slovenia); Frais-Koelbl, H. [Fotec (Austria); Griesmayer, E. [Fotec (Austria); Kagan, H. [Ohio State University, OH (United States); Korpar, S. [J. Stefan Institute (Slovenia); Kramberger, G. [J. Stefan Institute (Slovenia); Mandic, I. [J. Stefan Institute (Slovenia); Meyer, M. [CERN (Switzerland); Mikuz, M. [J. Stefan Institute (Slovenia); Pernegger, H. [CERN (Switzerland); Smith, S. [Ohio State University, OH (United States); Trischuk, W. [University of Toronto (Canada); Weilhammer, P. [CERN (Switzerland); Zavrtanik, M. [J. Stefan Institute (Slovenia)

    2007-03-01

    The ATLAS experiment has chosen to use diamond for its Beam Condition Monitor (BCM) given its radiation hardness, low capacitance and short charge collection time. In addition, due to low leakage current diamonds do not require cooling. The ATLAS Beam Condition Monitoring system is based on single beam bunch crossing measurements rather than integrating the accumulated particle flux. Its fast electronics will allow separation of LHC collisions from background events such as beam gas interactions or beam accidents. There will be two stations placed symmetrically about the interaction point along the beam axis at z = ±183.8 cm. Timing of signals from the two stations will provide almost ideal separation of beam-beam interactions and background events. The ATLAS BCM module consists of diamond pad detectors of 1 cm² area and 500 µm thickness coupled to a two-stage RF current amplifier. The production of the final detector modules is almost done. A S/N ratio of 10:1 has been achieved with minimum ionizing particles (MIPs) in the test beam setup at KEK. Results from the test beams and bench measurements are presented.
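
    The timing separation quoted above follows from simple geometry; a back-of-envelope check (our arithmetic, not BCM firmware), assuming relativistic particles:

      # Collision products from the interaction point reach both stations in time
      # (delta-t ~ 0), while beam background sweeping through the detector hits
      # one station 2z/c earlier than the other.
      C = 299_792_458.0    # speed of light [m/s]
      z = 1.838            # station distance from the interaction point [m]

      dt_collision = 0.0                 # both stations hit simultaneously
      dt_background = 2 * z / C * 1e9    # one station, then the other [ns]
      print(f"background-to-collision separation: {dt_background:.1f} ns")  # ~12.3 ns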

  15. ATLAS diamond Beam Condition Monitor

    International Nuclear Information System (INIS)

    Gorisek, A.; Cindro, V.; Dolenc, I.; Frais-Koelbl, H.; Griesmayer, E.; Kagan, H.; Korpar, S.; Kramberger, G.; Mandic, I.; Meyer, M.; Mikuz, M.; Pernegger, H.; Smith, S.; Trischuk, W.; Weilhammer, P.; Zavrtanik, M.

    2007-01-01

    The ATLAS experiment has chosen to use diamond for its Beam Condition Monitor (BCM) given its radiation hardness, low capacitance and short charge collection time. In addition, due to low leakage current diamonds do not require cooling. The ATLAS Beam Condition Monitoring system is based on single beam bunch crossing measurements rather than integrating the accumulated particle flux. Its fast electronics will allow separation of LHC collisions from background events such as beam gas interactions or beam accidents. There will be two stations placed symmetrically about the interaction point along the beam axis at z = ±183.8 cm. Timing of signals from the two stations will provide almost ideal separation of beam-beam interactions and background events. The ATLAS BCM module consists of diamond pad detectors of 1 cm² area and 500 μm thickness coupled to a two-stage RF current amplifier. The production of the final detector modules is almost done. A S/N ratio of 10:1 has been achieved with minimum ionizing particles (MIPs) in the test beam setup at KEK. Results from the test beams and bench measurements are presented.

  16. Iterative local Chi2 alignment algorithm for the ATLAS Pixel detector

    CERN Document Server

    Göttfert, Tobias

    The existing local chi2 alignment approach for the ATLAS SCT detector was extended to the alignment of the ATLAS Pixel detector. This approach is linear, aligns modules separately, and uses distance of closest approach residuals and iterations. The derivation and underlying concepts of the approach are presented. To show the feasibility of the approach for Pixel modules, a simplified, stand-alone track simulation, together with the alignment algorithm, was developed with the ROOT analysis software package. The Pixel alignment software was integrated into Athena, the ATLAS software framework. First results and the achievable accuracy for this approach with a simulated dataset are presented.
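
    One iteration of such a local chi2 alignment reduces, for each module, to solving the normal equations built from track residuals and their derivatives with respect to the module's alignment parameters; the numpy sketch below shows this for a toy module shifted by 50 μm (our illustration, not the Athena implementation):

      # One local-chi2 iteration for a single module: solve (D^T W D) da = -D^T W r.
      import numpy as np

      def local_chi2_update(residuals, derivatives, sigmas):
          """residuals: r_i; derivatives: dr_i/da (n_tracks x n_dofs); sigmas: errors.
          Returns the alignment correction da that minimises chi2 = sum r_i^2/sigma_i^2."""
          W = np.diag(1.0 / np.asarray(sigmas) ** 2)
          D = np.asarray(derivatives)
          r = np.asarray(residuals)
          return np.linalg.solve(D.T @ W @ D, -D.T @ W @ r)

      # Toy module misaligned by +50 um (0.005 cm) in the measurement direction:
      derivs = np.ones((100, 1))   # dr/d(shift) = 1 for a pure translation
      res = 0.005 + np.random.default_rng(1).normal(0.0, 0.002, 100)   # residuals [cm]
      print(local_chi2_update(res, derivs, sigmas=[0.002] * 100))      # ~ -0.005 cm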

  17. The effect of morphometric atlas selection on multi-atlas-based automatic brachial plexus segmentation

    International Nuclear Information System (INIS)

    Van de Velde, Joris; Wouters, Johan; Vercauteren, Tom; De Gersem, Werner; Achten, Eric; De Neve, Wilfried; Van Hoof, Tom

    2015-01-01

    The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based brachial plexus (BP) autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in the shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases was taken. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and Inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent-sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, the average similarity indices of the morphometrically selected atlas group were significantly higher than those of the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy.
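
    For readers unfamiliar with these overlap measures, the toy computation below shows DSC and JI on binary masks; the Inclusion index definition used here (overlap divided by gold-standard volume) is our assumption, since the abstract does not spell it out:

      # Similarity indices on toy binary segmentation masks.
      import numpy as np

      def indices(auto, gold):
          inter = np.logical_and(auto, gold).sum()
          union = np.logical_or(auto, gold).sum()
          dsc = 2.0 * inter / (auto.sum() + gold.sum())   # Dice similarity coefficient
          ji = inter / union                              # Jaccard index
          ini = inter / gold.sum()                        # Inclusion index (assumed def.)
          return dsc, ji, ini

      rng = np.random.default_rng(2)
      gold = rng.random((16, 16, 16)) > 0.7
      auto = np.logical_xor(gold, rng.random((16, 16, 16)) > 0.9)  # imperfect segmentation
      print("DSC=%.3f  JI=%.3f  INI=%.3f" % indices(auto, gold))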

  18. Role Based Access Control system in the ATLAS experiment

    CERN Document Server

    Valsan, M L; The ATLAS collaboration; Lehmann Miotto, G; Scannicchio, D A; Schlenker, S; Filimonov, V; Khomoutnikov, V; Dumitru, I; Zaytsev, A S; Korol, A A; Bogdantchikov, A; Caramarcu, C; Ballestrero, S; Darlea, G L; Twomey, M; Bujor, F; Avolio, G

    2011-01-01

    The complexity of the ATLAS experiment motivated the deployment of an integrated Access Control System in order to guarantee safe and optimal access for a large number of users to the various software and hardware resources. Such an integrated system was foreseen since the design of the infrastructure and is now central to the operations model. In order to cope with the ever growing needs of restricting access to all resources used within the experiment, the Roles Based Access Control (RBAC) previously developed has been extended and improved. The paper starts with a short presentation of the RBAC design, implementation and the changes made to the system to allow the management and usage of roles to control access to the vast and diverse set of resources. The paper continues with a detailed description of the integration across all areas of the system: local Linux and Windows nodes in the ATLAS Control Network (ATCN), the Linux application gateways offering remote access inside ATCN, the Windows Terminal Serv...
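
    The core RBAC idea (permissions attached to roles, users holding roles, access granted only through roles) can be illustrated with a toy check; the role and permission names below are invented, not the ATLAS configuration:

      # Minimal role-based access check; all names are hypothetical.
      ROLE_PERMISSIONS = {
          "shifter":        {"login:gateway", "dcs:read"},
          "run_controller": {"login:gateway", "dcs:read", "runcontrol:command"},
      }
      USER_ROLES = {"alice": {"shifter"}, "bob": {"run_controller"}}

      def is_allowed(user, permission):
          """Access is granted through roles, never to users directly."""
          return any(permission in ROLE_PERMISSIONS[role]
                     for role in USER_ROLES.get(user, ()))

      print(is_allowed("alice", "runcontrol:command"))   # False
      print(is_allowed("bob", "runcontrol:command"))     # True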

  19. Role Based Access Control System in the ATLAS Experiment

    CERN Document Server

    Valsan, M L; The ATLAS collaboration; Lehmann Miotto, G; Scannicchio, D A; Schlenker, S; Filimonov, V; Khomoutnikov, V; Dumitru, I; Zaytsev, A S; Korol, A A; Bogdantchikov, A; Avolio, G; Caramarcu, C; Ballestrero, S; Darlea, G L; Twomey, M; Bujor, F

    2010-01-01

    The complexity of the ATLAS experiment motivated the deployment of an integrated Access Control System in order to guarantee safe and optimal access for a large number of users to the various software and hardware resources. Such an integrated system was foreseen since the design of the infrastructure and is now central to the operations model. In order to cope with the ever-growing needs of restricting access to all resources used within the experiment, the Role Based Access Control (RBAC) system previously developed has been extended and improved. The paper starts with a short presentation of the RBAC design, implementation and the changes made to the system to allow the management and usage of roles to control access to the vast and diverse set of resources. The paper continues with a detailed description of the integration across all areas of the system: local Linux and Windows nodes in the ATLAS Control Network (ATCN), the Linux application gateways offering remote access inside ATCN, the Windows Terminal Serv...

  20. Distributed Analysis Experience using Ganga on an ATLAS Tier2 infrastructure

    International Nuclear Information System (INIS)

    Fassi, F.; Cabrera, S.; Vives, R.; Fernandez, A.; Gonzalez de la Hoz, S.; Sanchez, J.; March, L.; Salt, J.; Kaci, M.; Lamas, A.; Amoros, G.

    2007-01-01

    The ATLAS detector will explore the high-energy frontier of Particle Physics collecting the proton-proton collisions delivered by the LHC (Large Hadron Collider). Starting in spring 2008, the LHC will produce more than 10 petabytes of data per year. The tiered hierarchy adopted for the LHC computing model comprises the Tier-0 centre (CERN) and Tier-1 and Tier-2 centres distributed around the world. The ATLAS Distributed Analysis (DA) system has the goal of enabling physicists to perform Grid-based analysis on distributed data using distributed computing resources. The IFIC Tier-2 facility participates in several aspects of DA. In support of the ATLAS DA activities a prototype is being tested, deployed and integrated. The analysis data processing applications are based on the Athena framework. GANGA, developed by the LHCb and ATLAS experiments, allows simple switching between testing on a local batch system and large-scale processing on the Grid, hiding Grid complexities. GANGA provides physicists with an integrated environment for job preparation, bookkeeping and archiving, job splitting and merging. The experience with the deployment, configuration and operation of the DA prototype will be presented. Experience gained using the DA system and GANGA in top physics analysis will be described. (Author)
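
    As an illustration of this workflow, the sketch below shows a schematic Ganga session using the generic Executable application (an actual ATLAS analysis would instead configure Ganga's Athena application plug-in; the availability of an LCG backend at the site is an assumption). It is meant to be typed in the ganga shell, where the GPI objects are predefined:

        # Schematic Ganga session (illustrative sketch, not a verified configuration).
        j = Job()
        j.application = Executable()
        j.application.exe = 'echo'
        j.application.args = ['hello from the worker node']
        j.backend = Local()          # test on the local backend first
        j.submit()

        j2 = j.copy()                # identical job, now sent to the Grid
        j2.backend = LCG()           # assumption: LCG backend configured at the site
        j2.submit()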

  1. ATLAS RPC Quality Assurance results at INFN Lecce

    CERN Document Server

    INSPIRE-00211509; Borjanovic, I.; Cataldi, G.; Cazzato, A.; Chiodini, G.; Coluccia, M. R.; Creti, P.; Gorini, E.; Grancagnolo, F.; Perrino, R.; Primavera, M.; Spagnolo, S.; Tassielli, G.; Ventura, A.

    2006-01-01

    The main results of the quality assurance tests performed on the Resistive Plate Chambers used by the ATLAS experiment at the LHC as muon trigger chambers are reported and discussed. Since July 2004, about 270 RPC units have been certified at the INFN Lecce site and delivered to CERN for integration into the final muon stations of the ATLAS barrel region. We show the key RPC characteristics which qualify the performance of this detector technology as a muon trigger chamber in the harsh LHC environment. These are dark current, chamber efficiency, noise rate, gas volume tomography, and gas leakage.
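
    Of the quantities listed, chamber efficiency is the simplest to formalise: the fraction of reference tracks with a matched RPC hit, with a binomial uncertainty. A minimal illustrative sketch with hypothetical counts (not the Lecce QA code):

        import math

        def chamber_efficiency(n_detected: int, n_expected: int):
            """Trigger-chamber efficiency with a simple binomial uncertainty.

            n_expected: tracks crossing the active area (from a reference
            detector); n_detected: tracks with a matched RPC hit.
            """
            eff = n_detected / n_expected
            err = math.sqrt(eff * (1.0 - eff) / n_expected)
            return eff, err

        eff, err = chamber_efficiency(9512, 9800)   # hypothetical counts
        print(f"efficiency = {eff:.3f} +/- {err:.3f}")  # 0.971 +/- 0.002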

  2. Study of ZZ to four leptons events in ATLAS at the LHC and upgrade of the ATLAS Muon Spectrometer

    CERN Multimedia

    Kouskoura, V

    2014-01-01

    The study of ZZ and ZZ* production in proton-proton collisions at the Large Hadron Collider (LHC) at CERN is presented. The data analyzed in this study were recorded by the ATLAS experiment at centre-of-mass energies of 7 TeV and 8 TeV. The selected events are consistent with fully leptonic ZZ decays, in particular to electrons and muons. The total ZZ production cross section is measured and found to be in agreement with the Standard Model (SM) prediction. The ZZ production allows the study of anomalous neutral triple gauge couplings. No deviation from the SM prediction is found that could indicate the presence of new physics. In view of the forthcoming increase of the instantaneous luminosity of the LHC, the ATLAS Collaboration foresees upgrades of the detector. An upgrade of the Muon Spectrometer is presented. The integration of the new detection elements into the ATLAS geometry is illustrated, as well as the increase in the total barrel acceptance.

  3. The ATLAS ITk strip detector. Status of R&D

    Energy Technology Data Exchange (ETDEWEB)

    García Argos, Carlos, E-mail: carlos.garcia.argos@cern.ch

    2017-02-11

    While the LHC at CERN is ramping up luminosity after the discovery of the Higgs Boson in the ATLAS and CMS experiments in 2012, upgrades to the LHC and experiments are planned. The major upgrade is foreseen for 2024, with a roughly tenfold increase in luminosity, resulting in corresponding increases in particle rates and radiation doses. In ATLAS the entire Inner Detector will be replaced for Phase-II running with an all-silicon system. This paper concentrates on the strip part. Its layout foresees low-mass and modular yet highly integrated double-sided structures for the barrel and forward region. The design features conceptually simple modules made from electronic hybrids glued directly onto the silicon. Modules will then be assembled on both sides of large carbon-core structures with integrated cooling and electrical services.

  4. Silicon strip detectors for the ATLAS HL-LHC upgrade

    CERN Document Server

    Gonzalez Sevilla, S; The ATLAS collaboration

    2011-01-01

    The LHC upgrade is foreseen to increase the ATLAS design luminosity by a factor of ten, implying the need to build a new tracker suited to the harsh HL-LHC conditions in terms of particle rates and radiation doses. In order to cope with the increase in pile-up backgrounds at the higher luminosity, an all-silicon detector is being designed. To successfully face the increased radiation dose, a new generation of extremely radiation-hard silicon detectors is being developed. We give an overview of the ATLAS tracker upgrade project, in particular focusing on the crucial innermost silicon strip layers. Results from a wide range of irradiated silicon detectors for the strip region of the future ATLAS tracker are presented. Layout concepts for lightweight yet mechanically very rigid detector modules with high service integration are shown.

  5. ATLAS Outreach Highlights

    CERN Document Server

    Cheatham, Susan; The ATLAS collaboration

    2016-01-01

    The ATLAS outreach team is very active, promoting particle physics to a broad range of audiences including physicists, general public, policy makers, students and teachers, and media. A selection of current outreach activities and new projects will be presented. Recent highlights include the new ATLAS public website and ATLAS Open Data, the very recent public release of 1 fb-1 of ATLAS data.

  6. Electron identification with the ATLAS detector

    CERN Document Server

    Tarna, Grigore; The ATLAS collaboration

    2017-01-01

    Electron identification is a crucial input to many ATLAS physics analyses. The electron identification used in ATLAS for Run 2 is based on a likelihood discrimination to separate isolated electron candidates from candidates originating from photon conversions, hadron misidentification and heavy-flavour decays. In addition, isolation variables are used as further handles to separate signal and background. The measurements of the efficiencies of the electron identification and isolation cuts are performed on data using tag-and-probe techniques with large-statistics samples of Z->ee and J/psi->ee decays. These measurements, performed with pp collision data recorded in 2016 (2015) at sqrt(s)=13 TeV and corresponding to an integrated luminosity of 33.9 (3.2) fb-1, are presented.
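
    The tag-and-probe logic itself is simple: one leg of the Z->ee (or J/psi->ee) pair is tightly identified (the tag), and the other leg (the probe) provides an unbiased sample on which the selection under study is tested. A schematic sketch, ignoring the background subtraction that the real measurement performs:

        # Schematic tag-and-probe efficiency (illustrative; real ATLAS
        # measurements also subtract background, e.g. via mass-peak fits).
        def tag_and_probe(pairs, passes_id):
            """pairs: iterable of (tag, probe) candidates in the Z mass window,
            where the tag already satisfies tight identification.
            Efficiency = passing probes / all probes."""
            n_all = n_pass = 0
            for _tag, probe in pairs:
                n_all += 1
                if passes_id(probe):
                    n_pass += 1
            return n_pass / n_all if n_all else float("nan")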

  7. Electron identification with the ATLAS detector

    CERN Document Server

    Tarna, Grigore; The ATLAS collaboration

    2017-01-01

    Electron identification is a crucial input to many ATLAS physics analyses. The electron identification used in ATLAS for Run 2 is based on a likelihood discrimination to separate isolated electron candidates from candidates originating from photon conversions, hadron misidentification and heavy-flavour decays. In addition, isolation variables are used as further handles to separate signal and background. The measurements of the efficiencies of the electron identification and isolation cuts are performed on data using tag-and-probe techniques with large-statistics samples of Z->ee and J/psi->ee decays. These measurements, performed with pp collision data recorded in 2016 (2015) at sqrt(s)=13 TeV and corresponding to an integrated luminosity of 33.9 (3.1) fb-1, are presented.

  8. FATRAS - the ATLAS Fast Track Simulation project

    NARCIS (Netherlands)

    Mechnich, J.

    2011-01-01

    The Monte Carlo simulation of the detector response is an integral component of any analysis performed with data from the LHC experiments. As these simulated data sets must be both large and precise, their production is a CPU-intensive task. ATLAS has developed full and fast detector simulation

  9. CERN Open Days 2013, Point 1 - ATLAS: ATLAS Experiment

    CERN Multimedia

    CERN Photolab

    2013-01-01

    Stand description: The ATLAS Experiment at CERN is one of the largest and most complex scientific endeavours ever assembled. The detector, located at collision point 1 of the LHC, is designed to explore the fundamental components of nature and to study the forces that shape our universe. The past year’s discovery of a Higgs boson is one of the most important scientific achievements of our time, yet this is only one of many key goals of ATLAS. During a brief break in their journey, some of the 3000-member ATLAS collaboration will be taking time to share the excitement of this exploration with you. (On surface, no restricted access.) The exhibit at Point 1 will give visitors a chance to meet these modern-day explorers and to learn from them how answers to the most fundamental questions of mankind are being sought. Activities will include a visit to the ATLAS detector, located 80 m below ground; watching the prize-winning ATLAS movie in the ATLAS cinema; seeing real particle tracks in a cloud chamber and discussi...

  10. The simulation for the ATLAS experiment Present status and outlook

    CERN Document Server

    Rimoldi, A; Gallas, M; Nairz, A; Boudreau, J; Tsulaia, V; Costanzo, D

    2004-01-01

    The simulation program for the ATLAS experiment is presently operational in a full OO environment. This important physics application has been successfully integrated into ATLAS's common analysis framework, ATHENA. In the last year, following a well-stated strategy of transition from a GEANT3- to a GEANT4-based simulation, a careful validation programme confirmed the reliability, performance and robustness of this new tool, as well as its consistency with the results of the previous simulation. Generation, simulation and digitization steps on different sets of full physics events were tested for performance. The same software used to simulate the full ATLAS detector is also used with testbeam configurations. Comparisons to real data in the testbeam validate both the detector description and the physics processes within each subcomponent. In this paper we present the current status of the ATLAS GEANT4 simulation, describe the functionality tests performed during its validation phase, and the experience with distrib...

  11. Physics potential of ATLAS upgrades at HL-LHC

    CERN Document Server

    Testa, Marianna; The ATLAS collaboration

    2017-01-01

    The High-Luminosity Large Hadron Collider (HL-LHC) is expected to start in 2026 and to provide an integrated luminosity of 3000 fb−1 in ten years, a factor 10 more than what will be collected by 2023. These high statistics will allow ATLAS to perform precise measurements in the Higgs sector and improve searches for new physics at the TeV scale. The luminosity needed is L ∼ 7.5 × 10^34 cm−2 s−1, corresponding to ∼200 additional proton-proton pile-up interactions. To face such a harsh environment, some sub-detectors of the ATLAS experiment will be upgraded or completely replaced. The performance of the new or upgraded ATLAS sub-detectors is presented, focusing in particular on the new inner tracker and a proposed high-granularity timing device. The impact of these upgrades on crucial physics measurements for the HL-LHC program is also shown.
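
    As a back-of-envelope check of the quoted numbers, the mean pile-up follows from mu = L * sigma_inel / (n_bunches * f_rev); the cross-section and bunch parameters below are typical assumed values, not official ATLAS inputs:

        # Back-of-envelope pile-up estimate (all inputs are assumptions).
        L = 7.5e34            # cm^-2 s^-1, quoted instantaneous luminosity
        sigma_inel = 85e-27   # cm^2 (~85 mb inelastic pp cross-section, assumed)
        n_bunches = 2808      # colliding bunch pairs (assumed)
        f_rev = 11245.0       # Hz, LHC revolution frequency
        mu = L * sigma_inel / (n_bunches * f_rev)
        print(f"mean pile-up mu ~ {mu:.0f}")   # ~200, consistent with the text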

  12. Recent Improvements in the ATLAS PanDA Pilot

    International Nuclear Information System (INIS)

    Nilsson, P; De, K; Bejar, J Caballero; Maeno, T; Potekhin, M; Wenaus, T; Compostella, G; Contreras, C; Dos Santos, T

    2012-01-01

    The Production and Distributed Analysis system (PanDA) in the ATLAS experiment uses pilots to execute submitted jobs on the worker nodes. The pilots are designed to deal with different runtime conditions and failure scenarios, and support many storage systems. This talk will give a brief overview of the PanDA pilot system and will present major features and recent improvements including CernVM File System integration, the job retry mechanism, advanced job monitoring including JEM technology, and validation of new pilot code using the HammerCloud stress-testing system. PanDA is used for all ATLAS distributed production and is the primary system for distributed analysis. It is currently used at over 130 sites worldwide. We analyze the performance of the pilot system in processing LHC data on the OSG, EGI and Nordugrid infrastructures used by ATLAS, and describe plans for its further evolution.
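
    A job retry mechanism of the kind mentioned above generally amounts to re-running a failed payload a bounded number of times with increasing delays. A generic illustrative sketch (not PanDA pilot code):

        import random, time

        def run_with_retries(run_job, max_attempts=3, base_delay=60.0):
            """Generic retry loop with jittered exponential backoff.

            run_job() should return True on success and False on a
            recoverable error; unrecoverable errors should raise instead.
            """
            for attempt in range(1, max_attempts + 1):
                if run_job():
                    return True
                if attempt < max_attempts:
                    delay = base_delay * 2 ** (attempt - 1) * random.uniform(0.5, 1.5)
                    time.sleep(delay)   # back off before the next attempt
            return False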

  13. An atlas of high-resolution IRAS maps on nearby galaxies

    Science.gov (United States)

    Rice, Walter

    1993-01-01

    An atlas of far-infrared IRAS maps, with angular resolution near 1 arcmin, of 30 optically large galaxies is presented. The high-resolution IRAS maps were produced with the Maximum Correlation Method (MCM) image construction and enhancement technique developed at IPAC. The MCM technique, which recovers the spatial information contained in the overlapping detector data samples of the IRAS all-sky survey scans, is outlined, and tests verifying the structural reliability and photometric integrity of the high-resolution maps are presented. The infrared structure revealed in individual galaxies is discussed. The atlas complements the IRAS Nearby Galaxy High-Resolution Image Atlas, the set of high-resolution galaxy images encoded in FITS format that is provided to the astronomical community as an IPAC product.
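
    MCM belongs to the family of iterative schemes in which a model image is repeatedly corrected by the ratio of observed to predicted detector samples. The 1-D sketch below shows one such multiplicative correction step; it illustrates the general idea only and is not IPAC's implementation:

        import numpy as np

        def iterative_correction_step(model, samples, response):
            """One multiplicative correction step of an iterative image
            reconstruction in the same family as MCM (schematic, 1-D).

            response[k] is the detector footprint (row of the sampling
            matrix) for observed sample k; the model is scaled by the
            response-weighted ratio of observed to predicted samples.
            """
            predicted = response @ model          # what the detector would see
            ratio = samples / np.clip(predicted, 1e-12, None)
            correction = (response.T @ ratio) / np.clip(response.sum(axis=0), 1e-12, None)
            return model * correction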

  14. Initial Measurements on Pixel Detector Modules for the ATLAS Upgrades

    CERN Document Server

    Gallrapp, C; The ATLAS collaboration

    2011-01-01

    Demanding conditions in terms of peak and integrated luminosity at the Large Hadron Collider (LHC) will push the ATLAS Pixel Detector to its performance limits. Silicon planar, silicon 3D and diamond pixel sensors are three possible sensor technologies that could be implemented in the upcoming Pixel Detector upgrades of the ATLAS experiment. Measurements of the IV behaviour and measurements with radioactive Americium-241 and Strontium-90 sources are used to characterize the sensor properties and to understand the interaction between the ATLAS FE-I4 front-end chip and the sensor. Comparisons of results from before and after irradiation for silicon planar and 3D pixel sensors, which give a first impression of the charge-collection properties of the different sensor technologies, are presented.
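
    As an illustration of IV characterisation, a common heuristic is to scan the measured curve for the bias at which the current jumps sharply, indicating breakdown; the sketch below uses an assumed relative-jump criterion, not the ATLAS QA cut:

        import numpy as np

        def breakdown_voltage(v, i, rel_jump=2.0):
            """Estimate the breakdown point of an IV curve as the first bias
            where the current grows by more than `rel_jump` between
            neighbouring points (a simple heuristic, for illustration only)."""
            v, i = np.asarray(v, float), np.asarray(i, float)
            ratios = i[1:] / np.clip(i[:-1], 1e-15, None)
            idx = np.argmax(ratios > rel_jump)
            return v[idx + 1] if ratios[idx] > rel_jump else None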

  15. ATLAS Distributed Analysis Tools

    CERN Document Server

    Gonzalez de la Hoz, Santiago; Liko, Dietrich

    2008-01-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale. Up to 10000 jobs were processed in one day. The experience obtained operating the system on several Grid flavours was essential for performing user analysis using Grid resources. First tests of the distributed analysis system were then performed. In the preparation phase, data were registered in the LHC File Catalog (LFC) and replicated to external sites. For the main test, only a few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS computing management board decided to integrate the collaboration's distributed-analysis efforts in a single project, GANGA. The goal is to test the reconstruction and analysis software in a large-scale data production using Grid flavours at several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid; it provides job splitting a...

  16. Digital signal integrity and stability in the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Achenbach, R; Aharrouche, M; Andrei, V; Åsman, B; Barnett, B M; Bauss, B; Bendel, M; Bohm, C; Booth, J R A; Bracinik, J; Brawn, I P; Charlton, D G; Childers, J T; Collins, N J; Curtis, C J; Davis, A O; Eckweiler, S; Eisenhandler, E F; Faulkner, P J W; Fleckner, J; Föhlisch, F; Gee, C N P; Gillman, A R; Goringer, C; Groll, M; Hadley, D R; Hanke, P; Hellman, S; Hidvegi, A; Hillier, S J; Johansen, M; Kluge, E E; Kühl, T; Landon, M; Lendermann, V; Lilley, J N; Mahboubi, K; Mahout, G; Meier, K; Middleton, R P; Moa, T; Morris, J D; Müller, F; Neusiedl, A; Ohm, C; Oltmann, B; Perera, V J O; Prieur, D P F; Qian, W; Rieke, S; Rühr, F; Sankey, D P C; Schäfer, U; Schmitt, K; Schultz-Coulon, H C; Silverstein, S; Sjölin, J; Staley, R J; Stamen, R; Stockton, M C; Tan, C L A; Tapprogge, S; Thomas, J P; Thompson, P D; Watkins, P M; Watson, A; Weber, P; Wessels, M; Wildt, M

    2008-01-01

    The ATLAS Level-1 calorimeter trigger is a hardware-based system with the goal of identifying high-pT objects and measuring total and missing ET in the ATLAS calorimeters within an overall latency of 2.5 microseconds. The trigger system is composed of the Preprocessor, which digitises about 7200 analogue input channels, and two digital processors that identify high-pT signatures and calculate the energy sums. The digital part consists of multi-stage, pipelined custom-built modules. The high demands on connectivity between the initial analogue stage and the digital part, and between the custom-built modules, are presented. Furthermore, the techniques used to establish timing regimes and to verify connectivity and stable operation of these digital links are described.

  17. ATLAS DDM/DQ2 & NoSQL databases: Use cases and experiences

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    NoSQL databases. These include distributed file systems like HDFS, which support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value/document stores such as HBase, Cassandra or MongoDB. These databases provide solutions to particular types...
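
    As an illustration of the key-value/document approach, the sketch below stores and counts trace-like documents with pymongo; the database, collection and field names are hypothetical and do not reflect the actual DDM schema:

        # Schematic document-store usage with MongoDB/pymongo (all names
        # hypothetical; requires a MongoDB server at the given URI).
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        traces = client["ddm"]["traces"]

        traces.insert_one({
            "dataset": "data12_8TeV.periodA.physics_Muons",  # hypothetical name
            "operation": "get",
            "site": "NDGF-T1",
            "timestamp": 1335600000,
        })
        # Schema-less stores make per-key lookups and counters cheap:
        n = traces.count_documents({"site": "NDGF-T1", "operation": "get"})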

  18. ATLAS TDAQ System Integration and Commissioning

    CERN Document Server

    Negri, A

    2010-01-01

    The ATLAS detector will be exposed to proton-proton collisions at a centre-of-mass energy of 14 TeV with a bunch crossing rate of 40 MHz. A three-level trigger system has been designed to reduce this rate down to the level at which only interesting events are fully reconstructed. The level-1 trigger reduces the rate to 75 kHz via custom-built electronics. The Region of Interest Builder delivers the Region of Interest records to the second-level trigger, which runs the selection algorithms on commodity processors and brings the rate further down to ~3.5 kHz. Finally, the Event Filter reduces the rate to ~200 Hz for permanent storage. We review the trigger and data acquisition architecture and its in situ commissioning using the almost complete detector. Results on system functionality and performance based on cosmic-ray data, early experience with LHC beam in 2008, and preselected simulated events are presented.
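
    The quoted rates imply the following per-level rejection factors (a factor of 200,000 overall), computed here for illustration:

        # Rejection factors implied by the quoted rates at each trigger level.
        rates = {"bunch crossing": 40e6, "Level-1": 75e3,
                 "Level-2": 3.5e3, "Event Filter": 200.0}
        stages = list(rates)
        for prev, cur in zip(stages, stages[1:]):
            print(f"{prev} -> {cur}: x{rates[prev] / rates[cur]:,.0f} reduction")
        # 40 MHz -> 75 kHz (~x533), -> 3.5 kHz (~x21), -> 200 Hz (~x17.5)
        print(f"overall: x{rates['bunch crossing'] / rates['Event Filter']:,.0f}")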

  19. Z-dark search with the ATLAS detector

    CERN Document Server

    INSPIRE-00212108

    2016-01-01

    The search for a "hidden sector" via new light neutral bosons, the dark Z ($Z_{d}$), could be carried out through the study of the decay of the discovered Higgs-like boson or of any other undiscovered Higgs boson. After the LHC concluded a successful first period of running, the ATLAS Collaboration published its latest results on the $H \rightarrow Z_{d}Z_{d} \rightarrow 4l$ analysis using up to 20 fb$^{-1}$ of integrated luminosity at $\sqrt{s}=8$ TeV. In this proceeding I present a summary of the recent results of the search for the $Z_{d}$ in the $H \rightarrow Z_{d}Z_{d} \rightarrow 4l$ signature with the ATLAS detector at the LHC.

  20. Calculation Sheet for the Basic Design of the ATLAS Fluid System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyun Sik; Moon, S. K.; Yun, B. J.; Kwon, T. S.; Choi, K. Y.; Cho, S.; Park, C. K.; Lee, S. J.; Kim, Y. S.; Song, C. H.; Baek, W. P.; Hong, S. D

    2007-03-15

    The basic design of an integral effect test loop for pressurized water reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been carried out by the Thermal-Hydraulics Safety Research Team at the Korea Atomic Energy Research Institute (KAERI). The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, APR1400, and is scaled for full pressure and temperature conditions. This report includes calculation sheets for the basic design of the ATLAS fluid systems, which consist of a reactor pressure vessel with a core simulator, the primary loop piping, a pressurizer, reactor coolant pumps, steam generators, the secondary system, the safety system, the auxiliary system, and the heat loss compensation system. The present calculation sheets will help in understanding the basic design of the ATLAS fluid system and its underlying scaling methodology.
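
    A few scale factors follow directly from the quoted design ratios; the volume and time scalings below use the standard reduced-height integral-test scaling laws and are stated as assumptions rather than quoted from the report:

        # Scale factors implied by the quoted design ratios.
        length_scale = 1.0 / 2.0
        area_scale = 1.0 / 144.0
        volume_scale = length_scale * area_scale   # 1/288
        diameter_scale = area_scale ** 0.5         # 1/12
        time_scale = length_scale ** 0.5           # ~0.707 (assumed reduced-height law)
        print(volume_scale, diameter_scale, time_scale)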

  1. Calculation Sheet for the Basic Design of the ATLAS Fluid System

    International Nuclear Information System (INIS)

    Park, Hyun Sik; Moon, S. K.; Yun, B. J.; Kwon, T. S.; Choi, K. Y.; Cho, S.; Park, C. K.; Lee, S. J.; Kim, Y. S.; Song, C. H.; Baek, W. P.; Hong, S. D.

    2007-03-01

    The basic design of an integral effect test loop for pressurized water reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been carried out by the Thermal-Hydraulics Safety Research Team at the Korea Atomic Energy Research Institute (KAERI). The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, APR1400, and is scaled for full pressure and temperature conditions. This report includes calculation sheets for the basic design of the ATLAS fluid systems, which consist of a reactor pressure vessel with a core simulator, the primary loop piping, a pressurizer, reactor coolant pumps, steam generators, the secondary system, the safety system, the auxiliary system, and the heat loss compensation system. The present calculation sheets will help in understanding the basic design of the ATLAS fluid system and its underlying scaling methodology.

  2. ATLAS Thesis Award 2017

    CERN Multimedia

    Anthony, Katarina

    2018-01-01

    Winners of the ATLAS Thesis Award were presented with certificates and glass cubes during a ceremony on 22 February, 2018. They are pictured here with Karl Jakobs (ATLAS Spokesperson), Max Klein (ATLAS Collaboration Board Chair) and Katsuo Tokushuku (ATLAS Collaboration Board Deputy Chair).

  3. ATLAS

    CERN Multimedia

    Akhnazarov, V; Canepa, A; Bremer, J; Burckhart, H; Cattai, A; Voss, R; Hervas, L; Kaplon, J; Nessi, M; Werner, P; Ten kate, H; Tyrvainen, H; Vandelli, W; Krasznahorkay, A; Gray, H; Alvarez gonzalez, B; Eifert, T F; Rolando, G; Oide, H; Barak, L; Glatzer, J; Backhaus, M; Schaefer, D M; Maciejewski, J P; Milic, A; Jin, S; Von torne, E; Limbach, C; Medinnis, M J; Gregor, I; Levonian, S; Schmitt, S; Waananen, A; Monnier, E; Muanza, S G; Pralavorio, P; Talby, M; Tiouchichine, E; Tocut, V M; Rybkin, G; Wang, S; Lacour, D; Laforge, B; Ocariz, J H; Bertoli, W; Malaescu, B; Sbarra, C; Yamamoto, A; Sasaki, O; Koriki, T; Hara, K; Da silva gomes, A; Carvalho maneira, J; Marcalo da palma, A; Chekulaev, S; Tikhomirov, V; Snesarev, A; Buzykaev, A; Maslennikov, A; Peleganchuk, S; Sukharev, A; Kaplan, B E; Swiatlowski, M J; Nef, P D; Schnoor, U; Oakham, G F; Ueno, R; Orr, R S; Abouzeid, O; Haug, S; Peng, H; Kus, V; Vitek, M; Temming, K K; Dang, N P; Meier, K; Schultz-coulon, H; Geisler, M P; Sander, H; Schaefer, U; Ellinghaus, F; Rieke, S; Nussbaumer, A; Liu, Y; Richter, R; Kortner, S; Fernandez-bosman, M; Ullan comes, M; Espinal curull, J; Chiriotti alvarez, S; Caubet serrabou, M; Valladolid gallego, E; Kaci, M; Carrasco vela, N; Lancon, E C; Besson, N E; Gautard, V; Bracinik, J; Bartsch, V C; Potter, C J; Lester, C G; Moeller, V A; Rosten, J; Crooks, D; Mathieson, K; Houston, S C; Wright, M; Jones, T W; Harris, O B; Byatt, T J; Dobson, E; Hodgson, P; Hodgkinson, M C; Dris, M; Karakostas, K; Ntekas, K; Oren, D; Duchovni, E; Etzion, E; Oren, Y; Ferrer, L M; Testa, M; Doria, A; Merola, L; Sekhniaidze, G; Giordano, R; Ricciardi, S; Milazzo, A; Falciano, S; De pedis, D; Dionisi, C; Veneziano, S; Cardarelli, R; Verzegnassi, C; Soualah, R; Ochi, A; Ohshima, T; Kishiki, S; Linde, F L; Vreeswijk, M; Werneke, P; Muijs, A; Vankov, P H; Jansweijer, P P M; Dale, O; Lund, E; Bruckman de renstrom, P; Dabrowski, W; Adamek, J D; Wolters, H; Micu, L; Pantea, D; Tudorache, V; Mjoernmark, J; Klimek, P J; Ferrari, A; Abdinov, O; Akhoundov, A; Hashimov, R; Shelkov, G; Khubua, J; Ladygin, E; Lazarev, A; Glagolev, V; Dedovich, D; Lykasov, G; Zhemchugov, A; Zolnikov, Y; Ryabenko, M; Sivoklokov, S; Vasilyev, I; Shalimov, A; Lobanov, M; Paramoshkina, E; Mosidze, M; Bingul, A; Nodulman, L J; Guarino, V J; Yoshida, R; Drake, G R; Calafiura, P; Haber, C; Quarrie, D R; Alonso, J R; Anderson, C; Evans, H; Lammers, S W; Baubock, M; Anderson, K; Petti, R; Suhr, C A; Linnemann, J T; Richards, R A; Tollefson, K A; Holzbauer, J L; Stoker, D P; Pier, S; Nelson, A J; Isakov, V; Martin, A J; Adelman, J A; Paganini, M; Gutierrez, P; Snow, J M; Pearson, B L; Cleland, W E; Savinov, V; Wong, W; Goodson, J J; Li, H; Lacey, R A; Gordeev, A; Gordon, H; Lanni, F; Nevski, P; Rescia, S; Kierstead, J A; Liu, Z; Yu, W W H; Bensinger, J; Hashemi, K S; Bogavac, D; Cindro, V; Hoeferkamp, M R; Coelli, S; Iodice, M; Piegaia, R N; Alonso, F; Wahlberg, H P; Barberio, E L; Limosani, A; Rodd, N L; Jennens, D T; Hill, E C; Pospisil, S; Smolek, K; Schaile, D A; Rauscher, F G; Adomeit, S; Mattig, P M; Wahlen, H; Volkmer, F; Calvente lopez, S; Sanchis peris, E J; Pallin, D; Podlyski, F; Says, L; Boumediene, D E; Scott, W; Phillips, P W; Greenall, A; Turner, P; Gwilliam, C B; Kluge, T; Wrona, B; Sellers, G J; Millward, G; Adragna, P; Hartin, A; Alpigiani, C; Piccaro, E; Bret cano, M; Hughes jones, R E; Mercer, D; Oh, A; Chavda, V S; Carminati, L; Cavasinni, V; Fedin, O; Patrichev, S; Ryabov, Y; Nesterov, S; Grebenyuk, O; Sasso, J; Mahmood, H; Polsdofer, E; Dai, T; 
Ferretti, C; Liu, H; Hegazy, K H; Benjamin, D P; Zobernig, G; Ban, J; Brooijmans, G H; Keener, P; Williams, H H; Le geyt, B C; Hines, E J; Fadeyev, V; Schumm, B A; Law, A T; Kuhl, A D; Neubauer, M S; Shang, R; Gagliardi, G; Calabro, D; Conta, C; Zinna, M; Jones, G; Li, J; Stradling, A R; Hadavand, H K; Mcguigan, P; Chiu, P; Baldelomar, E; Stroynowski, R A; Kehoe, R L; De groot, N; Timmermans, C; Lach-heb, F; Addy, T N; Nakano, I; Moreno lopez, D; Grosse-knetter, J; Tyson, B; Rude, G D; Tafirout, R; Benoit, P; Danielsson, H O; Elsing, M; Fassnacht, P; Froidevaux, D; Ganis, G; Gorini, B; Lasseur, C; Lehmann miotto, G; Kollar, D; Aleksa, M; Sfyrla, A; Duehrssen-debling, K; Fressard-batraneanu, S; Van der ster, D C; Bortolin, C; Schumacher, J; Mentink, M; Geich-gimbel, C; Yau wong, K H; Lafaye, R; Crepe-renaudin, S; Albrand, S; Hoffmann, D; Pangaud, P; Meessen, C; Hrivnac, J; Vernay, E; Perus, A; Henrot versille, S L; Le dortz, O; Derue, F; Piccinini, M; Polini, A; Terada, S; Arai, Y; Ikeno, M; Fujii, H; Nagano, K; Ukegawa, F; Aguilar saavedra, J A; Conde muino, P; Castro, N F; Eremin, V; Kopytine, M; Sulin, V; Tsukerman, I; Korol, A; Nemethy, P; Bartoldus, R; Glatte, A; Chelsky, S; Van nieuwkoop, J; Bellerive, A; Sinervo, J K; Battaglia, A; Barbier, G J; Pohl, M; Rosselet, L; Alexandre, G B; Prokoshin, F; Pezoa rivera, R A; Batkova, L; Kladiva, E; Stastny, J; Kubes, T; Vidlakova, Z; Esch, H; Homann, M; Herten, L G; Zimmermann, S U; Pfeifer, B; Stenzel, H; Andrei, G V; Wessels, M; Buescher, V; Kleinknecht, K; Fiedler, F M; Schroeder, C D; Fernandez, E; Mir martinez, L; Vorwerk, V; Bernabeu verdu, J; Salt, J; Civera navarrete, J V; Bernard, R; Berriaud, C P; Chevalier, L P; Hubbard, R; Schune, P; Nikolopoulos, K; Batley, J R; Brochu, F M; Phillips, A W; Teixeira-dias, P J; Rose, M B D; Buttar, C; Buckley, A G; Nurse, E L; Larner, A B; Boddy, C; Henderson, J; Costanzo, D; Tarem, S; Maccarrone, G; Laurelli, P F; Alviggi, M; Chiaramonte, R; Izzo, V; Palumbo, V; Fraternali, M; Crosetti, G; Marchese, F; Yamaguchi, Y; Hessey, N P; Mechnich, J M; Liebig, W; Kastanas, K A; Sjursen, T B; Zalieckas, J; Cameron, D G; Banka, P; Kowalewska, A B; Dwuznik, M; Mindur, B; Boldea, V; Hedberg, V; Smirnova, O; Sellden, B; Allahverdiyev, T; Gornushkin, Y; Koultchitski, I; Tokmenin, V; Chizhov, M; Gongadze, A; Khramov, E; Sadykov, R; Krasnoslobodtsev, I; Smirnova, L; Kramarenko, V; Minaenko, A; Zenin, O; Beddall, A J; Ozcan, E V; Hou, S; Wang, S; Moyse, E; Willocq, S; Chekanov, S; Le compte, T J; Love, J R; Ciocio, A; Hinchliffe, I; Tsulaia, V; Gomez, A; Luehring, F; Zieminska, D; Huth, J E; Gonski, J L; Oreglia, M; Tang, F; Shochet, M J; Costin, T; Mcleod, A; Uzunyan, S; Martin, S P; Pope, B G; Schwienhorst, R H; Brau, J E; Ptacek, E S; Milburn, R H; Sabancilar, E; Lauer, R; Saleem, M; Mohamed meera lebbai, M R; Lou, X; Reeves, K B; Rijssenbeek, M; Novakova, P N; Rahm, D; Steinberg, P A; Wenaus, T J; Paige, F; Ye, S; Kotcher, J R; Assamagan, K A; Oliveira damazio, D; Maeno, T; Henry, A; Dushkin, A; Costa, G; Meroni, C; Resconi, S; Lari, T; Biglietti, M; Lohse, T; Gonzalez silva, M L; Monticelli, F G; Saavedra, A F; Patel, N D; Ciodaro xavier, T; Asevedo nepomuceno, A; Lefebvre, M; Albert, J E; Kubik, P; Faltova, J; Turecek, D; Solc, J; Schaile, O; Ebke, J; Losel, P J; Zeitnitz, C; Sturm, P D; Barreiro alonso, F; Modesto alapont, P; Soret medel, J; Garzon alama, E J; Gee, C N; Mccubbin, N A; Sankey, D; Emeliyanov, D; Dewhurst, A L; Houlden, M A; Klein, M; Burdin, S; Lehan, A K; Eisenhandler, E; Lloyd, S; Traynor, D 
P; Ibbotson, M; Marshall, R; Pater, J; Freestone, J; Masik, J; Haughton, I; Manousakis katsikakis, A; Sampsonidis, D; Krepouri, A; Roda, C; Sarri, F; Fukunaga, C; Nadtochiy, A; Kara, S O; Timm, S; Alam, S M; Rashid, T; Goldfarb, S; Espahbodi, S; Marley, D E; Rau, A W; Dos anjos, A R; Haque, S; Grau, N C; Havener, L B; Thomson, E J; Newcomer, F M; Hansl-kozanecki, G; Deberg, H A; Takeshita, T; Goggi, V; Ennis, J S; Olness, F I; Kama, S; Ordonez sanz, G; Koetsveld, F; Elamri, M; Mansoor-ul-islam, S; Lemmer, B; Kawamura, G; Bindi, M; Schulte, S; Kugel, A; Kretz, M P; Kurchaninov, L; Blanchot, G; Chromek-burckhart, D; Di girolamo, B; Francis, D; Gianotti, F; Nordberg, M Y; Pernegger, H; Roe, S; Boyd, J; Wilkens, H G; Pauly, T; Fabre, C; Tricoli, A; Bertet, D; Ruiz martinez, M A; Arnaez, O L; Lenzi, B; Boveia, A J; Gillberg, D I; Davies, J M; Zimmermann, R; Uhlenbrock, M; Kraus, J K; Narayan, R T; John, A; Dam, M; Padilla aranda, C; Bellachia, F; Le flour chollet, F M; Jezequel, S; Dumont dayot, N; Fede, E; Mathieu, M; Gensolen, F D; Alio, L; Arnault, C; Bouchel, M; Ducorps, A; Kado, M M; Lounis, A; Zhang, Z P; De vivie de regie, J; Beau, T; Bruni, A; Bruni, G; Grafstrom, P; Romano, M; Lasagni manghi, F; Massa, L; Shaw, K; Ikegami, Y; Tsuno, S; Kawanishi, Y; Benincasa, G; Blagov, M; Fedorchuk, R; Shatalov, P; Romaniouk, A; Belotskiy, K; Timoshenko, S; Hooft van huysduynen, L; Lewis, G H; Wittgen, M M; Mader, W F; Rudolph, C J; Gumpert, C; Mamuzic, J; Rudolph, G; Schmid, P; Corriveau, F; Belanger-champagne, C; Yarkoni, S; Leroy, C; Koffas, T; Harack, B D; Weber, M S; Beck, H; Leger, A; Gonzalez sevilla, S; Zhu, Y; Gao, J; Zhang, X; Blazek, T; Rames, J; Sicho, P; Kouba, T; Sluka, T; Lysak, R; Ristic, B; Kompatscher, A E; Von radziewski, H; Groll, M; Meyer, C P; Oberlack, H; Stonjek, S M; Cortiana, G; Werthenbach, U; Ibragimov, I; Czirr, H S; Cavalli-sforza, M; Puigdengoles olive, C; Tallada crespi, P; Marti i garcia, S; Gonzalez de la hoz, S; Guyot, C; Meyer, J; Schoeffel, L O; Garvey, J; Hawkes, C; Hillier, S J; Staley, R J; Salvatore, P F; Santoyo castillo, I; Carter, J; Yusuff, I B; Barlow, N R; Berry, T S; Savage, G; Wraight, K G; Steele, G E; Hughes, G; Walder, J W; Love, P A; Crone, G J; Waugh, B M; Boeser, S; Sarkar, A M; Holmes, A; Massey, R; Pinder, A; Nicholson, R; Korolkova, E; Katsoufis, I; Maltezos, S; Tsipolitis, G; Leontsinis, S; Levinson, L J; Shoa, M; Abramowicz, H E; Bella, G; Gershon, A; Urkovsky, E; Taiblum, N; Gatti, C; Della pietra, M; Lanza, A; Negri, A; Flaminio, V; Lacava, F; Petrolo, E; Pontecorvo, L; Rosati, S; Zanello, L; Pasqualucci, E; Di ciaccio, A; Giordani, M; Yamazaki, Y; Jinno, T; Nomachi, M; De jong, P J; Ferrari, P; Homma, J; Van der graaf, H; Igonkina, O B; Stugu, B S; Buanes, T; Pedersen, M; Turala, M; Olszewski, A J; Koperny, S Z; Onofre, A; Castro nunes fiolhais, M; Alexa, C; Cuciuc, C M; Akesson, T P A; Hellman, S L; Milstead, D A; Bondyakov, A; Pushnova, V; Budagov, Y; Minashvili, I; Romanov, V; Sniatkov, V; Tskhadadze, E; Kalinovskaya, L; Shalyugin, A; Tavkhelidze, A; Rumyantsev, L; Karpov, S; Soloshenko, A; Vostrikov, A; Borissov, E; Solodkov, A; Vorob'ev, A; Sidorov, S; Malyaev, V; Lee, S; Grudzinski, J J; Virzi, J S; Vahsen, S E; Lys, J; Penwell, J W; Yan, Z; Bernard, C S; Barreiro guimaraes da costa, J P; Oliver, J N; Merritt, F S; Brubaker, E M; Kapliy, A; Kim, J; Zutshi, V V; Burghgrave, B O; Abolins, M A; Arabidze, G; Caughron, S A; Frey, R E; Radloff, P T; Schernau, M; Murillo garcia, R; Porter, R A; Mccormick, C A; Karn, P J; Sliwa, K J; Demers 
konezny, S M; Strauss, M G; Mueller, J A; Izen, J M; Klimentov, A; Lynn, D; Polychronakos, V; Radeka, V; Sondericker, J I I I; Bathe, S; Duffin, S; Chen, H; De castro faria salgado, P E; Kersevan, B P; Lacker, H M; Schulz, H; Kubota, T; Tan, K G; Yabsley, B D; Nunes de moura junior, N; Pinfold, J; Soluk, R A; Ouellette, E A; Leitner, R; Sykora, T; Solar, M; Sartisohn, G; Hirschbuehl, D; Huning, D; Fischer, J; Terron cuadrado, J; Glasman kuguel, C B; Lacasta llacer, C; Lopez-amengual, J; Calvet, D; Chevaleyre, J; Daudon, F; Montarou, G; Guicheney, C; Calvet, S P J; Tyndel, M; Dervan, P J; Maxfield, S J; Hayward, H S; Beck, G; Cox, B; Da via, C; Paschalias, P; Manolopoulou, M; Ragusa, F; Cimino, D; Ezzi, M; Fiuza de barros, N F; Yildiz, H; Ciftci, A K; Turkoz, S; Zain, S B; Tegenfeldt, F; Chapman, J W; Panikashvili, N; Bocci, A; Altheimer, A D; Martin, F F; Fratina, S; Jackson, B D; Grillo, A A; Seiden, A; Watts, G T; Mangiameli, S; Johns, K A; O'grady, F T; Errede, D R; Darbo, G; Ferretto parodi, A; Leahu, M C; Farbin, A; Ye, J; Liu, T; Wijnen, T A; Naito, D; Takashima, R; Sandoval usme, C E; Zinonos, Z; Moreno llacer, M; Agricola, J B; Mcgovern, S A; Sakurai, Y; Trigger, I M; Qing, D; De silva, A S; Butin, F; Dell'acqua, A; Hawkings, R J; Lamanna, M; Mapelli, L; Passardi, G; Rembser, C; Tremblet, L; Andreazza, W; Dobos, D A; Koblitz, B; Bianco, M; Dimitrov, G V; Schlenker, S; Armbruster, A J; Rammensee, M C; Romao rodrigues, L F; Peters, K; Pozo astigarraga, M E; Yi, Y; Desch, K K; Huegging, F G; Muller, K K; Stillings, J A; Schaetzel, S; Xella, S; Hansen, J D; Colas, J; Daguin, G; Wingerter, I; Ionescu, G D; Ledroit, F; Lucotte, A; Clement, B E; Stark, J; Clemens, J; Djama, F; Knoops, E; Coadou, Y; Vigeolas-choury, E; Feligioni, L; Iconomidou-fayard, L; Imbert, P; Schaffer, A C; Nikolic, I; Trincaz-duvoid, S; Warin, P; Camard, A F; Ridel, M; Pires, S; Giacobbe, B; Spighi, R; Villa, M; Negrini, M; Sato, K; Gavrilenko, I; Akimov, A; Khovanskiy, V; Talyshev, A; Voronkov, A; Hakobyan, H; Mallik, U; Shibata, A; Konoplich, R; Barklow, T L; Koi, T; Straessner, A; Stelzer, B; Robertson, S H; Vachon, B; Stoebe, M; Keyes, R A; Wang, K; Billoud, T R V; Strickland, V; Batygov, M; Krieger, P; Palacino caviedes, G D; Gay, C W; Jiang, Y; Han, L; Liu, M; Zenis, T; Lokajicek, M; Staroba, P; Tasevsky, M; Popule, J; Svatos, M; Seifert, F; Landgraf, U; Lai, S T; Schmitt, K H; Achenbach, R; Schuh, N; Kiesling, C; Macchiolo, A; Nisius, R; Schacht, P; Von der schmitt, J G; Kortner, O; Atlay, N B; Segura sole, E; Grinstein, S; Neissner, C; Bruckner, D M; Oliver garcia, E; Boonekamp, M; Perrin, P; Gaillot, F M; Wilson, J A; Thomas, J P; Thompson, P D; Palmer, J D; Falk, I E; Chavez barajas, C A; Sutton, M R; Robinson, D; Kaneti, S A; Wu, T; Robson, A; Shaw, C; Buzatu, A; Qin, G; Jones, R; Bouhova-thacker, E V; Viehhauser, G; Weidberg, A R; Gilbert, L; Johansson, P D C; Orphanides, M; Vlachos, S; Behar harpaz, S; Papish, O; Lellouch, D J H; Turgeman, D; Benary, O; La rotonda, L; Vena, R; Tarasio, A; Marzano, F; Gabrielli, A; Di stante, L; Liberti, B; Aielli, G; Oda, S; Nozaki, M; Takeda, H; Hayakawa, T; Miyazaki, K; Maeda, J; Sugimoto, T; Pettersson, N E; Bentvelsen, S; Groenstege, H L; Lipniacka, A; Vahabi, M; Ould-saada, F; Chwastowski, J J; Hajduk, Z; Kaczmarska, A; Olszowska, J B; Trzupek, A; Staszewski, R P; Palka, M; Constantinescu, S; Jarlskog, G; Lundberg, B L A; Pearce, M; Ellert, M F; Bannikov, A; Fechtchenko, A; Iambourenko, V; Kukhtin, V; Pozdniakov, V; Topilin, N; Vorozhtsov, S; Khassanov, A; 
Fliaguine, V; Kharchenko, D; Nikolaev, K; Kotenov, K; Kozhin, A; Zenin, A; Ivashin, A; Golubkov, D; Beddall, A; Su, D; Dallapiccola, C J; Cranshaw, J M; Price, L; Stanek, R W; Gieraltowski, G; Zhang, J; Gilchriese, M; Shapiro, M; Ahlen, S; Morii, M; Taylor, F E; Miller, R J; Phillips, F H; Torrence, E C; Wheeler, S J; Benedict, B H; Napier, A; Hamilton, S F; Petrescu, T A; Boyd, G R J; Jayasinghe, A L; Smith, J M; Mc carthy, R L; Adams, D L; Le vine, M J; Zhao, X; Patwa, A M; Baker, M; Kirsch, L; Krstic, J; Simic, L; Filipcic, A; Seidel, S C; Cantore-cavalli, D; Baroncelli, A; Kind, O M; Scarcella, M J; Maidantchik, C L L; Seixas, J; Balabram filho, L E; Vorobel, V; Spousta, M; Strachota, P; Vokac, P; Slavicek, T; Bergmann, B L; Biebel, O; Kersten, S; Srinivasan, M; Trefzger, T; Vazeille, F; Insa, C; Kirk, J; Middleton, R; Burke, S; Klein, U; Morris, J D; Ellis, K V; Millward, L R; Giokaris, N; Ioannou, P; Angelidakis, S; Bouzakis, K; Andreazza, A; Perini, L; Chtcheguelski, V; Spiridenkov, E; Yilmaz, M; Kaya, U; Ernst, J; Mahmood, A; Saland, J; Kutnink, T; Holler, J; Kagan, H P; Wang, C; Pan, Y; Xu, N; Ji, H; Willis, W J; Tuts, P M; Litke, A; Wilder, M; Rothberg, J; Twomey, M S; Rizatdinova, F; Loch, P; Rutherfoord, J P; Varnes, E W; Barberis, D; Osculati-becchi, B; Brandt, A G; Turvey, A J; Benchekroun, D; Nagasaka, Y; Thanakornworakij, T; Quadt, A; Nadal serrano, J; Magradze, E; Nackenhorst, O; Musheghyan, H; Kareem, M; Chytka, L; Perez codina, E; Stelzer-chilton, O; Brunel, B; Henriques correia, A M; Dittus, F; Hatch, M; Haug, F; Hauschild, M; Huhtinen, M; Lichard, P; Schuh-erhard, S; Spigo, G; Avolio, G; Tsarouchas, C; Ahmad, I; Backes, M P; Barisits, M; Gadatsch, S; Cerv, M; Sicoe, A D; Nattamai sekar, L P; Fazio, D; Shan, L; Sun, X; Gaycken, G F; Hemperek, T; Petersen, T C; Alonso diaz, A; Moynot, M; Werlen, M; Hryn'ova, T; Gallin-martel, M; Wu, M; Touchard, F; Menouni, M; Fougeron, D; Le guirriec, E; Chollet, J C; Veillet, J; Barrillon, P; Prat, S; Krasny, M W; Roos, L; Boudarham, G; Lefebvre, G; Boscherini, D; Valentinetti, S; Acharya, B S; Miglioranzi, S; Kanzaki, J; Unno, Y; Yasu, Y; Iwasaki, H; Tokushuku, K; Maio, A; Rodrigues fernandes, B J; Pinto figueiredo raimundo ribeiro, N M; Bot, A; Shmeleva, A; Zaidan, R; Djilkibaev, R; Mincer, A I; Salnikov, A; Aracena, I A; Schwartzman, A G; Silverstein, D J; Fulsom, B G; Anulli, F; Kuhn, D; White, M J; Vetterli, M J; Stockton, M C; Mantifel, R L; Azuelos, G; Shoaleh saadi, D; Savard, P; Clark, A; Ferrere, D; Gaumer, O P; Diaz gutierrez, M A; Liu, Y; Dubnickova, A; Sykora, I; Strizenec, P; Weichert, J; Zitek, K; Naumann, T; Goessling, C; Klingenberg, R; Jakobs, K; Rurikova, Z; Werner, M W; Arnold, H R; Buscher, D; Hanke, P; Stamen, R; Dietzsch, T A; Kiryunin, A; Salihagic, D; Buchholz, P; Pacheco pages, A; Sushkov, S; Porto fernandez, M D C; Cruz josa, R; Vos, M A; Schwindling, J; Ponsot, P; Charignon, C; Kivernyk, O; Goodrick, M J; Hill, J C; Green, B J; Quarman, C V; Bates, R L; Allwood-spiers, S E; Quilty, D; Chilingarov, A; Long, R E; Barton, A E; Konstantinidis, N; Simmons, B; Davison, A R; Christodoulou, V; Wastie, R L; Gallas, E J; Cox, J; Dehchar, M; Behr, J K; Pickering, M A; Filippas, A; Panagoulias, I; Tenenbaum katan, Y D; Roth, I; Pitt, M; Citron, Z H; Benhammou, Y; Amram, N Y N; Soffer, A; Gorodeisky, R; Antonelli, M; Chiarella, V; Curatolo, M; Esposito, B; Nicoletti, G; Martini, A; Sansoni, A; Carlino, G; Del prete, T; Bini, C; Vari, R; Kuna, M; Pinamonti, M; Itoh, Y; Colijn, A P; Klous, S; Garitaonandia elejabarrieta, 
H; Rosendahl, P L; Taga, A V; Malecki, P; Malecki, P; Wolter, M W; Kowalski, T; Korcyl, G M; Caprini, M; Caprini, I; Dita, P; Olariu, A; Tudorache, A; Lytken, E; Hidvegi, A; Aliyev, M; Alexeev, G; Bardin, D; Kakurin, S; Lebedev, A; Golubykh, S; Chepurnov, V; Gostkin, M; Kolesnikov, V; Karpova, Z; Davkov, K I; Yeletskikh, I; Grishkevich, Y; Rud, V; Myagkov, A; Nikolaenko, V; Starchenko, E; Zaytsev, A; Fakhrutdinov, R; Cheine, I; Istin, S; Sahin, S; Teng, P; Chu, M L; Trilling, G H; Heinemann, B; Richoz, N; Degeorge, C; Youssef, S; Pilcher, J; Cheng, Y; Purohit, M V; Kravchenko, A; Calkins, R E; Blazey, G; Hauser, R; Koll, J D; Reinsch, A; Brost, E C; Allen, B W; Lankford, A J; Ciobotaru, M D; Slagle, K J; Haffa, B; Mann, A; Loginov, A; Cummings, J T; Loyal, J D; Skubic, P L; Boudreau, J F; Lee, B E; Redlinger, G; Wlodek, T; Carcassi, G; Sexton, K A; Yu, D; Deng, W; Metcalfe, J E; Panitkin, S; Sijacki, D; Mikuz, M; Kramberger, G; Tartarelli, G F; Farilla, A; Stanescu, C; Herrberg, R; Alconada verzini, M J; Brennan, A J; Varvell, K; Marroquim, F; Gomes, A A; Do amaral coutinho, Y; Gingrich, D; Moore, R W; Dolejsi, J; Valkar, S; Broz, J; Jindra, T; Kohout, Z; Kral, V; Mann, A W; Calfayan, P P; Langer, T; Hamacher, K; Sanny, B; Wagner, W; Flick, T; Redelbach, A R; Ke, Y; Higon-rodriguez, E; Donini, J N; Lafarguette, P; Adye, T J; Baines, J; Barnett, B; Wickens, F J; Martin, V J; Jackson, J N; Prichard, P; Kretzschmar, J; Martin, A J; Walker, C J; Potter, K M; Kourkoumelis, C; Tzamarias, S; Houiris, A G; Iliadis, D; Fanti, M; Bertolucci, F; Maleev, V; Sultanov, S; Rosenberg, E I; Krumnack, N E; Bieganek, C; Diehl, E B; Mc kee, S P; Eppig, A P; Harper, D R; Liu, C; Schwarz, T A; Mazor, B; Looper, K A; Wiedenmann, W; Huang, P; Stahlman, J M; Battaglia, M; Nielsen, J A; Zhao, T; Khanov, A; Kaushik, V S; Vichou, E; Liss, A M; Gemme, C; Morettini, P; Parodi, F; Passaggio, S; Rossi, L; Kuzhir, P; Ignatenko, A; Ferrari, R; Spairani, M; Pianori, E; Sekula, S J; Firan, A I; Cao, T; Hetherly, J W; Gouighri, M; Vassilakopoulos, V; Long, M C; Shimojima, M; Sawyer, L H; Brummett, R E; Losada, M A; Schorlemmer, A L; Mantoani, M; Bawa, H S; Mornacchi, G; Nicquevert, B; Palestini, S; Stapnes, S; Veness, R; Kotamaki, M J; Sorde, C; Iengo, P; Campana, S; Goossens, L; Zajacova, Z; Pribyl, L; Poveda torres, J; Marzin, A; Conti, G; Carrillo montoya, G D; Kroseberg, J; Gonella, L; Velz, T; Schmitt, S; Lobodzinska, E M; Lovschall-jensen, A E; Galster, G; Perrot, G; Cailles, M; Berger, N; Barnovska, Z; Delsart, P; Lleres, A; Tisserant, S; Grivaz, J; Matricon, P; Bellagamba, L; Bertin, A; Bruschi, M; De castro, S; Semprini cesari, N; Fabbri, L; Rinaldi, L; Quayle, W B; Truong, T N L; Kondo, T; Haruyama, T; Ng, C; Do valle wemans, A; Almeida veloso, F M; Konovalov, S; Ziegler, J M; Su, D; Lukas, W; Prince, S; Ortega urrego, E J; Teuscher, R J; Knecht, N; Pretzl, K; Borer, C; Gadomski, S; Koch, B; Kuleshov, S; Brooks, W K; Antos, J; Kulkova, I; Chudoba, J; Chyla, J; Tomasek, L; Bazalova, M; Messmer, I; Tobias, J; Sundermann, J E; Kuehn, S S; Kluge, E; Scharf, V L; Barillari, T; Kluth, S; Menke, S; Weigell, P; Schwegler, P; Ziolkowski, M; Casado lechuga, P M; Garcia, C; Sanchez, J; Costa mezquita, M J; Valero biot, J A; Laporte, J; Nikolaidou, R; Virchaux, M; Nguyen, V T H; Charlton, D; Harrison, K; Slater, M W; Newman, P R; Parker, A M; Ward, P; Mcgarvie, S A; Kilvington, G J; D'auria, S; O'shea, V; Mcglone, H M; Fox, H; Henderson, R; Kartvelishvili, V; Davies, B; Sherwood, P; Fraser, J T; Lancaster, M A; Tseng, J C; 
Hays, C P; Apolle, R; Dixon, S D; Parker, K A; Gazis, E; Papadopoulou, T; Panagiotopoulou, E; Karastathis, N; Hershenhorn, A D; Milov, A; Groth-jensen, J; Bilokon, H; Miscetti, S; Canale, V; Rebuzzi, D M; Capua, M; Bagnaia, P; De salvo, A; Gentile, S; Safai tehrani, F; Solfaroli camillocci, E; Sasao, N; Tsunada, K; Massaro, G; Magrath, C A; Van kesteren, Z; Beker, M G; Van den wollenberg, W; Bugge, L; Buran, T; Read, A L; Gjelsten, B K; Banas, E A; Turnau, J; Derendarz, D K; Kisielewska, D; Chesneanu, D; Rotaru, M; Maurer, J B; Wong, M L; Lund-jensen, B; Asman, B; Jon-and, K B; Silverstein, S B; Johansen, M; Alexandrov, I; Iatsounenko, I; Krumshteyn, Z; Peshekhonov, V; Rybaltchenko, K; Samoylov, V; Cheplakov, A; Kekelidze, G; Lyablin, M; Teterine, V; Bednyakov, V; Kruchonak, U; Shiyakova, M M; Demichev, M; Denisov, S P; Fenyuk, A; Djobava, T; Salukvadze, G; Cetin, S A; Brau, B P; Pais, P R; Proudfoot, J; Van gemmeren, P; Zhang, Q; Beringer, J A; Ely, R; Leggett, C; Pengg, F X; Barnett, M R; Quick, R E; Williams, S; Gardner jr, R W; Huston, J; Brock, R; Wanotayaroj, C; Unel, G N; Taffard, A C; Frate, M; Baker, K O; Tipton, P L; Hutchison, A; Walsh, B J; Norberg, S R; Su, J; Tsybyshev, D; Caballero bejar, J; Ernst, M U; Wellenstein, H; Vudragovic, D; Vidic, I; Gorelov, I V; Toms, K; Alimonti, G; Petrucci, F; Kolanoski, H; Smith, J; Jeng, G; Watson, I J; Guimaraes ferreira, F; Miranda vieira xavier, F; Araujo pereira, R; Poffenberger, P; Sopko, V; Elmsheuser, J; Wittkowski, J; Glitza, K; Gorfine, G W; Ferrer soria, A; Fuster verdu, J A; Sanchis lozano, A; Reinmuth, G; Busato, E; Haywood, S J; Mcmahon, S J; Qian, W; Villani, E G; Laycock, P J; Poll, A J; Rizvi, E S; Foster, J M; Loebinger, F; Forti, A; Plano, W G; Brown, G J A; Kordas, K; Vegni, G; Ohsugi, T; Iwata, Y; Cherkaoui el moursli, R; Sahin, M; Akyazi, E; Carlsen, A; Kanwal, B; Cochran jr, J H; Aronnax, M V; Lockner, M J; Zhou, B; Levin, D S; Weaverdyck, C J; Grom, G F; Rudge, A; Ebenstein, W L; Jia, B; Yamaoka, J; Jared, R C; Wu, S L; Banerjee, S; Lu, Q; Hughes, E W; Alkire, S P; Degenhardt, J D; Lipeles, E D; Spencer, E N; Savine, A; Cheu, E C; Lampl, W; Veatch, J R; Roberts, K; Atkinson, M J; Odino, G A; Polesello, G; Martin, T; White, A P; Stephens, R; Grinbaum sarkisyan, E; Vartapetian, A; Yu, J; Sosebee, M; Thilagar, P A; Spurlock, B; Bonde, R; Filthaut, F; Klok, P; Hoummada, A; Ouchrif, M; Pellegrini, G; Rafi tatjer, J M; Navarro, G A; Blumenschein, U; Weingarten, J C; Mueller, D; Graber, L; Gao, Y; Bode, A; Capeans garrido, M D M; Carli, T; Wells, P; Beltramello, O; Vuillermet, R; Dudarev, A; Salzburger, A; Torchiani, C I; Serfon, C L G; Sloper, J E; Duperrier, G; Lilova, P T; Knecht, M O; Lassnig, M; Anders, G; Deviveiros, P; Young, C; Sforza, F; Shaochen, C; Lu, F; Wermes, N; Wienemann, P; Schwindt, T; Hansen, P H; Hansen, J B; Pingel, A M; Massol, N; Elles, S L; Hallewell, G D; Rozanov, A; Vacavant, L; Fournier, D A; Poggioli, L; Puzo, P M; Tanaka, R; Escalier, M A; Makovec, N; Rezynkina, K; De cecco, S; Cavalleri, P G; Massa, I; Zoccoli, A; Tanaka, S; Odaka, S; Mitsui, S; Tomasio pina, J A; Santos, H F; Satsounkevitch, I; Harkusha, S; Baranov, S; Nechaeva, P; Kayumov, F; Kazanin, V; Asai, M; Mount, R P; Nelson, T K; Smith, D; Kenney, C J; Malone, C M; Kobel, M; Friedrich, F; Grohs, J P; Jais, W J; O'neil, D C; Warburton, A T; Vincter, M; Mccarthy, T G; Groer, L S; Pham, Q T; Taylor, W J; La marra, D; Perrin, E; Wu, X; Bell, W H; Delitzsch, C M; Feng, C; Zhu, C; Tokar, S; Bruncko, D; Kupco, A; Marcisovsky, M; Jakoubek, T; 
Bruneliere, R; Aktas, A; Narrias villar, D I; Tapprogge, S; Mattmann, J; Kroha, H; Crespo, J; Korolkov, I; Cavallaro, E; Cabrera urban, S; Mitsou, V; Kozanecki, W; Mansoulie, B; Pabot, Y; Etienvre, A; Bauer, F; Chevallier, F; Bouty, A R; Watkins, P; Watson, A; Faulkner, P J W; Curtis, C J; Murillo quijada, J A; Grout, Z J; Chapman, J D; Cowan, G D; George, S; Boisvert, V; Mcmahon, T R; Doyle, A T; Thompson, S A; Britton, D; Smizanska, M; Campanelli, M; Butterworth, J M; Loken, J; Renton, P; Barr, A J; Issever, C; Short, D; Crispin ortuzar, M; Tovey, D R; French, R; Rozen, Y; Alexander, G; Kreisel, A; Conventi, F; Raulo, A; Schioppa, M; Susinno, G; Tassi, E; Giagu, S; Luci, C; Nisati, A; Cobal, M; Ishikawa, A; Jinnouchi, O; Bos, K; Verkerke, W; Vermeulen, J; Van vulpen, I B; Kieft, G; Mora, K D; Olsen, F; Rohne, O M; Pajchel, K; Nilsen, J K; Wosiek, B K; Wozniak, K W; Badescu, E; Jinaru, A; Bohm, C; Johansson, E K; Sjoelin, J B R; Clement, C; Buszello, C P; Huseynova, D; Boyko, I; Popov, B; Poukhov, O; Vinogradov, V; Tsiareshka, P; Skvorodnev, N; Soldatov, A; Chuguev, A; Gushchin, V; Yazici, E; Lutz, M S; Malon, D; Vanyashin, A; Lavrijsen, W; Spieler, H; Biesiada, J L; Bahr, M; Kong, J; Tatarkhanov, M; Ogren, H; Van kooten, R J; Cwetanski, P; Butler, J M; Shank, J T; Chakraborty, D; Ermoline, I; Sinev, N; Whiteson, D O; Corso radu, A; Huang, J; Werth, M P; Kastoryano, M; Meirose da silva costa, B; Namasivayam, H; Hobbs, J D; Schamberger jr, R D; Guo, F; Potekhin, M; Popovic, D; Gorisek, A; Sokhrannyi, G; Hofsajer, I W; Mandelli, L; Ceradini, F; Graziani, E; Giorgi, F; Zur nedden, M E G; Grancagnolo, S; Volpi, M; Nunes hanninger, G; Rados, P K; Milesi, M; Cuthbert, C J; Black, C W; Fink grael, F; Fincke-keeler, M; Keeler, R; Kowalewski, R V; Berghaus, F O; Qi, M; Davidek, T; Tas, P; Jakubek, J; Duckeck, G; Walker, R; Mitterer, C A; Harenberg, T; Sandvoss, S A; Del peso, J; Llorente merino, J; Gonzalez millan, V; Irles quiles, A; Crouau, M; Gris, P L Y; Liauzu, S; Romano saez, S M; Gallop, B J; Jones, T J; Austin, N C; Morris, J; Duerdoth, I; Thompson, R J; Kelly, M P; Leisos, A; Garas, A; Pizio, C; Venda pinto, B A; Kudin, L; Qian, J; Wilson, A W; Mietlicki, D; Long, J D; Sang, Z; Arms, K E; Rahimi, A M; Moss, J J; Oh, S H; Parker, S I; Parsons, J; Cunitz, H; Vanguri, R S; Sadrozinski, H; Lockman, W S; Martinez-mc kinney, G; Goussiou, A; Jones, A; Lie, K; Hasegawa, Y; Olcese, M; Gilewsky, V; Harrison, P F; Janus, M; Spangenberg, M; De, K; Ozturk, N; Pal, A K; Darmora, S; Bullock, D J; Oviawe, O; Derkaoui, J E; Rahal, G; Sircar, A; Frey, A S; Stolte, P; Rosien, N; Zoch, K; Li, L; Schouten, D W; Catinaccio, A; Ciapetti, M; Delruelle, N; Ellis, N; Farthouat, P; Hoecker, A; Klioutchnikova, T; Macina, D; Malyukov, S; Spiwoks, R D; Unal, G P; Vandoni, G; Petersen, B A; Pommes, K; Nairz, A M; Wengler, T; Mladenov, D; Solans sanchez, C A; Lantzsch, K; Schmieden, K; Jakobsen, S; Ritsch, E; Sciuccati, A; Alves dos santos, A M; Ouyang, Q; Zhou, M; Brock, I C; Janssen, J; Katzy, J; Anders, C F; Nilsson, B S; Bazan, A; Di ciaccio, L; Yildizkaya, T; Collot, J; Malek, F; Trocme, B S; Breugnon, P; Godiot, S; Adam bourdarios, C; Coulon, J; Duflot, L; Petroff, P G; Zerwas, D; Lieuvin, M; Calderini, G; Laporte, D; Ocariz, J; Gabrielli, A; Ohska, T K; Kurochkin, Y; Kantserov, V; Vasilyeva, L; Speransky, M; Smirnov, S; Antonov, A; Bulekov, O; Tikhonov, Y; Sargsyan, L; Vardanyan, G; Budick, B; Kocian, M L; Luitz, S; Young, C C; Grenier, P J; Kelsey, M; Black, J E; Kneringer, E; Jussel, P; Horton, A J; Beaudry, J; 
Chandra, A; Ereditato, A; Topfel, C M; Mathieu, R; Bucci, F; Muenstermann, D; White, R M; He, M; Urban, J; Straka, M; Vrba, V; Schumacher, M; Parzefall, U; Mahboubi, K; Sommer, P O; Koepke, L H; Bethke, S; Moser, H; Wiesmann, M; Walkowiak, W A; Fleck, I J; Martinez-perez, M; Sanchez sanchez, C A; Jorgensen roca, S; Accion garcia, E; Sainz ruiz, C A; Valls ferrer, J A; Amoros vicente, G; Vives torrescasana, R; Ouraou, A; Formica, A; Hassani, S; Watson, M F; Cottin buracchio, G F; Bussey, P J; Saxon, D; Ferrando, J E; Collins-tooth, C L; Hall, D C; Cuhadar donszelmann, T; Dawson, I; Duxfield, R; Argyropoulos, T; Brodet, E; Livneh, R; Shougaev, K; Reinherz, E I; Guttman, N; Beretta, M M; Vilucchi, E; Aloisio, A; Patricelli, S; Caprio, M; Cevenini, F; De vecchi, C; Livan, M; Rimoldi, A; Vercesi, V; Ayad, R; Mastroberardino, A; Ciapetti, G; Luminari, L; Rescigno, M; Santonico, R; Salamon, A; Del papa, C; Kurashige, H; Homma, Y; Tomoto, M; Horii, Y; Sugaya, Y; Hanagaki, K; Bobbink, G; Kluit, P M; Koffeman, E N; Van eijk, B; Lee, H; Eigen, G; Dorholt, O; Strandlie, A; Strzempek, P B; Dita, S; Stoicea, G; Chitan, A; Leven, S S; Moa, T; Brenner, R; Ekelof, T J C; Olshevskiy, A; Roumiantsev, V; Chlachidze, G; Zimine, N; Gusakov, Y; Grigalashvili, N; Mineev, M; Potrap, I; Barashkou, A; Shoukavy, D; Shaykhatdenov, B; Pikelner, A; Gladilin, L; Ammosov, V; Abramov, A; Arik, M; Sahinsoy, M; Uysal, Z; Azizi, K; Hotinli, S C; Zhou, S; Berger, E; Blair, R; Underwood, D G; Einsweiler, K; Garcia-sciveres, M A; Siegrist, J L; Kipnis, I; Dahl, O; Holland, S; Barbaro galtieri, A; Smith, P T; Parua, N; Franklin, M; Mercurio, K M; Tong, B; Pod, E; Cole, S G; Hopkins, W H; Guest, D H; Severini, H; Marsicano, J J; Abbott, B K; Wang, Q; Lissauer, D; Ma, H; Takai, H; Rajagopalan, S; Protopopescu, S D; Snyder, S S; Undrus, A; Popescu, R N; Begel, M A; Blocker, C A; Amelung, C; Mandic, I; Macek, B; Tucker, B H; Citterio, M; Troncon, C; Orestano, D; Taccini, C; Romeo, G L; Dova, M T; Taylor, G N; Gesualdi manhaes, A; Mcpherson, R A; Sobie, R; Taylor, R P; Dolezal, Z; Kodys, P; Slovak, R; Sopko, B; Vacek, V; Sanders, M P; Hertenberger, R; Meineck, C; Becks, K; Kind, P; Sandhoff, M; Cantero garcia, J; De la torre perez, H; Castillo gimenez, V; Ros, E; Hernandez jimenez, Y; Chadelas, R; Santoni, C; Washbrook, A J; O'brien, B J; Wynne, B M; Mehta, A; Vossebeld, J H; Landon, M; Teixeira dias castanheira, M; Cerrito, L; Keates, J R; Fassouliotis, D; Chardalas, M; Manousos, A; Grachev, V; Seliverstov, D; Sedykh, E; Cakir, O; Ciftci, R; Edson, W; Prell, S A; Rosati, M; Stroman, T; Jiang, H; Neal, H A; Li, X; Gan, K K; Smith, D S; Kruse, M C; Ko, B R; Leung fook cheong, A M; Cole, B; Angerami, A R; Greene, Z S; Kroll, J I; Van berg, R P; Forbush, D A; Lubatti, H; Raisher, J; Shupe, M A; Wolin, S; Oshita, H; Gaudio, G; Das, R; Konig, A C; Croft, V A; Harvey, A; Maaroufi, F; Melo, I; Greenwood jr, Z D; Shabalina, E; Mchedlidze, G; Drechsler, E; Rieger, J K; Blackston, M; Colombo, T

    2002-01-01

    ATLAS is a general-purpose experiment for recording proton-proton collisions at the LHC. The ATLAS collaboration consists of 144 participating institutions (June 1998) with more than 1750 physicists and engineers (700 from non-Member States). The detector design has been optimized to cover the largest possible range of LHC physics: searches for Higgs bosons and alternative schemes for the spontaneous symmetry-breaking mechanism; searches for supersymmetric particles, new gauge bosons, leptoquarks, and quark and lepton compositeness indicating extensions to the Standard Model and new physics beyond it; studies of the origin of CP violation via high-precision measurements of CP-violating B-decays; and high-precision measurements of the third quark family, such as the top-quark mass and decay properties, rare decays of B-hadrons, spectroscopy of rare B-hadrons, and $B^0_s$-mixing. The ATLAS detector includes an inner tracking detector inside a 2 T solenoid providing an axial...

  4. Supporting ATLAS

    CERN Multimedia

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator. The installation of the feet is scheduled to finish during January 2004 with an installation precision at the 1 mm level despite their height of 5.3 metres. The manufacture was carried out in Russia (Company Izhorskiye Zavody in St. Petersburg), as part of a Russian and JINR Dubna in-kind contribution to ATLAS. Involved in the installation is a team from IHEP-Protvino (Russia), the ATLAS technical co-ordination team at CERN, and the CERN survey team. In all, about 15 people are involved. After the feet are in place, the barrel toroid magnet and the barrel calorimeters will be installed. This will keep the ATLAS team busy for the entire year 2004.

  5. 17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

    CERN Multimedia

    Mona Schweizer

    2008-01-01

    17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

  6. Trigger Menu-aware Monitoring for the ATLAS experiment

    Science.gov (United States)

    Hoad, Xanthe; ATLAS Collaboration

    2017-10-01

    We present a “trigger menu-aware” monitoring system designed for the Run-2 data-taking of the ATLAS experiment at the LHC. Unlike Run-1, where a change in the trigger menu had to be matched by the installation of a new software release at Tier-0, the new monitoring system aims to simplify the ATLAS operational workflows. This is achieved by integrating monitoring updates in a quick and flexible manner via an Oracle DB interface. We present the design and the implementation of the menu-aware monitoring, along with lessons from the operational experience of the new system with the 2016 collision data.
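
    As an illustration of the database-driven mechanism described above, the following minimal Python sketch shows how a monitoring configuration keyed to a trigger menu might be written to and read from an Oracle database with the cx_Oracle bindings. The connection details and the MENU_MONITORING table with its SMK (super-master key) column are illustrative assumptions, not the actual ATLAS schema:

        import cx_Oracle  # Python bindings for Oracle

        # Hypothetical credentials and DSN; the real Tier-0 setup differs.
        conn = cx_Oracle.connect("monit_user", "secret", "dbhost:1521/orcl")
        cur = conn.cursor()

        smk = 2046  # example super-master key identifying one trigger menu

        # Upload a monitoring configuration tied to that menu...
        cur.execute(
            "INSERT INTO menu_monitoring (smk, tool_name, config_json) "
            "VALUES (:smk, :tool, :cfg)",
            smk=smk, tool="HLTMuonMon", cfg='{"histograms": ["pt", "eta"]}')
        conn.commit()

        # ...and read it back at reconstruction time, so that a menu change
        # requires a database row rather than a new software release at Tier-0.
        cur.execute(
            "SELECT tool_name, config_json FROM menu_monitoring WHERE smk = :smk",
            smk=smk)
        for tool, cfg in cur:
            print(tool, cfg)
        conn.close()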

  7. Measurement of the radiation fields in the ATLAS detector and its cavern with the ATLAS-MPX silicon pixel detectors

    Science.gov (United States)

    Bouchami, Jihene

    -MPX devices response and the luminosity are correlated, the results of measuring radiation levels are expressed in terms of particle fluences per unit integrated luminosity. A significant deviation has been obtained when comparing these fluences with those predicted by GCALOR, which is one of the ATLAS detector simulations. In addition, radiation measurements performed at the end of proton-proton collisions have demonstrated that the decay of radionuclides produced during collisions can be observed with the ATLAS-MPX devices. The residual activation of ATLAS components can be measured with these devices by means of ambient dose equivalent calibration. Keywords: pattern recognition, charge sharing effect, neutron detection efficiency, luminosity, van der Meer method, particle fluences, GCALOR simulation, residual activation, ambient dose equivalent.

  8. MBAT: A scalable informatics system for unifying digital atlasing workflows

    Directory of Open Access Journals (Sweden)

    Sane Nikhil

    2010-12-01

    Full Text Available Abstract Background Digital atlases provide a common semantic and spatial coordinate system that can be leveraged to compare, contrast, and correlate data from disparate sources. As the quality and amount of biological data continue to advance and grow, searching, referencing, and comparing this data with a researcher's own data is essential. However, the integration process is cumbersome and time-consuming due to misaligned data, implicitly defined associations, and incompatible data sources. This work addresses these challenges by providing a unified and adaptable environment to accelerate the workflow to gather, align, and analyze the data. Results The MouseBIRN Atlasing Toolkit (MBAT) project was developed as a cross-platform, free open-source application that unifies and accelerates the digital atlas workflow. A tiered, plug-in architecture was designed for the neuroinformatics and genomics goals of the project to provide a modular and extensible design. MBAT provides the ability to use a single query to search and retrieve data from multiple data sources, align image data using the user's preferred registration method, composite data from multiple sources in a common space, and link relevant informatics information to the current view of the data or atlas. The workspaces leverage tool plug-ins to extend and allow future extensions of the basic workspace functionality. A wide variety of tool plug-ins were developed that integrate pre-existing as well as newly created technology into each workspace. Novel atlasing features were also developed, such as supporting multiple label sets, dynamic selection and grouping of labels, and synchronized, context-driven display of ontological data. Conclusions MBAT empowers researchers to discover correlations among disparate data by providing a unified environment for bringing together distributed reference resources, a user's image data, and biological atlases into the same spatial or semantic context.

  9. Large Scale Software Building with CMake in ATLAS

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration; Obreshkov, Emil; Undrus, Alexander

    2016-01-01

    The offline software of the ATLAS experiment at the LHC (Large Hadron Collider) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector trigger system to select LHC collision events during data taking. ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the mentioned software packages. This also makes it possible to develop and test new and modifi...

  10. Large scale software building with CMake in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00218447; The ATLAS collaboration; Elmsheuser, Johannes; Obreshkov, Emil; Undrus, Alexander

    2017-01-01

    The offline software of the ATLAS experiment at the LHC (Large Hadron Collider) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector trigger system to select LHC collision events during data taking. ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the mentioned software packages. This also makes it possible to develop and test new and modifi...

  11. Common support and integration of the BMS/BMF type MDT/RPC chambers of the muon spectrometer of the ATLAS experiment

    International Nuclear Information System (INIS)

    Barashkov, A.V.; Glonti, G.L.; Gongadze, A.L.; Gostkin, M.I.; Gus'kov, A.V.; Dedovich, D.V.; Demichev, M.A.; Zhemchugov, A.S.; Il'yushenko, E.N.; Kotov, S.A.; Korolevich, Ya.V.; Kruchonok, V.G.; Krumshtejn, Z.V.; Kuznetsov, N.K.; Lomidze, D.D.; Potrap, I.N.; Kharchenko, D.V.; Tskhadadze, Eh.G.; Chepurnov, V.F.; Shelkov, G.A.; Podkladkin, S.Yu.; Sekhniaidze, G.G.

    2005-01-01

    The common support system for muon BMS/BMF drift chambers with trigger RPC chambers for the muon spectrometer of the ATLAS experiment is described. The support systems are intended for the chambers' integration into combined modules and for the subsequent installation in the experimental set-up. The technology of chamber integration is described. The sagging of the drift chambers was tested by tilting the modules at different angles. The measurements were performed by means of the RASNIK optical system. The normal operation of the kinematic supports was confirmed. We also present the method of sag regulation for the BMS/BMF chambers lying in the horizontal plane, which minimizes the difference between the signal-wire and detector-tube-body sags when the modules are later installed in their working positions.

  12. ATLAS ITk and new pixel sensors technologies

    CERN Document Server

    Gaudiello, A

    2016-01-01

    During the 2023–2024 shutdown, the Large Hadron Collider (LHC) will be upgraded to reach an instantaneous luminosity up to 7×10$^{34}$ cm$^{−2}$s$^{−1}$. This upgrade of the accelerator is called the High-Luminosity LHC (HL-LHC). The ATLAS detector will be changed to meet the challenges of the HL-LHC: an average of 200 pile-up events in every bunch crossing, and an integrated luminosity of 3000 fb$^{−1}$ over ten years. The HL-LHC luminosity conditions are too extreme for the silicon (pixel and strip) detectors and the straw-tube transition radiation tracker (TRT) of the current ATLAS tracking system. Therefore the ATLAS inner tracker is being completely rebuilt for HL-LHC data-taking, and the new system is called the Inner Tracker (ITk). In this upgrade the TRT will be removed in favor of an all-new, all-silicon tracker composed only of strip and pixel detectors. An overview of the new layouts under study is given, and the pixel sensor technologies under development are explained.

  13. A Prototype Ontology Tool and Interface for Coastal Atlas Interoperability

    Science.gov (United States)

    Wright, D. J.; Bermudez, L.; O'Dea, L.; Haddad, T.; Cummins, V.

    2007-12-01

    While significant capacity has been built in the field of web-based coastal mapping and informatics in the last decade, little has been done to take stock of the implications of these efforts or to identify best practice in terms of taking lessons learned into consideration. This study reports on the second of two transatlantic workshops that brought together key experts from Europe, the United States and Canada to examine state-of-the-art developments in coastal web atlases (CWA), based on web-enabled geographic information systems (GIS), along with future needs in mapping and informatics for the coastal practitioner community. While multiple benefits are derived from these tailor-made atlases (e.g. speedy access to multiple sources of coastal data and information; economic use of time by avoiding individual contact with different data holders), the potential exists to derive added value from the integration of disparate CWAs, to optimize decision-making at a variety of levels and across themes. The second workshop focused on the development of a strategy to make coastal web atlases interoperable by way of controlled vocabularies and ontologies. The strategy is based on a web-service-oriented architecture and an implementation of Open Geospatial Consortium (OGC) web services, such as Web Feature Services (WFS) and Web Map Services (WMS). Atlases publish Catalogue Services for the Web (CSW) using ISO 19115 metadata and controlled vocabularies encoded as Uniform Resource Identifiers (URIs). URIs allow the terminology of each atlas to be uniquely identified and facilitate the mapping of terminologies using semantic web technologies. A domain ontology was also created to formally represent coastal erosion terminology as a use case, with a test linkage of those terms between the Marine Irish Digital Atlas and the Oregon Coastal Atlas. A web interface is being developed to discover coastal hazard themes in distributed coastal atlases as part of a broader International Coastal...
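
    The catalogue layer described above can be exercised with a few lines of Python via the OWSLib package, which speaks OGC CSW. The endpoint URL and search term below are placeholders, not one of the atlases named in the abstract:

        from owslib.csw import CatalogueServiceWeb
        from owslib.fes import PropertyIsLike

        # Placeholder CSW endpoint publishing ISO 19115 metadata records.
        csw = CatalogueServiceWeb("https://example.org/geonetwork/srv/eng/csw")

        # Full-text search of the catalogue for coastal-erosion records.
        query = PropertyIsLike("csw:AnyText", "%coastal erosion%")
        csw.getrecords2(constraints=[query], maxrecords=10)

        for rec_id, rec in csw.records.items():
            print(rec_id, rec.title)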

  14. ATLAS Live: Collaborative Information Streams

    CERN Document Server

    Goldfarb, S; The ATLAS collaboration

    2011-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at th...

  15. ATLAS Live: Collaborative Information Streams

    CERN Document Server

    Goldfarb, S; The ATLAS collaboration

    2010-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using the SCALA digital signage software system. The system is robust and flexible, allowing for the usage of scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter- and intra-screen divisibility. The video is made available to the collaboration or the public through the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video t...

  16. Measurement of the W boson mass with the ATLAS detector

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00408270

    This thesis describes a measurement of the W boson mass with the ATLAS detector, based on the data set recorded by ATLAS in 2011 at a centre-of-mass energy of 7 TeV and corresponding to 4.6 inverse femtobarns of integrated luminosity. Measurements are performed through template fits to the transverse momentum distributions of charged leptons and to the transverse mass distributions of the W boson, in the electron and muon decay modes in various kinematic categories. The individual measurements are found to be consistent, and their combination leads to a value of...

  17. Probabilistic liver atlas construction.

    Science.gov (United States)

    Dura, Esther; Domingo, Juan; Ayala, Guillermo; Marti-Bonmati, Luis; Goceri, E

    2017-01-13

    Anatomical atlases are 3D volumes or shapes representing an organ or structure of the human body. They contain either the prototypical shape of the object of interest together with other shapes representing its statistical variations (statistical atlas) or a probability map of belonging to the object (probabilistic atlas). Probabilistic atlases are mostly built with simple estimations only involving the data at each spatial location. A new method for probabilistic atlas construction that uses a generalized linear model is proposed. This method aims to improve the estimation of the probability to be covered by the liver. Furthermore, all methods to build an atlas involve previous coregistration of the sample of shapes available. The influence of the geometrical transformation adopted for registration in the quality of the final atlas has not been sufficiently investigated. The ability of an atlas to adapt to a new case is one of the most important quality criteria that should be taken into account. The presented experiments show that some methods for atlas construction are severely affected by the previous coregistration step. We show the good performance of the new approach. Furthermore, results suggest that extremely flexible registration methods are not always beneficial, since they can reduce the variability of the atlas and hence its ability to give sensible values of probability when used as an aid in segmentation of new cases.
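
    The distinction the abstract draws, between simple per-voxel estimation and a model-based estimate, can be sketched in a few lines of Python. The baseline below is the standard voxel-wise relative frequency over co-registered binary masks; the logistic-regression variant is only GLM-flavoured and illustrative, not the authors' exact model, and the random masks stand in for real segmented volumes:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        shape = (16, 16, 16)

        # Toy stand-ins for 10 co-registered binary organ masks.
        masks = rng.integers(0, 2, size=(10,) + shape).astype(int)

        # Baseline probabilistic atlas: voxel-wise frequency of coverage.
        atlas = masks.mean(axis=0)

        # GLM-style alternative: regress per-voxel coverage on spatial
        # coordinates with a logistic link, which borrows strength across
        # voxels instead of estimating each location in isolation.
        zz, yy, xx = np.meshgrid(*(np.arange(s) for s in shape), indexing="ij")
        coords = np.column_stack([zz.ravel(), yy.ravel(), xx.ravel()])
        X = np.tile(coords, (len(masks), 1))       # one row per (subject, voxel)
        y = masks.reshape(len(masks), -1).ravel()  # inside/outside label per row
        glm = LogisticRegression(max_iter=1000).fit(X, y)
        atlas_glm = glm.predict_proba(coords)[:, 1].reshape(shape)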

  18. ATLAS database application enhancements using Oracle 11g

    International Nuclear Information System (INIS)

    Dimitrov, G; Canali, L; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post data-taking analysis, file management over the grid, job submission and management, and condition data replication to remote sites. The Oracle Relational Database Management System (RDBMS) has been addressing the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the 260 hosted database schemas (in the most common case each schema is related to a dedicated client application with its own requirements). At the beginning of 2012 all ATLAS databases at CERN were upgraded to the newest Oracle version at the time: Oracle 11g Release 2. Oracle 11g comes with several key improvements compared to previous database engine versions. In this work we present our evaluation of the most relevant new features of Oracle 11g of interest for ATLAS applications and use cases. Notably, we report on the performance and scalability enhancements obtained in production since the Oracle 11g deployment during Q1 2012, and we outline plans for future work in this area.

  19. Report to users of ATLAS

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1995-05-01

    This report contains discussions in the following areas: status of the Atlas accelerator; highlights of recent research at Atlas; a concept for an advanced exotic beam facility based on Atlas; the program advisory committee; the Atlas executive committee; and Atlas and the ANL Physics Division on the World Wide Web.

  20. Federating Distributed Storage For Clouds In ATLAS

    CERN Document Server

    Berghaus, Frank; The ATLAS collaboration

    2017-01-01

    Input data for applications that run in cloud computing centres can be stored at distant repositories, often with multiple copies of the popular data stored at many sites. Locating and retrieving the remote data can be challenging, and we believe that federating the storage can address this problem. A federation locates the closest copy of the data, currently on the basis of GeoIP information. We are using the DynaFed data federation software solution developed by CERN IT. DynaFed supports several industry-standard connection protocols, such as Amazon's S3, Microsoft's Azure, as well as WebDAV and HTTP. Protocol-dependent authentication is hidden from the user by using their X509 certificate. We have set up an instance of DynaFed and integrated it into the ATLAS Distributed Data Management system. We report on the challenges faced during the installation and integration. We have tested ATLAS analysis jobs submitted by the PanDA production system and we report on our first experiences with its op...
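
    The redirection behaviour at the heart of such a federation can be probed with plain HTTP. In the hedged sketch below, the federation URL is a placeholder and the X509 handling is reduced to a client certificate pair; a production DynaFed instance would typically sit behind a VOMS-aware frontend:

        import requests

        # Placeholder DynaFed endpoint and logical file path.
        url = "https://dynafed.example.org/fed/atlas/data17/AOD.example.root"
        cert = ("usercert.pem", "userkey.pem")  # X509 client certificate

        # Ask the federator where the file is without downloading it:
        # DynaFed replies with a redirect to the replica it judges
        # closest, currently chosen on the basis of GeoIP information.
        resp = requests.head(url, allow_redirects=False, cert=cert)
        if resp.status_code in (302, 307):
            print("closest replica:", resp.headers["Location"])

        # Following redirects instead streams the file from that replica.
        data = requests.get(url, cert=cert).content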

  1. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed worldwide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by HammerCloud, to automatic exclusion from production or analysis activities.

  2. Xcache in the ATLAS Distributed Computing Environment

    CERN Document Server

    Hanushevsky, Andrew; The ATLAS collaboration

    2018-01-01

    Built upon the Xrootd Proxy Cache (Xcache), we developed additional features to adapt Xcache to the ATLAS distributed computing and data environment, especially its data management system RUCIO, to help improve the cache hit rate, as well as features that make Xcache easy to use, similar to the way the Squid cache is used by the HTTP protocol. We are optimizing Xcache for HPC environments, and adapting it as a data-delivery component of the HL-LHC Data Lakes design. We packaged the software in CVMFS and in Docker and Singularity containers in order to standardize the deployment and reduce the cost of resolving issues at remote sites. We are also integrating it into RUCIO as a volatile storage system, and into various ATLAS workflows such as user analysis.
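
    From a job's point of view, reading through an Xcache proxy is just an xrootd open against the cache host, as in this minimal sketch with the XRootD Python bindings; the host name and file path are placeholders, and real ATLAS jobs obtain them from the site configuration:

        from XRootD import client
        from XRootD.client.flags import OpenFlags

        # Placeholder cache endpoint; blocks read through the proxy are
        # kept locally, so later jobs at the site avoid the remote origin.
        url = "root://xcache.example.org:1094//atlas/rucio/data18/AOD.example.root"

        f = client.File()
        status, _ = f.open(url, OpenFlags.READ)
        if not status.ok:
            raise IOError(status.message)
        status, first_kb = f.read(offset=0, size=1024)  # bytes arrive via the cache
        f.close()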

  3. The Locomotive is running full speed in the ATLAS MUONs

    CERN Multimedia

    Mikenberg, G.

    The ATLAS MUON Spectrometer is, like most of the ATLAS systems, a large collection of detectors that operate at the limit of the technology. They have to provide the MUON trigger for the ATLAS detector over very large surfaces (7000 m2) and measure the passage of MUONs over distances ranging between 5 and 13 m, with relative precisions between the various measurement planes of a few tens of microns, while controlling various external parameters ranging from the relative positions of the detectors (alignment systems controlled to the level of 20 microns) to the magnetic field (to be reconstructed at the level of 20 Gauss). Although many of the integration problems with the rest of the ATLAS detectors have not been fully clarified, one needs to start production in order to be ready on time to enjoy the physics of the LHC. This means starting coordinated work in more than 25 production and testing sites, located all around the world, that have to produce precision detectors at industrial speed, which sho...

  4. ATLAS-AWS

    International Nuclear Information System (INIS)

    Gehrcke, Jan-Philip; Stonjek, Stefan; Kluth, Stefan

    2010-01-01

    We show how the ATLAS offline software is ported to the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform, Scientific Linux 4 (SL4). Then an instance of the SL4 AMI is started on EC2, and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring and stopping jobs, and retrieving job output from S3 is controlled from a client machine using Python scripts implementing the Amazon EC2/S3 API via the boto library, working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
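
    The abstract names the boto library explicitly, so the control flow on the client machine can be sketched with classic boto calls; the AMI identifier, region and bucket name below are placeholders, not the values used by the authors:

        import boto
        import boto.ec2

        # Launch an instance of the prepared AMI (placeholder ID);
        # credentials are taken from the environment or the boto config.
        ec2 = boto.ec2.connect_to_region("us-east-1")
        reservation = ec2.run_instances("ami-0123456789abcdef0",
                                        instance_type="m1.large")
        instance = reservation.instances[0]

        # ...the AMI boots, attaches the release kit from EBS and runs the
        # job transform; afterwards the job output is pushed to S3:
        s3 = boto.connect_s3()
        bucket = s3.get_bucket("atlas-job-output")   # assumed existing bucket
        key = bucket.new_key("job42/AOD.pool.root")
        key.set_contents_from_filename("AOD.pool.root")

        instance.terminate()  # release the EC2 resources once done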

  5. EnviroAtlas

    Data.gov (United States)

    City and County of Durham, North Carolina — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  6. Modeling Radiation Damage Effects in 3D Pixel Digitization for the ATLAS Detector

    CERN Document Server

    Giugliarelli, Gilberto; The ATLAS collaboration

    2018-01-01

    Silicon pixel detectors are at the core of the current and planned upgrade of the ATLAS experiment. They constitute the part of ATLAS closest to the interaction point and for this reason they will be exposed, over their lifetime, to a significant amount of radiation: prior to the HL-LHC, the innermost layers will receive a fluence of 10^15 neq/cm2, and their HL-LHC upgrades will have to cope with an order of magnitude higher fluence integrated over their lifetimes. This poster presents the details of a new digitization model that includes radiation damage effects in the 3D pixel sensors for the ATLAS detector.

  7. ATLAS copies its first PetaByte out of CERN

    CERN Multimedia

    M. Branco; P. Salgado; L. Goossens; A. Nairz

    2006-01-01

    On 6th August ATLAS reached a major milestone for its Distributed Data Management project - copying its first PetaByte (10^15 bytes) of data out from CERN to computing centers around the world. This achievement is part of the so-called 'Tier-0 exercise' running since 19th June, where simulated fake data is used to exercise the expected data flow within the CERN computing centre and out over the Grid to the Tier-1 computing centers as would happen during the real data taking. The expected rate of data output from CERN when the detector is running at full trigger rate is 780 MB/s shared among 10 external Tier-1 sites(*), amounting to around 8 PetaBytes per year. The idea of the exercise was to try to reach this data rate and sustain it for as long as possible. The exercise was run as part of the LCG's Service Challenges and allowed ATLAS to test successfully the integration of ATLAS software with the LCG middleware services that are used for low level cataloging and the actual data movement. When ATLAS is produ...

  8. Dear ATLAS colleagues,

    CERN Multimedia

    PH Department

    2008-01-01

    We are collecting old pairs of glasses to take out to Mali, where they can be re-used by people there. The price for a pair of glasses can often exceed 3 months salary, so they are prohibitively expensive for many people. If you have any old spectacles you can donate, please put them in the special box in the ATLAS secretariat, bldg.40-4-D01 before the Christmas closure on 19 December so we can take them with us when we leave for Africa at the end of the month. (more details in ATLAS e-news edition of 29 September 2008: http://atlas-service-enews.web.cern.ch/atlas-service-enews/news/news_mali.php) many thanks! Katharine Leney co-driver of the ATLAS car on the Charity Run to Mali

  9. A programmatic view of metadata, metadata services, and metadata flow in ATLAS

    International Nuclear Information System (INIS)

    Malon, D; Albrand, S; Gallas, E; Stewart, G

    2012-01-01

    The volume and diversity of metadata in an experiment of the size and scope of ATLAS are considerable. Even the definition of metadata may seem context-dependent: data that are primary for one purpose may be metadata for another. ATLAS metadata services must integrate and federate information from inhomogeneous sources and repositories, map metadata about logical or physics constructs to deployment and production constructs, provide a means to associate metadata at one level of granularity with processing or decision-making at another, offer a coherent and integrated view to physicists, and support both human use and programmatic access. In this paper we consider ATLAS metadata, metadata services, and metadata flow principally from the illustrative perspective of how disparate metadata are made available to executing jobs and, conversely, how metadata generated by such jobs are returned. We describe how metadata are read, how metadata are cached, and how metadata generated by jobs and the tasks of which they are a part are communicated, associated with data products, and preserved. We also discuss the principles that guide decision-making about metadata storage, replication, and access.

  10. Encoding atlases by randomized classification forests for efficient multi-atlas label propagation.

    Science.gov (United States)

    Zikic, D; Glocker, B; Criminisi, A

    2014-12-01

    We propose a method for multi-atlas label propagation (MALP) based on encoding the individual atlases by randomized classification forests. Most current approaches perform a non-linear registration between all atlases and the target image, followed by a sophisticated fusion scheme. While these approaches can achieve high accuracy, in general they do so at high computational cost. This might negatively affect the scalability to large databases and experimentation. To tackle this issue, we propose to use a small and deep classification forest to encode each atlas individually in reference to an aligned probabilistic atlas, resulting in an Atlas Forest (AF). Our classifier-based encoding differs from current MALP approaches, which represent each point in the atlas either directly as a single image/label value pair, or by a set of corresponding patches. At test time, each AF produces one probabilistic label estimate, and their fusion is done by averaging. Our scheme performs only one registration per target image, achieves good results with a simple fusion scheme, and allows for efficient experimentation. In contrast to standard forest schemes, in which each tree would be trained on all atlases, our approach retains the advantages of the standard MALP framework. The target-specific selection of atlases remains possible, and incorporation of new scans is straightforward without retraining. The evaluation on four different databases shows accuracy within the range of the state of the art at a significantly lower running time. Copyright © 2014 Elsevier B.V. All rights reserved.
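
    The encoding-plus-averaging scheme can be mimicked with off-the-shelf forests. The sketch below trains one small forest per atlas on toy per-voxel features and fuses the probabilistic predictions by averaging; the random arrays stand in for real image features and labels, and the use of scikit-learn is an illustrative substitution for the authors' own forest implementation:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n_voxels, n_feats, n_atlases = 5000, 10, 5

        # Toy per-voxel features (e.g. intensities and coordinates in the
        # aligned probabilistic-atlas frame) and labels for each atlas;
        # all atlases are assumed to share the same label set.
        feats = [rng.normal(size=(n_voxels, n_feats)) for _ in range(n_atlases)]
        labels = [rng.integers(0, 3, size=n_voxels) for _ in range(n_atlases)]

        # One forest per atlas: the "Atlas Forest" encoding.
        forests = [RandomForestClassifier(n_estimators=8).fit(X, y)
                   for X, y in zip(feats, labels)]

        # Label propagation for a new target image: each forest yields a
        # probabilistic label estimate, and the fusion is a plain average.
        target = rng.normal(size=(n_voxels, n_feats))
        probs = np.mean([f.predict_proba(target) for f in forests], axis=0)
        fused_labels = probs.argmax(axis=1)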

  11. Consolidation of cloud computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Cordeiro, Cristovao; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall; Giordano, Domenico

    2017-01-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in resp...

  12. Consolidation of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Cordeiro, Cristovao; Di Girolamo, Alessandro; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall

    2016-01-01

    Throughout the first year of LHC Run 2, ATLAS Cloud Computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS Cloud Computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vac resources, streamlined usage of the High Level Trigger cloud for simulation and reconstruction, extreme scaling on Amazon EC2, and procurement of commercial cloud capacity in Europe. Building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems. ...

  13. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S

    2005-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS:
    - ATLAS Software Week Plenary, 6-10 December 2004
    - North American ATLAS Physics Workshop (Tucson), 20-21 December 2004 (17 talks)
    - Physics Analysis Tools Tutorial (Tucson), 19 December 2004
    - Full Chain Tutorial, 21 September 2004
    - ATLAS Plenary Sessions, 17-18 February 2005 (17 talks)
    Coming soon:
    - ATLAS Tutorial on Electroweak Physics, 14 Feb. 2005
    - Software Workshop, 21-22 February 2005
    Click here to browse WLAP for all ATLAS lectures.

  14. ATLAS DBM Module Qualification

    Energy Technology Data Exchange (ETDEWEB)

    Soha, Aria [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gorisek, Andrej [J. Stefan Inst., Ljubljana (Slovenia); Zavrtanik, Marko [J. Stefan Inst., Ljubljana (Slovenia); Sokhranyi, Grygorii [J. Stefan Inst., Ljubljana (Slovenia); McGoldrick, Garrin [Univ. of Toronto, ON (Canada); Cerv, Matevz [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2014-06-18

    This is a technical scope of work (TSW) between the Fermi National Accelerator Laboratory (Fermilab) and the experimenters of Jozef Stefan Institute, CERN, and University of Toronto who have committed to participate in beam tests to be carried out during the 2014 Fermilab Test Beam Facility program. Chemical Vapour Deposition (CVD) diamond has a number of properties that make it attractive for high energy physics detector applications. Its large band-gap (5.5 eV) and large displacement energy (42 eV/atom) make it a material that is inherently radiation tolerant with very low leakage currents and high thermal conductivity. CVD diamond is being investigated by the RD42 Collaboration for use very close to LHC interaction regions, where the most extreme radiation conditions are found. This document builds on that work and proposes a highly spatially segmented diamond-based luminosity monitor to complement the time-segmented ATLAS Beam Conditions Monitor (BCM) so that, when the Minimum Bias Trigger Scintillators (MBTS) and LUCID (LUminosity measurement using a Cherenkov Integrating Detector) have difficulty functioning, the ATLAS luminosity measurement is not compromised.

  15. Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.

    Science.gov (United States)

    Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku

    2017-07-01

    Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%. Copyright © 2017 Elsevier B.V. All rights reserved.
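
    The two overlap scores quoted above are standard set comparisons between a predicted and a reference mask. A minimal numpy version, written for any pair of equally shaped binary arrays:

        import numpy as np

        def jaccard(a, b):
            """Jaccard index: |A intersect B| / |A union B|."""
            a, b = a.astype(bool), b.astype(bool)
            return (a & b).sum() / float((a | b).sum())

        def dice(a, b):
            """Dice coefficient: 2|A intersect B| / (|A| + |B|),
            related to Jaccard by Dice = 2J / (1 + J)."""
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * (a & b).sum() / float(a.sum() + b.sum())

    Note that the reported averages need not satisfy the pairwise identity Dice = 2J/(1 + J) exactly, since the Jaccard and Dice values are each averaged over cases separately.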

  16. ATLAS people can run!

    CERN Multimedia

    Claudia Marcelloni de Oliveira; Pauline Gagnon

    It must be all the training we are getting every day, running around trying to get everything ready for the start of the LHC next year. This year, the ATLAS runners were in fine form and came in force. Nine ATLAS teams signed up for the 37th Annual CERN Relay Race with six runners per team. Under a blasting sun on Wednesday 23rd May 2007, each team covered the distances of 1000m, 800m, 800m, 500m, 500m and 300m taking the runners around the whole Meyrin site, hills included. A small reception took place in the ATLAS secretariat a week later to award the ATLAS Cup to the best ATLAS team. For the details on this complex calculation which takes into account the age of each runner, their gender and the color of their shoes, see the July 2006 issue of ATLAS e-news. The ATLAS Running Athena Team, the only all-women team enrolled this year, won the much coveted ATLAS Cup for the second year in a row. In fact, they are so good that Peter Schmid and Patrick Fassnacht are wondering about reducing the women's bonus in...

  17. ATLAS Live: Collaborative Information Streams

    Energy Technology Data Exchange (ETDEWEB)

    Goldfarb, Steven [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States); Collaboration: ATLAS Collaboration

    2011-12-23

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  18. ATLAS Live: Collaborative Information Streams

    International Nuclear Information System (INIS)

    Goldfarb, Steven

    2011-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  19. ATLAS Overview Week at Brookhaven

    CERN Multimedia

    Pilcher, J

    Over 200 ATLAS participants gathered at Brookhaven National Laboratory during the first week of June for our annual overview week. Some system communities arrived early and held meetings on Saturday and Sunday, and the detector interface group (DIG) and Technical Coordination also took advantage of the time to discuss issues of interest for all detector systems. Sunday was also marked by a workshop on the possibilities for heavy ion physics with ATLAS. Beginning on Monday, and for the rest of the week, sessions were held in common in the well equipped Berkner Hall auditorium complex. Laptop computers became the norm for presentations and a wireless network kept laptop owners well connected. Most lunches and dinners were held on the lawn outside Berkner Hall. The weather was very cooperative and it was an extremely pleasant setting. This picture shows most of the participants from a view on the roof of Berkner Hall. Technical Coordination and Integration issues started the reports on Monday and became a...

  20. Engineering the ATLAS TAG Browser

    CERN Document Server

    Zhang, Q; The ATLAS collaboration

    2011-01-01

    ELSSI is a web-based event metadata (TAG) browser and event-level selection service for ATLAS. TAGs from all ATLAS physics and Monte Carlo data sets are routinely loaded into Oracle databases as an integral part of event processing. As data volumes increase, more and more sites are joining the distributed TAG data hosting topology. Meanwhile, TAG content and database schemata continue to evolve as new user requirements and additional sources of metadata emerge. All of this has posed many challenges to the development of ELSSI, which must support vast amounts of TAG data while source, content, geographic locations, and user query patterns may change over time. In this paper, we describe some of the challenges encountered in the process of developing ELSSI, and the software engineering strategies adopted to address those challenges. Approaches to management of access to data, browsing, data rendering, query building, query validation, execution, connection management, and communication with auxiliary services a...

  1. ATLAS FTK: Fast Track Trigger

    CERN Document Server

    Volpi, Guido; The ATLAS collaboration

    2015-01-01

    An overview of the ATLAS Fast TracKer processor is presented, reporting the design of the system, its expected performance, and the integration status. The next LHC runs, with a significant increase in instantaneous luminosity, will provide a big challenge to the trigger and data acquisition systems of all the experiments. An intensive use of the tracking information at the trigger level will be important to keep high efficiency for interesting events, despite the increase in multiple p-p collisions per bunch crossing (pile-up). In order to increase the use of tracks within the High Level Trigger (HLT), the ATLAS experiment planned the installation of a hardware processor dedicated to tracking: the Fast TracKer (FTK) processor. The FTK is designed to perform full-scan track reconstruction at every Level-1 accept. To achieve this goal, the FTK uses a fully parallel architecture, with algorithms designed to exploit the computing power of custom VLSI chips, the Associative Memory, as well as modern FPGAs. The FT...

  2. Engineering the ATLAS TAG Browser

    CERN Document Server

    Zhang, Q; The ATLAS collaboration

    2011-01-01

    ELSSI is a web-based event metadata (TAG) browser and event-level selection service for ATLAS. TAGs from all ATLAS physics and Monte Carlo data sets are routinely loaded into Oracle databases as an integral part of event processing. As data volumes increase, more and more sites are joining the distributed TAG data hosting topology[1]. Meanwhile, TAG content and database schemata continue to evolve as new user requirements and additional sources of metadata emerge. All of this has posed many challenges to the development of ELSSI, which must support vast amounts of TAG data while source, content, geographic locations, and user query patterns may change over time. In this paper, we describe some of the challenges encountered in the process of developing ELSSI, and the software engineering strategies adopted to address those challenges. Approaches to management of access to data, browsing, data rendering, query building, query validation, execution, connection management, and communication with auxiliary service...

  3. Mixing and CP violation in the Bs system with ATLAS

    CERN Document Server

    Dearnaley, W; The ATLAS collaboration

    2014-01-01

    A measurement of the B^0_s → J/ψ φ decay parameters, updated to include flavour tagging, is reported, using 4.9 fb^-1 of integrated luminosity collected by the ATLAS detector from pp collisions recorded in 2011.

  4. Recent ATLAS Articles on WLAP

    CERN Multimedia

    J. Herr

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS:
    - ATLAS Physics Workshop, 6-11 June 2005
    - June 2005 ATLAS Week Plenary Session
    Click here to browse WLAP for all ATLAS lectures.

  5. Large scale access tests and online interfaces to ATLAS conditions databases

    International Nuclear Information System (INIS)

    Amorim, A; Lopes, L; Pereira, P; Simoes, J; Soloviev, I; Burckhart, D; Schmitt, J V D; Caprini, M; Kolos, S

    2008-01-01

    The access of the ATLAS Trigger and Data Acquisition (TDAQ) system to the ATLAS Conditions Databases sets strong reliability and performance requirements on the database storage and access infrastructures. Several applications were developed to support the integration of Conditions database access with the online services in TDAQ, including the interface to the Information Services (IS) and to the TDAQ Configuration Databases. The information storage requirements were the motivation for the ONline ASynchronous Interface to COOL (ONASIC), which feeds data from the Information Service (IS) into LCG/COOL databases. ONASIC avoids possible backpressure from the Online Database servers by managing a local cache. In parallel, OKS2COOL was developed to store Configuration Databases into an Offline Database with a history record. The DBStressor application was developed to test and stress the access to the Conditions database using the LCG/COOL interface while operating in an integrated way as a TDAQ application. The performance scaling of simultaneous Conditions database read accesses was studied in the context of the ATLAS High Level Trigger large computing farms. A large set of tests were performed involving up to 1000 computing nodes that simultaneously accessed the LCG central database server infrastructure at CERN.
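
    The caching idea behind ONASIC can be reduced to a read-through cache in a few lines of Python. This is an illustrative sketch, not the actual ONASIC code or the COOL API; the fetch function and the time-to-live policy are assumptions:

        import time

        class CachedConditionsReader:
            """Serve repeated reads locally and go to the database only
            when an entry is missing or stale, shielding the central
            servers from bursts of identical requests."""

            def __init__(self, fetch, ttl=60.0):
                self._fetch = fetch   # function: (folder, iov) -> payload
                self._ttl = ttl       # seconds before a cached entry expires
                self._cache = {}

            def get(self, folder, iov):
                key = (folder, iov)
                hit = self._cache.get(key)
                if hit and time.monotonic() - hit[0] < self._ttl:
                    return hit[1]                   # served from the local cache
                payload = self._fetch(folder, iov)  # single trip to the database
                self._cache[key] = (time.monotonic(), payload)
                return payload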

  6. New format for ATLAS e-news

    CERN Multimedia

    Pauline Gagnon

    ATLAS e-news got a new look! As of November 30, 2007, we have a new format for ATLAS e-news. Please go to: http://atlas-service-enews.web.cern.ch/atlas-service-enews/index.html . ATLAS e-news will now be published on a weekly basis. If you are not an ATLAS collaboration member but still want to know how the ATLAS experiment is doing, we will soon have a version of ATLAS e-news intended for the general public. Information will be sent out in due time.

  7. The Digital Ageing Atlas: integrating the diversity of age-related changes into a unified resource.

    Science.gov (United States)

    Craig, Thomas; Smelick, Chris; Tacutu, Robi; Wuttke, Daniel; Wood, Shona H; Stanley, Henry; Janssens, Georges; Savitskaya, Ekaterina; Moskalev, Alexey; Arking, Robert; de Magalhães, João Pedro

    2015-01-01

    Multiple studies characterizing the human ageing phenotype have been conducted for decades. However, there is no centralized resource in which data on multiple age-related changes are collated. Currently, researchers must consult several sources, including primary publications, in order to obtain age-related data at various levels. To address this and facilitate integrative, system-level studies of ageing we developed the Digital Ageing Atlas (DAA). The DAA is a one-stop collection of human age-related data covering different biological levels (molecular, cellular, physiological, psychological and pathological) that is freely available online (http://ageing-map.org/). Each of the >3000 age-related changes is associated with a specific tissue and has its own page displaying a variety of information, including at least one reference. Age-related changes can also be linked to each other in hierarchical trees to represent different types of relationships. In addition, we developed an intuitive and user-friendly interface that allows searching, browsing and retrieving information in an integrated and interactive fashion. Overall, the DAA offers a new approach to systemizing ageing resources, providing a manually-curated and readily accessible source of age-related changes. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Integration of ROOT Notebooks as a Web-based ATLAS Analysis tool for Public Data Releases and Outreach

    CERN Document Server

    Abah, Anthony

    2016-01-01

    The project worked on the development of a physics analysis and its software under the ROOT framework and Jupyter notebooks for the ATLAS Outreach and Naples teams. This analysis was created in the context of the release of data and Monte Carlo samples by the ATLAS collaboration. The project focuses on the enhancement of the recent opendata.atlas.cern web platform to be used as an educational resource for university students and new researchers. The generated analysis structure and tutorials will be used to extend the participation of students from other locations around the world. We conclude the project with the creation of a complete notebook implementing the so-called W analysis in the C++ language for the mentioned platform.
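
    A notebook cell in such a tutorial reduces to a few PyROOT lines. The file URL below is a placeholder, and the "mini" tree with a met_et branch follows the conventions of the ATLAS open-data tuples; both are assumptions that should be checked against the actual release:

        import ROOT

        # Placeholder URL; real samples are distributed via opendata.atlas.cern.
        f = ROOT.TFile.Open("http://example.org/opendata/sample.root")
        tree = f.Get("mini")  # assumed tree name from the open-data tuples

        h = ROOT.TH1F("met", "Missing transverse energy;E_{T}^{miss} [GeV];Events",
                      50, 0.0, 200.0)
        for event in tree:
            h.Fill(event.met_et / 1000.0)  # MeV to GeV; assumed branch name

        c = ROOT.TCanvas()
        h.Draw()
        c.SaveAs("met.png")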

  9. ATLAS Virtual Visits bringing the world into the ATLAS control room

    CERN Document Server

    AUTHOR|(CDS)2051192; The ATLAS collaboration; Yacoob, Sahal

    2016-01-01

    ATLAS Virtual Visits is a project initiated in 2011 for the Education & Outreach program of the ATLAS Experiment at CERN. Its goal is to promote public appreciation of the LHC physics program and particle physics, in general, through direct dialogue between ATLAS physicists and remote audiences. A Virtual Visit is an IP-based videoconference, coupled with a public webcast and video recording, between ATLAS physicists and remote locations around the world, that typically include high school or university classrooms, Masterclasses, science fairs, or other special events, usually hosted by collaboration members. Over the past two years, more than 10,000 people, from all of the world’s continents, have actively participated in ATLAS Virtual Visits, with many more enjoying the experience from the publicly available webcasts and recordings. We present an overview of our experience and discuss potential development for the future.

  10. Rare and semi-rare decays at ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00213194; The ATLAS collaboration

    2016-01-01

    Measurements of rare $B^0$-meson decay processes performed by the ATLAS experiment at the LHC are reviewed. Particular attention is given to the measurement of the branching ratios of the $B^0_s$ and $B^0_d$ meson decays into a pair of muons with the full Run 1 dataset, corresponding to an integrated luminosity of 25 $\rm{fb^{-1}}$.

  11. The 3rd ATLAS Domestic Standard Problem for Improvement of Safety Analysis Technology

    International Nuclear Information System (INIS)

    Choi, Ki-Yong; Kang, Kyoung-Ho; Park, Yusun; Kim, Jongrok; Bae, Byoung-Uhn; Choi, Nam-Hyun

    2014-01-01

    The third ATLAS DSP (domestic standard problem exercise) was launched at the end of 2012 in response to the strong need for a continuation of the ATLAS DSP. A guillotine break of a main steam line without LOOP at a zero-power condition was selected as the target scenario, and the exercise was successfully completed at the beginning of 2014. In the 3rd ATLAS DSP, comprehensive use was made of the integral effect test data by dividing the analysis into three topics: (1) scale-up, where the extrapolation of ATLAS IET data was investigated; (2) 3D analysis, where the improvement obtainable from 3D modeling was studied; and (3) 1D sensitivity analysis, where the key phenomena affecting the SLB simulation were identified and the best modeling guideline was derived. Through such DSP exercises, it has been possible to effectively utilize high-quality ATLAS experimental data to enhance thermal-hydraulic understanding and to validate the safety analysis codes. A strong human network and technical expertise sharing among the various nuclear experts are also important outcomes of this program.

  12. Experimental Results of A1.2 Test for OECD-ATLAS Project

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Kyoung-Ho; Bae, Byoung-Uhn; Park, Yu-Sun; Kim, Jong-Rok; Choi, Nam-Hyun; Choi, Ki-Yong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In order to meet the international interest in the multiple high-risk design extension conditions (DECs) raised after the Fukushima accident, KAERI (Korea Atomic Energy Research Institute) is operating an OECD/NEA project (hereafter, the OECD-ATLAS project) utilizing a thermal-hydraulic integral effect test facility, ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation). For a prolonged SBO (station blackout) transient of the OECD-ATLAS project, two tests, named A1.1 and A1.2, were selected to be performed. In particular, passive safety systems are considered the most promising alternatives for reinforcing the safety and reliability of an ultimate heat removal system without any operator action during SBO transients. As one of the new safety improvement concepts for mitigating an SBO accident efficiently, the cooling and operational performance of the passive auxiliary feedwater system (PAFS) is investigated in the framework of the OECD-ATLAS project to produce clearer knowledge of the actual phenomena and to provide the best guidelines for accident management. As the second test of the OECD-ATLAS project, the A1.2 test was conducted to simulate a prolonged SBO with asymmetric secondary cooling through the supply of passive auxiliary feedwater only to SG-2. When the collapsed water level of the steam generator reached 25% of the wide-range measurement, PAFS was actuated. PAFS played a key role in cooling down the primary system through heat transfer and natural circulation. With the actuation of PAFS, the fluid temperatures at the core inlet and outlet started to decrease without any excursion of the maximum heater surface temperature in the core. The integral effect test data of the A1.2 test can be used to evaluate the prediction capability of existing safety analysis codes and to identify any code deficiency in an SBO simulation with the operation of a passive system such as PAFS.

  13. Virtual Machine Logbook - Enabling virtualization for ATLAS

    International Nuclear Information System (INIS)

    Yao Yushu; Calafiura, Paolo; Leggett, Charles; Poffet, Julien; Cavalli, Andrea; Frederic, Bapst

    2010-01-01

    ATLAS software has been developed mostly on the CERN linux cluster lxplus or on similar facilities at the experiment Tier 1 centers. The fast rise of virtualization technology has the potential to change this model, turning every laptop or desktop into an ATLAS analysis platform. In the context of the CernVM project we are developing a suite of tools and CernVM plug-in extensions to promote the use of virtualization for ATLAS analysis and software development. The Virtual Machine Logbook (VML), in particular, is an application to organize the work of physicists on multiple projects, log their progress, and speed up "context switches" from one project to another. An important feature of VML is the ability to share the status of a given project with other colleagues with a single click. VML builds upon the save and restore capabilities of mainstream virtualization software like VMware, and provides a technology-independent client interface to them. A lot of emphasis in the design and implementation has gone into optimizing the save and restore process, to make it practical to store many VML entries on a typical laptop disk or to share a VML entry over the network. At the same time, taking advantage of CernVM's plugin capabilities, we are extending the CernVM platform to increase the usability of ATLAS software. For example, we added the ability to start the ATLAS event display on any computer running CernVM simply by clicking a button in a web browser. We want to integrate VML seamlessly with CernVM's unique file system design to distribute ATLAS software efficiently to every physicist's computer. The CernVM File System (CVMFS) downloads files on demand via HTTP and caches them locally for future use. This reduces download sizes by an order of magnitude, making it practical for a developer to work with multiple software releases on a virtual machine.
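
    The on-demand download-and-cache behaviour attributed to CVMFS above can be illustrated with a short sketch. This is a toy model only: the base URL, cache location, and file path are invented, and real CVMFS is a FUSE file system with content-addressed storage and catalogue verification, none of which is reproduced here.

        import os
        import urllib.request

        CACHE_DIR = os.path.expanduser("~/.cvmfs_sketch_cache")  # illustrative cache location

        def fetch(base_url: str, relpath: str) -> str:
            """Return a local path for relpath, downloading it over HTTP only on first use."""
            local = os.path.join(CACHE_DIR, relpath)
            if not os.path.exists(local):  # cache miss: fetch on demand
                os.makedirs(os.path.dirname(local), exist_ok=True)
                urllib.request.urlretrieve(f"{base_url}/{relpath}", local)
            return local  # cache hit: served locally, no network traffic

        # Only files that are actually opened are ever transferred, which is why
        # on-demand delivery cuts download volume versus shipping a full release:
        # path = fetch("http://cvmfs.example.org/atlas", "21.0.77/setup.sh")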

  14. Ultimate Performance of the ATLAS Superconducting Solenoid

    CERN Document Server

    Ruber, R; Kawai, M; Kondo, Y; Doi, Y; Haruyama, T; Haug, F; Kate, H ten; Kondo, T; Pirotte, O; Metselaar, J; Mizumaki, S; Olesen, G; Sbrissa, E; Yamamoto, A

    2007-01-01

    A 2 tesla, 7730 ampere, 39 MJ, 45 mm thin superconducting solenoid, with a 2.3 meter warm bore and a length of 5.3 meters, is installed in the center of the ATLAS detector and has been successfully commissioned. The solenoid shares its cryostat with one of the detector's calorimeters and provides the magnetic field required for the inner detectors to accurately track collision products from the LHC at CERN. After several years of a stepwise construction and test program, the integration of the solenoid 100 meters underground in the ATLAS cavern is complete. Following the on-surface acceptance test, the solenoid is now operated with its final cryogenic, powering and control systems. A re-validation of all essential operating parameters has been completed. The performance and test results of underground operation are reported and compared to those previously measured.

  15. An Oracle-based event index for ATLAS

    Science.gov (United States)

    Gallas, E. J.; Dimitrov, G.; Vasileva, P.; Baranowski, Z.; Canali, L.; Dumitru, A.; Formica, A.; ATLAS Collaboration

    2017-10-01

    The ATLAS EventIndex system has amassed a set of key quantities for a large number of ATLAS events into a Hadoop-based infrastructure for the purpose of providing the experiment with a number of event-wise services. Collecting this data in one place provides the opportunity to investigate various storage formats and technologies, assess which best serve the various use cases, and consider what other benefits alternative storage systems provide. In this presentation we describe how the data are imported into an Oracle RDBMS (relational database management system), the services we have built based on this architecture, and our experience with it. We have indexed about 26 billion real data events thus far and have designed the system to accommodate future data, which is expected at rates of 5 and 20 billion events per year. We have found this system offers outstanding performance for some fundamental use cases. In addition, profiting from the co-location of this data with other complementary metadata in ATLAS, the system has been easily extended to perform essential assessments of data integrity and completeness and to identify event duplication, including at what step in processing the duplication occurred.
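
    The duplicate-detection use case mentioned above amounts to a GROUP BY over event identifiers. The sketch below shows the idea with SQLite standing in for Oracle; the table and column names are invented and do not reflect the actual EventIndex schema.

        import sqlite3

        # Stand-in for the Oracle RDBMS: one row per indexed event, keyed by run
        # number, event number and processing step (schema invented for illustration).
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE event_index (run INTEGER, event INTEGER, proc_step TEXT)")
        con.executemany(
            "INSERT INTO event_index VALUES (?, ?, ?)",
            [(284500, 1, "RAW"), (284500, 1, "AOD"), (284500, 1, "AOD"), (284500, 2, "AOD")],
        )

        # Duplicates are (run, event) pairs indexed more than once within a processing
        # step; grouping by step also reveals *where* the duplication occurred.
        dups = con.execute(
            """SELECT run, event, proc_step, COUNT(*)
               FROM event_index
               GROUP BY run, event, proc_step
               HAVING COUNT(*) > 1"""
        ).fetchall()
        print(dups)  # [(284500, 1, 'AOD', 2)]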

  16. Supporting ATLAS

    CERN Multimedia

    maximilien brice

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator.

  17. Integration of extracellular RNA profiling data using metadata, biomedical ontologies and Linked Data technologies

    Directory of Open Access Journals (Sweden)

    Sai Lakshmi Subramanian

    2015-08-01

    Full Text Available The large diversity and volume of extracellular RNA (exRNA) data that will form the basis of the exRNA Atlas generated by the Extracellular RNA Communication Consortium pose a substantial data integration challenge. We here present the strategy that is being implemented by the exRNA Data Management and Resource Repository, which employs metadata, biomedical ontologies and Linked Data technologies, such as the Resource Description Framework, to integrate a diverse set of exRNA profiles into an exRNA Atlas and enable integrative exRNA analysis. We focus on the following three specific data integration tasks: (a) selection of samples from a virtual biorepository for exRNA profiling and for inclusion in the exRNA Atlas; (b) retrieval of a data slice from the exRNA Atlas for integrative analysis and (c) interpretation of exRNA analysis results in the context of pathways and networks. As exRNA profiling gains wide adoption in the research community, we anticipate that the strategies discussed here will increasingly be required to enable data reuse and to facilitate integrative analysis of exRNA data.

  18. Integration of extracellular RNA profiling data using metadata, biomedical ontologies and Linked Data technologies.

    Science.gov (United States)

    Subramanian, Sai Lakshmi; Kitchen, Robert R; Alexander, Roger; Carter, Bob S; Cheung, Kei-Hoi; Laurent, Louise C; Pico, Alexander; Roberts, Lewis R; Roth, Matthew E; Rozowsky, Joel S; Su, Andrew I; Gerstein, Mark B; Milosavljevic, Aleksandar

    2015-01-01

    The large diversity and volume of extracellular RNA (exRNA) data that will form the basis of the exRNA Atlas generated by the Extracellular RNA Communication Consortium pose a substantial data integration challenge. We here present the strategy that is being implemented by the exRNA Data Management and Resource Repository, which employs metadata, biomedical ontologies and Linked Data technologies, such as Resource Description Framework to integrate a diverse set of exRNA profiles into an exRNA Atlas and enable integrative exRNA analysis. We focus on the following three specific data integration tasks: (a) selection of samples from a virtual biorepository for exRNA profiling and for inclusion in the exRNA Atlas; (b) retrieval of a data slice from the exRNA Atlas for integrative analysis and (c) interpretation of exRNA analysis results in the context of pathways and networks. As exRNA profiling gains wide adoption in the research community, we anticipate that the strategies discussed here will increasingly be required to enable data reuse and to facilitate integrative analysis of exRNA data.
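
    As a concrete illustration of the Linked Data approach described above, the sketch below builds a few RDF metadata triples and retrieves a "data slice" with a SPARQL query, using the Python rdflib package. The namespace, property names, and sample identifiers are invented; the actual exRNA Atlas ontologies are not reproduced here.

        from rdflib import Graph, Literal, Namespace

        # Invented namespace and properties, for illustration only.
        EX = Namespace("http://example.org/exrna/")

        g = Graph()
        g.bind("ex", EX)

        # Describe one exRNA profile with metadata triples: its source sample,
        # the biofluid, and the profiling assay.
        g.add((EX.profile42, EX.derivedFromSample, EX.sample7))
        g.add((EX.sample7, EX.biofluid, Literal("plasma")))
        g.add((EX.profile42, EX.assay, Literal("small RNA-seq")))

        # A "data slice" is then just a query over the graph: all plasma profiles.
        query = """
            PREFIX ex: <http://example.org/exrna/>
            SELECT ?p WHERE { ?p ex:derivedFromSample ?s . ?s ex:biofluid "plasma" . }
        """
        for row in g.query(query):
            print(row.p)  # -> http://example.org/exrna/profile42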

  19. Exploring the human body space: A geographical information system based anatomical atlas

    Directory of Open Access Journals (Sweden)

    Antonio Barbeito

    2016-06-01

    Full Text Available Anatomical atlases allow mapping the anatomical structures of the human body. Early versions of these systems consisted of analogical representations with informative text and labeled images of the human body. With computer systems, digital versions emerged and the third and fourth dimensions were introduced. Consequently, these systems increased their efficiency, allowing more realistic visualizations with improved interactivity and functionality. The 4D atlases allow modeling changes over time in the structures represented. Anatomical atlases based on geographic information system (GIS) environments allow the creation of platforms with a high degree of interactivity and new tools to explore and analyze the human body. In this study we expand the functions of a human body representation system by creating new vector data, topology, and functions, and by improving the user interface. The new prototype emulates a 3D GIS with a topological model of the human body, replicates the information provided by anatomical atlases, and provides a higher level of functionality and interactivity. At this stage, the developed system is intended to be used as an educational tool, and it integrates into the same interface the typical representations of surface and sectional atlases.

  20. Enabling the ATLAS Experiment at the LHC for High Performance Computing

    CERN Document Server

    AUTHOR|(CDS)2091107; Ereditato, Antonio

    In this thesis, I studied the feasibility of running computer data analysis programs from the Worldwide LHC Computing Grid, in particular large-scale simulations of the ATLAS experiment at the CERN LHC, on current general purpose High Performance Computing (HPC) systems. An approach for integrating HPC systems into the Grid is proposed, which has been implemented and tested on the „Todi” HPC machine at the Swiss National Supercomputing Centre (CSCS). Over the course of the test, more than 500000 CPU-hours of processing time have been provided to ATLAS, which is roughly equivalent to the combined computing power of the two ATLAS clusters at the University of Bern. This showed that current HPC systems can be used to efficiently run large-scale simulations of the ATLAS detector and of the detected physics processes. As a first conclusion of my work, one can argue that, in perspective, running large-scale tasks on a few large machines might be more cost-effective than running on relatively small dedicated com...

  1. Three-dimensional stereotactic atlas of the adult human skull correlated with the brain, cranial nerves, and intracranial vasculature.

    Science.gov (United States)

    Nowinski, Wieslaw L; Thaung, Thant Shoon Let; Chua, Beng Choon; Yi, Su Hnin Wut; Ngai, Vincent; Yang, Yili; Chrzan, Robert; Urbanik, Andrzej

    2015-05-15

    Although the adult human skull is a complex and multifunctional structure, a complete, realistic, and stereotactic 3D atlas of it has not yet been created. This work addresses the construction of a 3D interactive atlas of the adult human skull spatially correlated with the brain, cranial nerves, and intracranial vasculature. The process of atlas construction included computed tomography (CT) high-resolution scan acquisition, skull extraction, skull parcellation, 3D disarticulated bone surface modeling, 3D model simplification, brain-skull registration, 3D surface editing, 3D surface naming and color-coding, integration of the CT-derived 3D bony models with the existing brain atlas, and validation. The virtual skull model created is complete with all 29 bones, including the auditory ossicles (among the smallest bones in the body). It contains all typical bony features and landmarks. The created skull model is superior to the existing skull models in terms of completeness, realism, and integration with the brain along with blood vessels and cranial nerves. This skull atlas allows medical students and residents to familiarize themselves with the skull and surrounding anatomy with a few clicks. The atlas is also useful for educators preparing teaching materials, and it may potentially serve as a reference aid in the reading and operating rooms. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. A module concept for the upgrades of the ATLAS pixel system using the novel SLID-ICV vertical integration technology

    CERN Document Server

    Beimforde, M; Macchiolo, A; Moser, H G; Nisius, R; Richter, R H; Weigell, P; 10.1088/1748-0221/5/12/C12025

    2010-01-01

    The presented R&D activity is focused on the development of a new pixel module concept for the foreseen upgrades of the ATLAS detector towards the Super LHC employing thin n-in-p silicon sensors together with a novel vertical integration technology. A first set of pixel sensors with active thicknesses of 75 μm and 150 μm has been produced using a thinning technique developed at the Max-Planck-Institut für Physik (MPP) and the MPI Semiconductor Laboratory (HLL). Charge Collection Efficiency (CCE) measurements of these sensors irradiated with 26 MeV protons up to a particle fluence of 1016neqcm−2 have been performed, yielding higher values than expected from the present radiation damage models. The novel integration technology, developed by the Fraunhofer Institut EMFT, consists of the Solid-Liquid InterDiffusion (SLID) interconnection, being an alternative to the standard solder bump-bonding, and Inter-Chip Vias (ICVs) for routing signals vertically through electronics. This allows for extracting the ...

  3. Consolidation of cloud computing in ATLAS

    Science.gov (United States)

    Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration

    2017-10-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.

  4. Report to users of Atlas

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1996-06-01

    This report contains the following topics: Status of the ATLAS Accelerator; Highlights of Recent Research at ATLAS; Program Advisory Committee; ATLAS User Group Executive Committee; FMA Information Available On The World Wide Web; Conference on Nuclear Structure at the Limits; and Workshop on Experiments with Gammasphere at ATLAS

  5. ATLAS Detector Simulation in the Integrated Simulation Framework applied to the W Boson Mass Measurement

    CERN Document Server

    Ritsch, Elmar; Froidevaux, Daniel; Salzburger, Andreas

    One of the cornerstones for the success of the ATLAS experiment at the Large Hadron Collider (LHC) is a very accurate Monte Carlo detector simulation. However, a limit is being reached regarding the amount of simulated data which can be produced and stored with the computing resources available through the worldwide LHC computing grid (WLCG). The Integrated Simulation Framework (ISF) is a novel approach to detector simulation which enables a more efficient use of these computing resources and thus allows for the generation of more simulated data. Various simulation technologies are combined to allow for faster simulation approaches which are targeted at the specific needs of individual physics studies. Costly full simulation technologies are only used where high accuracy is required by physics analyses, and fast simulation technologies are applied everywhere else. As one of the first applications of the ISF, a new combined simulation approach is developed for the generation of detector calibration samples ...
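
    The dispatch idea, costly full simulation only where a physics analysis needs it and fast simulation everywhere else, can be sketched in a few lines. This is a toy illustration, not ISF code: the particle fields, PDG-ID cuts, and acceptance window below are invented for the example.

        from dataclasses import dataclass

        @dataclass
        class Particle:
            pdg_id: int   # particle type (PDG numbering)
            eta: float    # pseudorapidity
            e: float      # energy in GeV

        def full_sim(p: Particle) -> str:   # stand-in for a Geant4-style full simulation
            return f"full({p.pdg_id})"

        def fast_sim(p: Particle) -> str:   # stand-in for a parametrized fast simulation
            return f"fast({p.pdg_id})"

        def route(p: Particle) -> str:
            # Invented routing rule: electrons and photons inside the tracker
            # acceptance get the costly full simulation, everything else is fast.
            if abs(p.pdg_id) in (11, 22) and abs(p.eta) < 2.5:
                return full_sim(p)
            return fast_sim(p)

        event = [Particle(11, 0.3, 45.0), Particle(211, 3.1, 12.0)]
        print([route(p) for p in event])   # ['full(11)', 'fast(211)']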

  6. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, the proposed scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
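
    The two-stage selection just described can be captured in a short sketch: rank all atlases with a cheap proxy metric, keep an augmented subset sized so the truly relevant atlases survive with high probability, then re-rank only that subset with the expensive metric. The metrics below are placeholders (intensity differences on pre-aligned images); a real implementation would wrap actual affine and deformable registrations.

        import numpy as np

        def cheap_relevance(target, atlas):
            # Stage-1 proxy after a low-cost (e.g. coarse affine) alignment.
            # Placeholder metric: negative mean squared intensity difference.
            return -np.mean((target - atlas) ** 2)

        def refined_relevance(target, atlas):
            # Stage-2 metric after full-fledged (e.g. deformable) registration.
            # Placeholder: the same similarity, standing in for the costly step.
            return -np.mean((target - atlas) ** 2)

        def two_stage_select(target, atlases, augmented_size, fusion_size):
            # Stage 1: rank ALL atlases cheaply, keep an augmented subset.
            scores1 = [cheap_relevance(target, a) for a in atlases]
            augmented = np.argsort(scores1)[::-1][:augmented_size]
            # Stage 2: expensive registration only on the augmented subset,
            # then narrow down to the final fusion set.
            scores2 = {int(i): refined_relevance(target, atlases[i]) for i in augmented}
            return sorted(scores2, key=scores2.get, reverse=True)[:fusion_size]

        rng = np.random.default_rng(0)
        target = rng.random((32, 32))
        atlases = [target + rng.normal(0.0, s, (32, 32)) for s in np.linspace(0.05, 1.0, 40)]
        print(two_stage_select(target, atlases, augmented_size=10, fusion_size=3))  # -> [0, 1, 2]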

  7. Two-stage atlas subset selection in multi-atlas based image segmentation.

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, the proposed scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas

  8. Two-stage atlas subset selection in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2015-01-01

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, the proposed scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors

  9. The performance and development of the ATLAS Inner Detector Trigger

    International Nuclear Information System (INIS)

    Washbrook, A

    2014-01-01

    A description of the ATLAS Inner Detector (ID) software trigger algorithms and the performance of the ID trigger for LHC Run 1 are presented, as well as prospects for a redesign of the tracking algorithms in Run 2. The ID trigger HLT algorithms are essential for a large number of signatures within the ATLAS trigger. During the shutdown, modifications are being made to the LHC machine, to increase both the beam energy and luminosity. This in turn poses significant challenges for the trigger algorithms both in terms of execution time and physics performance. To meet these challenges the ATLAS HLT software is being restructured to run as a single stage rather than in the two distinct levels present during the Run 1 operation. This is allowing the tracking algorithms to be redesigned to make optimal use of the CPU resources available and to integrate new detector systems being added to ATLAS for post-shutdown running. Expected future improvements in the timing and efficiencies of the Inner Detector triggers are also discussed. In addition, potential improvements in the algorithm performance resulting from the additional spacepoint information from the new Insertable B-Layer are presented

  10. Electronics Design and System Integration of the ATLAS New Small Wheels

    CERN Document Server

    Gkountoumis, Panagiotis; The ATLAS collaboration

    2016-01-01

    The upgrades of the LHC accelerator and the experiments in 2019/20 and 2023/24 will increase the luminosity to $2\times10^{34}\,\mathrm{cm^{-2}s^{-1}}$ and $5{-}7\times10^{34}\,\mathrm{cm^{-2}s^{-1}}$, respectively. For the HL-LHC phase, the expected mean number of interactions per bunch crossing will be 55 at $2\times10^{34}\,\mathrm{cm^{-2}s^{-1}}$ and ~140 at $5\times10^{34}\,\mathrm{cm^{-2}s^{-1}}$. This increase drastically impacts the ATLAS trigger and trigger rates. For the ATLAS Muon Spectrometer, a replacement of the innermost endcap stations, the so-called "Small Wheels" operating in a magnetic field, is therefore planned for 2019/20, to maintain a low $p_T$ threshold for single muons and excellent tracking capability in the HL-LHC regime. The New Small Wheels will feature two new detector technologies: resistive Micromegas and small-strip Thin Gap Chambers, comprising a system of ~2.4 million readout channels. Both detector technologies will provide trigger and tracking primitives fully compliant with the post-2024 HL-LHC operation. To allow for some safety margi...

  11. ATLAS Detector Interface Group

    CERN Multimedia

    Mapelli, L

    Originally organised as a sub-system in the DAQ/EF-1 Prototype Project, the Detector Interface Group (DIG) was an information exchange channel between the Detector systems and the Data Acquisition, providing critical detector information for prototype design and detector integration. After the reorganisation of the Trigger/DAQ Project and of Technical Coordination, the necessity to provide an adequate context for the integration of detectors with the Trigger and DAQ led to the organisation of the DIG as one of the activities of Technical Coordination. Such an organisation emphasises the ATLAS-wide coordination of the Trigger and DAQ exploitation aspects, which go beyond the domain of the Trigger/DAQ project itself. As part of Technical Coordination, the DIG provides the natural environment for the common work of Trigger/DAQ and detector experts: a DIG forum for a wide discussion of all the detector and Trigger/DAQ integration issues, and a more restricted DIG group for the practical organisation and implementation o...

  12. Searches for SUSY signals at ATLAS

    CERN Document Server

    Meloni, Federico; The ATLAS collaboration

    2017-01-01

    The High Luminosity-Large Hadron Collider (HL-LHC) is expected to start in 2026 and to provide an integrated luminosity of 3000 fb$^{-1}$ in ten years, a factor of 10 more than what will have been collected by 2023. These high statistics will allow ATLAS to improve searches for new physics at the TeV scale. The search prospects for Supersymmetry are presented, with a programme spanning from strong to electroweak production of sparticles.

  13. The ATLAS Analysis Model

    CERN Multimedia

    Amir Farbin

    The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of details, each targeted for a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits thereby permitting many calibration and alignment tasks, but will be only accessible at particular computing sites with potentially large latency. In contrast, the Analysis...

  14. Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service

    Science.gov (United States)

    Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, therefore exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE) with its nonintrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.
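
    The property that makes the Event Service suitable for preemptible resources, namely that work is committed event by event so a terminated job loses at most the event in flight, can be illustrated with a small sketch. This is not AES code: the processing and upload functions are stand-ins, and the output directory is invented.

        import json
        import os
        import signal
        import sys

        def process(evt: int) -> dict:
            # Stand-in for per-event simulation or reconstruction.
            return {"event": evt, "result": evt * evt}

        def upload(record: dict, outbox: str = "outbox") -> None:
            # Stand-in for streaming the finished event off the worker node.
            os.makedirs(outbox, exist_ok=True)
            with open(os.path.join(outbox, f"evt{record['event']}.json"), "w") as f:
                json.dump(record, f)

        def run(events) -> None:
            # If the opportunistic resource reclaims the node (SIGTERM), exit
            # cleanly: everything already uploaded is safe, and only the event
            # currently being processed is lost.
            signal.signal(signal.SIGTERM, lambda *_: sys.exit(0))
            for evt in events:
                upload(process(evt))  # commit each event as soon as it is done

        run(range(100))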

  15. The silicon microstrip sensors of the ATLAS semiconductor tracker

    Energy Technology Data Exchange (ETDEWEB)

    ATLAS SCT Collaboration; Spieler, Helmuth G.

    2007-04-13

    This paper describes the AC-coupled, single-sided, p-in-n silicon microstrip sensors used in the Semiconductor Tracker (SCT) of the ATLAS experiment at the CERN Large Hadron Collider (LHC). The sensor requirements, specifications and designs are discussed, together with the qualification and quality assurance procedures adopted for their production. The measured sensor performance is presented, both initially and after irradiation to the fluence anticipated after 10 years of LHC operation. The sensors are now successfully assembled within the detecting modules of the SCT, and the SCT tracker is completed and integrated within the ATLAS Inner Detector. Hamamatsu Photonics Ltd. supplied 92.2% of the 15,392 installed sensors, with the remainder supplied by CiS.

  16. The silicon microstrip sensors of the ATLAS semiconductor tracker

    International Nuclear Information System (INIS)

    ATLAS SCT Collaboration; Spieler, Helmuth G.

    2007-01-01

    This paper describes the AC-coupled, single-sided, p-in-n silicon microstrip sensors used in the Semiconductor Tracker (SCT) of the ATLAS experiment at the CERN Large Hadron Collider (LHC). The sensor requirements, specifications and designs are discussed, together with the qualification and quality assurance procedures adopted for their production. The measured sensor performance is presented, both initially and after irradiation to the fluence anticipated after 10 years of LHC operation. The sensors are now successfully assembled within the detecting modules of the SCT, and the SCT tracker is completed and integrated within the ATLAS Inner Detector. Hamamatsu Photonics Ltd. supplied 92.2% of the 15,392 installed sensors, with the remainder supplied by CiS.

  17. Rare and semi-rare decays at ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00213194; The ATLAS collaboration

    2016-01-01

    The measurements of rare $B^0$-meson-decay processes performed by the ATLAS experiment at the LHC are reviewed. Particular attention will be given to the measurement of the branching ratio of the $B^0_s$ and $B^0_d$ meson decays into a pair of muons with the full Run 1 dataset, corresponding to an integrated luminosity of 25 $\rm{fb^{-1}}$.

  18. ATLAS : magnet industrial production Conference MT17

    CERN Multimedia

    2001-01-01

    With overall dimensions of 26 meters in length and 20 meters in diameter, the ATLAS magnet system is the largest integrated superconducting magnet ever built. The system is made up of four superconducting magnets, a power supply, and cryogenics, vacuum, control, and safety systems. The coils are built with aluminum-stabilized NbTi/Cu superconductor, indirectly cooled at 4.5 K by a forced flow of liquid helium.

  19. A digital 3D atlas of the marmoset brain based on multi-modal MRI.

    Science.gov (United States)

    Liu, Cirong; Ye, Frank Q; Yen, Cecil Chern-Chyi; Newman, John D; Glen, Daniel; Leopold, David A; Silva, Afonso C

    2018-04-01

    The common marmoset (Callithrix jacchus) is a New World monkey of growing interest in neuroscience. Magnetic resonance imaging (MRI) is an essential tool to unveil the anatomical and functional organization of the marmoset brain. To facilitate identification of regions of interest, it is desirable to register MR images to an atlas of the brain. However, currently available atlases of the marmoset brain are mainly based on 2D histological data, which are difficult to apply to 3D imaging techniques. Here, we constructed a 3D digital atlas based on high-resolution ex vivo MRI images, including magnetization transfer ratio (a T1-like contrast), T2w images, and multi-shell diffusion MRI. Based on the multi-modal MRI images, we manually delineated 54 cortical areas and 16 subcortical regions on one hemisphere of the brain (the core version). The 54 cortical areas were merged into 13 larger cortical regions according to their locations to yield a coarse version of the atlas, and also parcellated into 106 sub-regions using a connectivity-based parcellation method to produce a refined atlas. Finally, we compared the new atlas set with existing histology atlases and demonstrated its applications in connectome studies, and in resting state and stimulus-based fMRI. The atlas set has been integrated into the widely distributed neuroimaging data analysis software AFNI and SUMA, providing a readily usable multi-modal template space with multi-level anatomical labels (including labels from the Paxinos atlas) that can facilitate various neuroimaging studies of marmosets. Published by Elsevier Inc.

  20. Mapping of unfolding states of integral helical membrane proteins by GPS-NMR and scattering techniques

    DEFF Research Database (Denmark)

    Calcutta, Antonello; Jessen, Christian M; Behrens, Manja Annette

    2012-01-01

    The unfolding of an integral membrane protein, namely the TFE-induced unfolding of KcsA solubilized by the n-dodecyl β-D-maltoside (DDM) surfactant, is investigated by the recently introduced GPS-NMR (Global Protein folding State mapping by multivariate NMR) (Malmendal et al., PLoS ONE 5, e10262 (2010)) along with dynamic light scattering (DLS) and small-angle X-ray scattering (SAXS). GPS-NMR is used as a tool for fast analysis of protein unfolding processes upon external perturbation, and DLS and SAXS are used for further structural characterization of the unfolding states. The combination allows

  1. Ageing test of the ATLAS RPCs at X5-GIF

    International Nuclear Information System (INIS)

    Aielli, G.; Alviggi, M.; Ammosov, V.

    2004-01-01

    An ageing test of three ATLAS production RPC stations is under way at X5-GIF, the CERN irradiation facility. The chamber efficiencies are monitored using cosmic rays triggered by a scintillator hodoscope. Higher statistics measurements are made when the X5 muon beam is available. We report here the measurements of the efficiency versus operating voltage at different source intensities, up to a maximum counting rate of about 700 Hz/cm$^2$. We describe the performance of the chambers during the test up to an overall ageing of 4 ATLAS-equivalent years, corresponding to an integrated charge of 0.12 C/cm$^2$, including a safety factor of 5.

  2. Measurement of the inelastic proton-proton cross section with the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Zenis, Tibor [Comenius University Bratislava (Slovakia); Collaboration: ATLAS Collaboration

    2013-04-15

    A measurement of the inelastic proton-proton cross-section at a centre-of-mass energy of $\sqrt{s}$ = 7 TeV using the ATLAS detector at the Large Hadron Collider is presented. Events are selected by requiring hits in scintillator counters mounted in the forward region of the ATLAS detector, using a dataset corresponding to an integrated luminosity of 20 $\mu\mathrm{b}^{-1}$. In addition, the cross-section is studied as a function of the rapidity gap size measured with the inner detector and calorimetry.

  3. ATLAS Distributed Computing

    CERN Document Server

    Schovancova, J; The ATLAS collaboration

    2011-01-01

    The poster details the different aspects of the ATLAS Distributed Computing experience after the first year of LHC data taking. We describe the performance of the ATLAS distributed computing system and the lessons learned during the 2010 run, pointing out parts of the system which were in good shape and spotting areas which required improvement. Improvements ranged from hardware upgrades of the ATLAS Tier-0 computing pools to improve data distribution rates, to tuning of the FTS channels between CERN and the Tier-1s, to studying data access patterns for Grid analysis in order to improve the global processing rate. We show recent software development driven by operational needs, with emphasis on data management and job execution in the ATLAS production system.

  4. Development and test of the DAQ system for a Micromegas prototype installed into the ATLAS experiment

    CERN Document Server

    Zibell, Andre; The ATLAS collaboration; Bianco, Michele; Martoiu, Victor Sorin

    2015-01-01

    A Micromegas (MM) quadruplet prototype with an active area of 0.5 m$^2$ that adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC RUN-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS compatible ReadOutDriver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used in order to transmit the data after generating valid event fragments to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Soft...

  5. Virtualization of the ATLAS software environment on a shared HPC system

    CERN Document Server

    Gamel, Anton Josef; The ATLAS collaboration

    2017-01-01

    The shared HPC cluster NEMO at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment analogously to a WLCG center. This concept allows to run both data analysis and production on the HPC host system which is connected to the existing Tier2/Tier3 infrastructure. Schedulers of the two clusters were integrated in a dynamic, on-demand way. An automatically generated, fully functional virtual machine image provides access to the local user environment. The performance in the virtualized environment is evaluated for typical High-Energy Physics applications.

  6. Recent results from ATLAS on B Physics and Quarkonia

    International Nuclear Information System (INIS)

    Jones, RWL

    2016-01-01

    Recent results from the extensive programme of heavy flavour and onia studies in ATLAS are presented. These benefit from the very high integrated luminosity collected in the first running period of the LHC at 7 and 8 TeV, and some are now extended to include information from the latest 13 TeV running.

  7. ATLAS Review Office

    CERN Multimedia

    Szeless, B

    The ATLAS internal reviews, be it the mandatory Production Readiness Reviews, the newly installed Production Advancement Reviews, or the increasingly requested Design Reviews, have become a part of our ATLAS culture over the past years. The Activity Systems Status Overviews are, for the time being, a one-time event and should be held for each system as soon as possible to have some meaning. There seems to be a consensus that the reviews have become a useful project tool for the ATLAS management, but even more so for the sub-systems themselves, making achievements as well as possible shortcomings visible. One other recognized byproduct is the increasing cross-talk between the systems, a very important ingredient for letting all the systems profit from the large collective knowledge we dispose of in ATLAS. In the last two months, the first two PARs were organized for the MDT End Caps and the TRT Barrel Modules, both part of the US contribution to the ATLAS Project. Furthermore several different design...

  8. High-Performance Scalable Information Service for the ATLAS Experiment

    International Nuclear Information System (INIS)

    Kolos, S; Boutsioukis, G; Hauser, R

    2012-01-01

    The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces large amounts of status information, used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data taking session, the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. The IS provides access to any information item on request, as well as distributing notifications to all the information subscribers. In the latter case, IS subscribers receive information within a few milliseconds after it is updated. The IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate the subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information
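
    The two access patterns described above, on-request reads and update notifications to subscribers, are the classic publish/subscribe pattern. The toy sketch below shows the pattern in-process; it is not the TDAQ IS API, whose real implementation is a distributed service with C++, Java and Python bindings.

        from collections import defaultdict
        from typing import Any, Callable

        class InfoService:
            """Toy in-memory publish/subscribe information service."""

            def __init__(self) -> None:
                self._items: dict[str, Any] = {}
                self._subs = defaultdict(list)  # item name -> list of callbacks

            def publish(self, name: str, value: Any) -> None:
                self._items[name] = value
                for callback in self._subs[name]:  # push notification on every update
                    callback(name, value)

            def subscribe(self, name: str, callback: Callable[[str, Any], None]) -> None:
                self._subs[name].append(callback)

            def get(self, name: str) -> Any:  # on-request access to any item
                return self._items[name]

        isvc = InfoService()
        isvc.subscribe("HLT.rate", lambda n, v: print(f"update: {n} = {v} Hz"))
        isvc.publish("HLT.rate", 950)  # subscriber is notified immediately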

  9. Berliner Philharmoniker ATLAS visit

    CERN Multimedia

    ATLAS Collaboration

    2017-01-01

    The Berliner Philharmoniker is on tour through Europe. They stopped on 27 June in Geneva for a concert at the Victoria Hall. An ATLAS visit was organised the morning after, led by the ATLAS spokesperson Karl Jakobs (welcome and overview talk) and two ATLAS guides (AVC visit and 3D movie).

  10. Prototype Strip Barrel Modules for the ATLAS ITk Strip Detector

    CERN Document Server

    Sawyer, Craig; The ATLAS collaboration

    2017-01-01

    The module design for the Phase II Upgrade of the new ATLAS Inner Tracker (ITk) detector at the LHC employs integrated low mass assembly using single-sided flexible circuits with readout ASICs and a powering circuit incorporating control and monitoring of HV, LV and temperature on the module. Both readout and powering circuits are glued directly onto the silicon sensor surface resulting in a fully integrated, extremely low radiation length module which simultaneously reduces the material requirements of the local support structure by allowing a reduced width stave structure to be employed. Such a module concept has now been fully demonstrated using so-called ABC130 and HCC130 ASICs fabricated in 130nm CMOS technology to readout ATLAS12 n+-in-p silicon strip sensors. Low voltage powering for these demonstrator modules has been realised by utilising a DCDC powerboard based around the CERN FEAST ASIC. This powerboard incorporates an HV multiplexing switch based on a Panasonic GaN transistor. Control and monitori...

  11. Global Data Grid Efforts for ATLAS

    CERN Multimedia

    Gardner, R.

    2001-01-01

    Over the past two years computational data grids have emerged as a promising new technology for large scale, data-intensive computing required by the LHC experiments, as outlined by the recent "Hoffman" review panel that addressed the LHC computing challenge. The problem essentially is to seamlessly link physicists to petabyte-scale data and computing resources, distributed worldwide, and connected by high-bandwidth research networks. Several new collaborative initiatives in Europe, the United States, and Asia have formed to address the problem. These projects are of great interest to ATLAS physicists and software developers since their objective is to offer tools that can be integrated into the core ATLAS application framework for distributed event reconstruction, Monte Carlo simulation, and data analysis, making it possible for individuals and groups of physicists to share information, data, and computing resources in new ways and at scales not previously attempted. In addition, much of the distributed IT...

  12. Software Validation in ATLAS

    International Nuclear Information System (INIS)

    Hodgkinson, Mark; Seuster, Rolf; Simmons, Brinick; Sherwood, Peter; Rousseau, David

    2012-01-01

    The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We will discuss a number of different strategies used to validate the ATLAS offline software; running the ATLAS framework software, Athena, in a variety of configurations daily on each nightly build via the ATLAS Nightly System (ATN) and Run Time Tester (RTT) systems; the monitoring of these tests and checking the compilation of the software via distributed teams of rotating shifters; monitoring of and follow up on bug reports by the shifter teams and periodic software cleaning weeks to improve the quality of the offline software further.

  13. ATLAS Open Data project

    CERN Document Server

    The ATLAS collaboration

    2018-01-01

    The current ATLAS model of Open Access to recorded and simulated data offers the opportunity to access datasets with a focus on education, training and outreach. This mandate supports the creation of platforms, projects, software, and educational products used all over the planet. We describe the overall status of ATLAS Open Data (http://opendata.atlas.cern) activities, from core ATLAS activities and releases to individual and group efforts, as well as educational programs, and final web or software-based (and hard-copy) products that have been produced or are under development. The relatively large number and heterogeneous use cases currently documented is driving an upcoming release of more data and resources for the ATLAS Community and anyone interested to explore the world of experimental particle physics and the computer sciences through data analysis.

  14. Measurement of CP-violation parameters in decays of $B^0_s\\to J/\\psi\\phi$ with the ATLAS detector

    CERN Document Server

    Maevskiy, Artem; The ATLAS collaboration

    2016-01-01

    A measurement of the CP-violating weak phase $\phi_s$ and the $B^0_s$ meson decay width difference with $B^0_s \to J/\psi\phi$ decays in the ATLAS experiment is presented. It is based on an integrated luminosity of 14.3 fb$^{-1}$ collected by the ATLAS detector from 8 TeV pp collisions at the LHC. The measured values are statistically combined with those from 4.9 fb$^{-1}$ of 7 TeV collision data, yielding an overall Run 1 ATLAS result.

  15. Cartea de Colorat a Experimentului ATLAS - ATLAS Experiment Colouring Book in Romanian

    CERN Multimedia

    Anthony, Katarina

    2018-01-01

    Language: Romanian - The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. It aims to introduce children to the field of High-Energy Physics, as well as the work being carried out by the ATLAS Collaboration. Limba: Română - Cartea de Colorat a Experimentului ATLAS este o carte educativă gratuită, ideală pentru copiii cu vârsta cuprinsă între 5-9 ani. Scopul său este de a introduce copii în domeniul fizicii de înaltă energie, precum și activitatea desfășurată de colaborarea ATLAS.

  16. Performance of the NorduGrid ARC and the Dulcinea Executor in ATLAS Data Challenge 2

    DEFF Research Database (Denmark)

    Kleist, Josva; Eerola, P; Ekelöf, T.

    2004-01-01

    This talk describes the various stages of ATLAS Data Challenge 2 (DC2) as concerns the usage of resources deployed via NorduGrid's Advanced Resource Connector (ARC). It also describes the integration of these resources with the ATLAS production system using the Dulcinea executor. ATLAS Data Challenge 2 (DC2), run in 2004, was designed to be a step forward in distributed data processing. In particular, much of the coordination of task assignment to resources was planned to be delegated to the Grid in its different flavours. An automatic production management system was designed to direct the tasks... participation in ATLAS DC2. This was the first attempt to harness large amounts of strongly heterogeneous resources in various countries for a single collaborative exercise using Grid tools. This talk addresses various issues that arose during different stages of DC2 in this environment: preparation...

  17. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00014247; The ATLAS collaboration; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea

    2017-01-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases, with the first phase online at the end of 2015 and the second phase online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96 GB DDR4 memory. ATLAS simulation with the multithreaded Athena framework (AthenaMT) is a good potential use case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with detai...

  18. Input Mezzanine Card for the Fast Tracker at ATLAS

    CERN Document Server

    Iizawa, Tomoya; The ATLAS collaboration

    2016-01-01

    The Fast Tracker (FTK) is an integral part of the trigger upgrade program for the ATLAS experiment. At LHC Run 2, which started operation in June 2015 at a center-of-mass energy of 13 TeV, the luminosity could reach up to $2\times10^{34}\,\mathrm{cm^{-2}s^{-1}}$, and an average of 40-50 simultaneous proton collisions per beam crossing is expected. The higher luminosity demands a more sophisticated trigger system with increased use of tracking information. The Fast Tracker is a highly parallel hardware system that rapidly finds and reconstructs tracks in the ATLAS inner detector at the triggering stage. This paper focuses on the FTK Input Mezzanine Board, which is the input module of the entire system. The functions of this board are to receive the Insertable B-Layer, pixel, and microstrip data from the ATLAS silicon read-out drivers, perform clustering, and forward the data to its mother board. Mass production and quality control tests of the Mezzanine Boards were completed, and staged installation and commissioning are ongoing. Details of its fun...
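
    The clustering step mentioned above, grouping contiguous fired pixels and reducing each group to a single coordinate before forwarding, can be illustrated in a few lines. This is a software toy of the idea only; on the Input Mezzanine Board the step is implemented in firmware, and the occupancy map below is invented.

        import numpy as np
        from scipy import ndimage

        # Toy pixel occupancy map for one read-out frame: 1 = fired pixel.
        hits = np.zeros((8, 8), dtype=int)
        hits[1, 1] = hits[1, 2] = hits[2, 1] = 1  # one 3-pixel cluster
        hits[5, 6] = 1                            # one isolated hit

        # Group contiguous fired pixels into clusters and reduce each cluster
        # to its centroid, the quantity forwarded downstream.
        labels, n_clusters = ndimage.label(hits)
        centroids = ndimage.center_of_mass(hits, labels, range(1, n_clusters + 1))
        print(n_clusters, centroids)  # 2 [(1.33..., 1.33...), (5.0, 6.0)]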

  19. An Open-Source Label Atlas Correction Tool and Preliminary Results on Huntington's Disease Whole-Brain MRI Atlases.

    Science.gov (United States)

    Forbes, Jessica L; Kim, Regina E Y; Paulsen, Jane S; Johnson, Hans J

    2016-01-01

    The creation of high-quality medical imaging reference atlas datasets with consistent dense anatomical region labels is a challenging task. Reference atlases have many uses in medical image applications and are essential components of atlas-based segmentation tools commonly used for producing personalized anatomical measurements for individual subjects. The process of manual identification of anatomical regions by experts is regarded as a so-called gold standard; however, it is usually impractical because of its labor-intensive costs. Further, as the number of regions of interest increases, these manually created atlases often contain many small, inconsistently labeled, or disconnected regions that need to be identified and corrected. This project proposes an efficient process to drastically reduce the time necessary for manual revision in order to improve atlas label quality. We introduce the LabelAtlasEditor tool, a SimpleITK-based open-source label atlas correction tool distributed within the image visualization software 3D Slicer. LabelAtlasEditor incorporates several 3D Slicer widgets into one consistent interface and provides label-specific correction tools, allowing for rapid identification, navigation, and modification of the small, disconnected erroneous labels within an atlas. The technical details of the implementation and performance of LabelAtlasEditor are demonstrated using an application of improving a set of 20 Huntington's Disease-specific multi-modal brain atlases. Additionally, we present the advantages and limitations of automatic atlas correction. After the correction of atlas inconsistencies and small, disconnected regions, the number of unidentified voxels for each dataset was reduced on average by 68.48%.
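
    The small, disconnected erroneous labels the tool targets can be found automatically with a connected-component pass per label. The sketch below uses scipy.ndimage on a toy 2D label map; it illustrates the detection idea only and is not LabelAtlasEditor code, which works on 3D volumes through SimpleITK.

        import numpy as np
        from scipy import ndimage

        def find_small_fragments(label_map: np.ndarray, min_voxels: int = 10):
            """Return (label, size) pairs for disconnected pieces of any
            anatomical label smaller than min_voxels."""
            fragments = []
            for label in np.unique(label_map):
                if label == 0:  # skip background
                    continue
                components, n = ndimage.label(label_map == label)
                sizes = np.bincount(components.ravel())[1:]  # per-component voxel counts
                fragments += [(int(label), int(s)) for s in sizes if s < min_voxels]
            return fragments

        # Toy 2D "atlas": label 1 has a large region plus a stray one-pixel fragment.
        atlas = np.zeros((10, 10), dtype=int)
        atlas[2:6, 2:6] = 1
        atlas[8, 8] = 1
        print(find_small_fragments(atlas))  # [(1, 1)]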

  20. ATLAS reach for Quarkonium production and polarization measurements

    CERN Document Server

    Etzion, Erez; 8th International Conference on Hyperons, Charm and Beauty Hadrons

    2009-01-01

    The ATLAS detector at CERN's LHC is preparing to take data from the first proton-proton collisions expected in the next few months. We report on the analysis of simulated data samples for the production of the heavy quarkonium states $J/\psi$ and $\Upsilon$, corresponding to an integrated luminosity of 10 pb$^{-1}$ at a center-of-mass energy of 14 TeV, as expected for the early ATLAS data. We review various aspects of prompt quarkonium production at the LHC: the accessible ranges in transverse momentum and pseudorapidity, the spin alignment of the vector states, the separation of the color-octet and color-singlet production mechanisms, and the feasibility of observing radiative $\chi_c$ and $\chi_b$ decays. Strategies for the various measurements are outlined, and methods of separating promptly produced $J/\psi$ and $\Upsilon$ mesons from various backgrounds are discussed.

  1. Design and implementation of the ATLAS TRT front end electronics

    Science.gov (United States)

    Newcomer, Mitch; Atlas TRT Collaboration

    2006-07-01

    The ATLAS TRT subsystem is comprised of 380,000 4 mm straw tube sensors ranging in length from 30 to 80 cm. Polypropylene plastic layers between straws and a xenon-based gas mixture in the straws allow the straws to be used for both tracking and transition radiation detection. Detector-mounted electronics with data sparsification was chosen to minimize the cable plant inside the super-conducting solenoid of the ATLAS inner tracker. The "on detector" environment required a small footprint, low noise, low power and radiation-tolerant readout capable of triggering at rates up to 20 MHz with an analog signal dynamic range of >300 times the discriminator setting. For tracking, a position resolution better than 150 μm requires leading-edge trigger timing with ˜1 ns precision, and for transition radiation detection, a charge collection time long enough to integrate the direct and reflected signal from the unterminated straw tube is needed for position-independent energy measurement. These goals have been achieved employing two custom application-specific integrated circuits (ASICs) and board design techniques that successfully separate analog and digital functionality while providing an integral part of the straw tube shielding.

  2. Design and implementation of the ATLAS TRT front end electronics

    International Nuclear Information System (INIS)

    Newcomer, Mitch

    2006-01-01

    The ATLAS TRT subsystem is comprised of 380,000 4 mm straw tube sensors ranging in length from 30 to 80 cm. Polypropylene plastic layers between straws and a xenon-based gas mixture in the straws allow the straws to be used for both tracking and transition radiation detection. Detector-mounted electronics with data sparsification was chosen to minimize the cable plant inside the super-conducting solenoid of the ATLAS inner tracker. The 'on detector' environment required a small footprint, low noise, low power and radiation-tolerant readout capable of triggering at rates up to 20 MHz with an analog signal dynamic range of >300 times the discriminator setting. For tracking, a position resolution better than 150 μm requires leading-edge trigger timing with ∼1 ns precision, and for transition radiation detection, a charge collection time long enough to integrate the direct and reflected signal from the unterminated straw tube is needed for position-independent energy measurement. These goals have been achieved employing two custom application-specific integrated circuits (ASICs) and board design techniques that successfully separate analog and digital functionality while providing an integral part of the straw tube shielding.

  3. Computational and mathematical methods in brain atlasing.

    Science.gov (United States)

    Nowinski, Wieslaw L

    2017-12-01

    Brain atlases have a wide range of use from education to research to clinical applications. Mathematical methods as well as computational methods and tools play a major role in the process of brain atlas building and developing atlas-based applications. Computational methods and tools cover three areas: dedicated editors for brain model creation, brain navigators supporting multiple platforms, and atlas-assisted specific applications. Mathematical methods in atlas building and developing atlas-aided applications deal with problems in image segmentation, geometric body modelling, physical modelling, atlas-to-scan registration, visualisation, interaction and virtual reality. Here I overview computational and mathematical methods in atlas building and developing atlas-assisted applications, and share my contribution to and experience in this field.

  4. Readout electronics development for the ATLAS silicon tracker

    International Nuclear Information System (INIS)

    Borer, K.; Beringer, J.; Anghinolfi, F.; Aspell, P.; Chilingarov, A.; Jarron, P.; Heijne, E.H.M.; Santiard, J.C.; Verweij, H.; Goessling, C.; Lisowski, B.; Reichold, A.; Bonino, R.; Clark, A.G.; Kambara, H.; La Marra, D.; Leger, A.; Wu, X.; Richeux, J.P.; Taylor, G.N.; Fedotov, M.; Kuper, E.; Velikzhanin, Yu.; Campbell, D.; Murray, P.; Seller, P.

    1995-01-01

    We present the status of the development of the readout electronics for the large area silicon tracker of the ATLAS experiment at the LHC, carried out by the CERN RD2 project. Our basic readout concept is to integrate a fast amplifier, analog memory, sparse data scan circuit and analog-to-digital converter (ADC) on a single VLSI chip. This architecture will provide full analog information of charged particle hits associated unambiguously to one LHC beam crossing, which is expected to be at a frequency of 40 MHz. The expected low occupancy of the ATLAS inner silicon detectors allows us to use a low speed (5 MHz) on-chip ADC with a multiplexing scheme. The functionality of the fast amplifier and analog memory has been demonstrated with various prototype chips. Most recently we have successfully tested improved versions of the amplifier and the analog memory. A piecewise linear ADC has been fabricated and performed satisfactorily up to 5 MHz. A new chip including amplifier, analog memory, memory controller, ADC, and data buffer has been designed and submitted for fabrication and will be tested on a prototype of the ATLAS silicon tracker module with realistic electrical and mechanical constraints. (orig.)
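
    The choice of a low-speed multiplexed ADC follows from simple rate arithmetic: with sparse readout, the required conversion rate is roughly the trigger rate times the channel occupancy times the number of channels multiplexed onto one ADC. The sketch below illustrates this with invented numbers; they are not RD2 design values.

      # Back-of-envelope check of the multiplexed-ADC bandwidth (illustrative
      # assumptions only, not RD2 design values).
      trigger_rate_hz = 100e3      # assumed level-1 accept rate
      occupancy = 0.02             # assumed fraction of channels hit per event
      channels_per_chip = 128      # assumed channels multiplexed onto one ADC

      conversions_per_s = trigger_rate_hz * occupancy * channels_per_chip
      print(f"required: {conversions_per_s/1e6:.2f} Mconversions/s vs 5 MHz ADC")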

  5. Wind Atlas for Egypt

    DEFF Research Database (Denmark)

    The results of a comprehensive, 8-year wind resource assessment programme in Egypt are presented. The objective has been to provide reliable and accurate wind atlas data sets for evaluating the potential wind power output from large electricity-producing wind turbine installations. The regional wind climates of Egypt have been determined by two independent methods: a traditional wind atlas based on observations from more than 30 stations all over Egypt, and a numerical wind atlas based on long-term reanalysis data and a mesoscale model (KAMM). The mean absolute error comparing the two methods is about 10% for two large-scale KAMM domains covering all of Egypt, and typically about 5% for several smaller-scale regional domains. The numerical wind atlas covers all of Egypt, whereas the meteorological stations are concentrated in six regions. The Wind Atlas for Egypt represents a significant step...

  6. Wind Atlas for Egypt

    DEFF Research Database (Denmark)

    Mortensen, Niels Gylling; Said Said, Usama; Badger, Jake

    2006-01-01

    The results of a comprehensive, 8-year wind resource assessment programme in Egypt are presented. The objective has been to provide reliable and accurate wind atlas data sets for evaluating the potential wind power output from large electricity-producing wind turbine installations. The regional wind climates of Egypt have been determined by two independent methods: a traditional wind atlas based on observations from more than 30 stations all over Egypt, and a numerical wind atlas based on long-term reanalysis data and a mesoscale model (KAMM). The mean absolute error comparing the two methods is about 10% for two large-scale KAMM domains covering all of Egypt, and typically about 5% for several smaller-scale regional domains. The numerical wind atlas covers all of Egypt, whereas the meteorological stations are concentrated in six regions. The Wind Atlas for Egypt represents a significant step...
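
    For illustration, the mean absolute error used to compare the two methods can be computed as below; the station wind speeds are invented for the example and are not data from the Egypt study.

      # Relative mean absolute error between observed and modelled mean wind
      # speeds (hypothetical values, in m/s).
      def mean_absolute_error_pct(observed, modelled):
          return 100.0 * sum(abs(m - o) / o
                             for o, m in zip(observed, modelled)) / len(observed)

      obs = [7.2, 5.8, 9.1, 6.4]   # invented station observations
      mod = [7.9, 5.5, 9.8, 6.0]   # invented mesoscale-model predictions
      print(f"MAE: {mean_absolute_error_pct(obs, mod):.1f}%")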

  7. Recent aging studies for the ATLAS transition radiation tracker

    CERN Document Server

    Capéans-Garrido, M; Anghinolfi, F; Arik, E; Baker, O K; Baron, S; Benjamin, D; Bertelsen, H; Bondarenko, V; Bychkov, V; Callahan, J; Cardiel-Sas, L; Catinaccio, A; Cetin, S A; Cwetanski, Peter; Dam, M; Danielsson, H; Dittus, F; Dolgoshein, B; Dressnandt, N; Driouichi, C; Ebenstein, W L; Eerola, Paule Anna Mari; Farthouat, Philippe; Fedin, O; Froidevaux, D; Gagnon, P; Grichkevitch, Y; Grigalashvili, N S; Hajduk, Z; Hansen, P; Kayumov, F; Keener, P T; Kekelidze, G D; Khristatchev, A; Konovalov, S; Koudine, L; Kovalenko, S; Kowalski, T; Kramarenko, V A; Krüger, K; Laritchev, A; Lichard, P; Luehring, F C; Lundberg, B; Maleev, V; Markina, I; McFarlane, K W; Mialkovski, V; Mindur, B; Mitsou, V A; Morozov, S; Munar, A; Muraviev, S; Nadtochy, A; Newcomer, F M; Ogren, H; Oh, S H; Olszowska, J; Passmore, S; Patritchev, S; Peshekhonov, V D; Petti, R; Price, M; Rembser, C; Rohne, O; Romaniouk, A; Rust, D R; Ryabov, Yu; Ryzhov, V; Shchegelskii, V; Seliverstov, D M; Shin, T; Shmeleva, A; Smirnov, S; Sosnovtsev, V V; Soutchkov, V; Spiridenkov, E; Szczygiel, R; Tikhomirov, V; Van Berg, R; Vassilakopoulos, V I; Vassilieva, L; Wang, C; Williams, H H; Zalite, A

    2004-01-01

    The transition radiation tracker (TRT) is one of the three subsystems of the inner detector of the ATLAS experiment. It is designed to operate for 10 yr at the LHC, with integrated charges of ~10 C/cm of wire and radiation doses of about 10 Mrad and 2×10^14 neutrons/cm^2. These doses translate into unprecedented ionization currents and integrated charges for a large-scale gaseous detector. This paper describes studies leading to the adoption of a new ionization gas regime for the ATLAS TRT. In this new regime, the primary gas mixture is 70% Xe - 27% CO2 - 3% O2. It is planned to occasionally flush and operate the TRT detector with an Ar-based ternary mixture, containing a small percentage of CF4, to remove, if needed, silicon pollution from the anode wires. This procedure has been validated in realistic conditions and would require a few days of dedicated operation. This paper covers both performance and aging studies with the new TRT gas mixture. 12 Refs.

  8. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    CERN Document Server

    Van der Ster, D; Medrano Llamas, R; Legger, F; Sciaba, A; Sciacca, G; Ubeda Garcia, M

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion p...

  9. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion ...

  10. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S.

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: the June ATLAS Plenary Meeting; the Tutorial on Physics EDM and Tools (June); the Freiburg Overview Week; and Ketevi Assamagan's Tutorial on Analysis Tools. Browse WLAP for all ATLAS lectures.

  11. Estimate of the neutron fields in ATLAS based on ATLAS-MPX detectors data

    International Nuclear Information System (INIS)

    Bouchami, J; Dallaire, F; Gutierrez, A; Idarraga, J; Leroy, C; Picard, S; Scallon, O; Kral, V; Pospisil, S; Solc, J; Suk, M; Turecek, D; Vykydal, Z; Zemlicka, J

    2011-01-01

    The ATLAS-MPX detectors are based on Medipix2 silicon devices designed by CERN for the detection of different types of radiation. These detectors are covered with converting layers of 6LiF and polyethylene (PE) to increase their sensitivity to thermal and fast neutrons, respectively. These devices allow the measurement of the composition and spectroscopic characteristics of the radiation field in ATLAS, particularly of neutrons. These detectors can operate in low or high preset energy threshold mode. The signature of particles interacting in an ATLAS-MPX detector at low threshold is a cluster of adjacent pixels, with size and form depending on particle type, energy and incidence angle. The classification of particles into different categories can be done using the geometrical parameters of these clusters. The Medipix analysis framework (MAFalda) - based on the ROOT application - allows the recognition of particle tracks left in ATLAS-MPX devices located at various positions in the ATLAS detector and cavern. The pattern recognition obtained from the application of MAFalda was configured to distinguish the response of neutrons from other radiation. The neutron response at low threshold is characterized by clusters of adjoining pixels (heavy tracks and heavy blobs) left by protons and heavy ions resulting from neutron interactions in the converting layers of the ATLAS-MPX devices. The neutron detection efficiency of ATLAS-MPX devices has been determined by exposing two reference detectors to radionuclide neutron sources (252Cf and 241AmBe). With these results, the neutron fields produced at the device locations during ATLAS operation were estimated.
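
    A toy version of the geometry-based classification described above might look like the following; the category names echo the abstract, but the features and thresholds are invented for illustration and are not MAFalda's actual selection.

      # Classify a pixel cluster from its geometry (illustrative thresholds).
      def classify_cluster(pixels):
          """pixels: iterable of (x, y) coordinates of adjacent hit pixels."""
          pts = list(pixels)
          n = len(pts)
          if n == 1:
              return "dot"
          width = max(x for x, _ in pts) - min(x for x, _ in pts) + 1
          height = max(y for _, y in pts) - min(y for _, y in pts) + 1
          elongation = max(width, height) / min(width, height)
          if n <= 4:
              return "small blob"
          # Large, round clusters: protons/heavy ions from neutron conversions.
          return "heavy blob" if elongation < 2.0 else "heavy track"

      print(classify_cluster({(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (1, 2)}))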

  12. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    Science.gov (United States)

    Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration

    2017-10-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases, with the first phase online at the end of 2015 and the second phase at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96 GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.

  13. Teaching science with technology: Using EPA's EnviroAtlas in ...

    Science.gov (United States)

    Background/Question/Methods U.S. EPA’s EnviroAtlas provides a collection of web-based, interactive tools and resources for exploring ecosystem goods and services. EnviroAtlas contains two primary tools: An Interactive Map, which provides access to 300+ maps at multiple extents for the U.S., and an Eco-Health Relationship Browser, which displays evidence from hundreds of scientific publications on the linkages between ecosystems, the services they provide, and human health. EnviroAtlas is readily available, only requires an internet browser to use, and can be used by anyone with some introduction, which this session will provide. This session introduces an educational curriculum that has been designed for use with the tools in EnviroAtlas. The curriculum contains three lesson plan packages for varying grade levels: Exploring Your Watershed for 4th and 5th grades, Making Connections Between Ecosystems and Human Health for 7th-12th grades, and a lesson that encourages students to be collaborative decision-makers in a role-playing exercise that integrates ecology, public health, and city-planning in Building a Greenway Case Study for high school and undergraduate classes. All lesson plans are free and available for download. Results/Conclusions These educational activities encourage critical thinking and engage students and community users in a variety of ways, including physical engagement and technological exploration of their local environment and communities.

  14. Complete distributed computing environment for a HEP experiment: experience with ARC-connected infrastructure for ATLAS

    International Nuclear Information System (INIS)

    Read, A; Taga, A; Ould-Saada, F; Pajchel, K; Samset, B H; Cameron, D

    2008-01-01

    Computing and storage resources connected by the Nordugrid ARC middleware in the Nordic countries, Switzerland and Slovenia are a part of the ATLAS computing Grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in 2008. The unique non-intrusive architecture of ARC, its straightforward interplay with the ATLAS Production System via the Dulcinea executor, and its performance during the commissioning exercise is described. ARC support for flexible and powerful end-user analysis within the GANGA distributed analysis framework is also shown. Whereas the storage solution for this Grid was earlier based on a large, distributed collection of GridFTP-servers, the ATLAS computing design includes a structured SRM-based system with a limited number of storage endpoints. The characteristics, integration and performance of the old and new storage solutions are presented. Although the hardware resources in this Grid are quite modest, it has provided more than double the agreed contribution to the ATLAS production with an efficiency above 95% during long periods of stable operation

  15. Complete distributed computing environment for a HEP experiment: experience with ARC-connected infrastructure for ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Read, A; Taga, A; Ould-Saada, F; Pajchel, K; Samset, B H; Cameron, D [Department of Physics, University of Oslo, P.b. 1048 Blindern, N-0316 Oslo (Norway)], E-mail: a.l.read@fys.uio.no

    2008-07-15

    Computing and storage resources connected by the Nordugrid ARC middleware in the Nordic countries, Switzerland and Slovenia are a part of the ATLAS computing Grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in 2008. The unique non-intrusive architecture of ARC, its straightforward interplay with the ATLAS Production System via the Dulcinea executor, and its performance during the commissioning exercise is described. ARC support for flexible and powerful end-user analysis within the GANGA distributed analysis framework is also shown. Whereas the storage solution for this Grid was earlier based on a large, distributed collection of GridFTP-servers, the ATLAS computing design includes a structured SRM-based system with a limited number of storage endpoints. The characteristics, integration and performance of the old and new storage solutions are presented. Although the hardware resources in this Grid are quite modest, it has provided more than double the agreed contribution to the ATLAS production with an efficiency above 95% during long periods of stable operation.

  16. Persistent Data Layout and Infrastructure for Efficient Selective Retrieval of Event Data in ATLAS

    CERN Document Server

    INSPIRE-00084279; Malon, David

    2011-01-01

    The ATLAS detector at CERN has completed its first full year of recording collisions at 7 TeV, resulting in billions of events and petabytes of data. At these scales, physicists must have the capability to read only the data of interest to their analyses, with the importance of efficient selective access increasing as data taking continues. ATLAS has developed a sophisticated event-level metadata infrastructure and supporting I/O framework allowing event selections by explicit specification, by back navigation, and by selection queries to a TAG database via an integrated web interface. These systems and their performance have been reported on elsewhere. The ultimate success of such a system, however, depends significantly upon the efficiency of selective event retrieval. Supporting such retrieval can be challenging, as ATLAS stores its event data in column-wise orientation using ROOT trees for a number of reasons, including compression considerations, histogramming use cases, and more. For 2011 data, ATLAS wi...
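
    To illustrate the general idea of selective, column-wise access (not ATLAS's actual I/O framework), the sketch below reads only the branches needed for a selection from a ROOT tree using the uproot Python library; the file, tree and branch names and the cut value are hypothetical.

      import uproot

      # Read two columns only, applying the selection at read time rather than
      # loading whole events. (All names and the threshold are hypothetical.)
      with uproot.open("events.root") as f:
          tree = f["CollectionTree"]
          arrays = tree.arrays(["EventNumber", "MissingET"],
                               cut="MissingET > 100.0")
          print(len(arrays["EventNumber"]), "selected events")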

  17. ATLAS@Home looks for CERN volunteers

    CERN Multimedia

    Rosaria Marraffino

    2014-01-01

    ATLAS@Home is a CERN volunteer computing project that runs simulated ATLAS events. As the project ramps up, the project team is looking for CERN volunteers to test the system before planning a bigger promotion for the public.   The ATLAS@home outreach website. ATLAS@Home is a large-scale research project that runs ATLAS experiment simulation software inside virtual machines hosted by volunteer computers. “People from all over the world offer up their computers’ idle time to run simulation programmes to help physicists extract information from the large amount of data collected by the detector,” explains Claire Adam Bourdarios of the ATLAS@Home project. “The ATLAS@Home project aims to extrapolate the Standard Model at a higher energy and explore what new physics may look like. Everything we’re currently running is preparation for next year's run.” ATLAS@Home became an official BOINC (Berkeley Open Infrastructure for Network ...

  18. The Silicon Microstrip Sensors of the ATLAS SemiConductor Tracker

    CERN Document Server

    Ahmad, A; Allport, P P; Alonso, J; Andricek, L; Apsimon, R J; Barr, A J; Bates, R L; Beck, G A; Bell, P J; Belymam, A; Benes, J; Berg, C M; Bernabeu, J; Bethke, S; Bingefors, N; Bizzell, J P; Bohm, J; Brenner, R; Brodbeck, T J; Bruckman De Renstrom, P; Buttar, C M; Campbell, D; Carpentieri, C; Carter, A A; Carter, J R; Charlton, D G; Casse, G-L; Chilingarov, A; Cindro, V; Ciocio, A; Civera, J V; Clark, A G; Colijn, A-P; Costa, M J; Dabrowski, W; Danielsen, K M; Dawson, I; Demirkoz, B; Dervan, P; Dolezal, Z; Dorholt, O; Duerdoth, I P; Dwuznik, M; Eckert, S; Ekelöf, T; Eklund, L; Escobar, C; Fasching, D; Feld, L; Ferguson, D P S; Ferrere, D; Fortin, R; Foster, J M; Fox, H; French, R; Fromant, B P; Fujita, K; Fuster, J; Gadomski, S; Gallop, B J; Garcia, C; Garcia-Navarro, J E; Gibson, M D; Gonzalez, S; Gonzalez-Sevilla, S; Goodrick, M J; Gornicki, E; Green, C; Greenall, A; Grigson, C; Grillo, A A; Grosse-Knetter, J; Haber, C; Handa, T; Hara, K; Harper, R S; Hartjes, F G; Hashizaki, T; Hauff, D; Hessey, N P; Hill, J C; Hollins, T I; Holt, S; Horazdovsky, T; Hornung, M; Hovland, K M; Hughes, G; Huse, T; Ikegami, Y; Iwata, Y; Jackson, J N; Jakobs, K; Jared, R C; Johansen, L G; Jones, R W L; Jones, T J; de Jong, P; Joseph, J; Jovanovic, P; Kaplon, J; Kato, Y; Ketterer, C; Kindervaag, I M; Kodys, P; Koffeman, E; Kohriki, T; Kohout, Z; Kondo, T; Koperny, S; van der Kraaij, E; Kral, V; Kramberger, G; Kudlaty, J; Lacasta, C; Limper, M; Linhart, V; Llosa, G; Lozano, M; Ludwig, I; Ludwig, J; Lutz, G; Macpherson, A; McMahon, S J; Macina, D; Magrath, C A; Malecki, P; Mandic, I; Marti-Garcia, S; Matsuo, T; Meinhardt, J; Mellado, B; Mercer, I J; Mikestikova, M; Mikuz, M; Minano, M; Mistry, J; Mitsou, V; Modesto, P; Mohn, B; Molloy, S D; Moorhead, G; Moraes, A; Morgan, D; Morone, M C; Morris, J; Moser, H-G; Moszczynski, A; Muijs, A J M; Nagai, K; Nakamura, Y; Nakano, I; Nicholson, R; Niinikoski, T; Nisius, R; Ohsugi, T; O'Shea, V; Oye, O K; Parzefall, U; Pater, J R; Pernegger, H; Phillips, P W; Posisil, S; Ratoff, P N; Reznicek, P; Richardson, J D; Richter, R H; Robinson, D; Roe, S; Ruggiero, G; Runge, K; Sadrozinski, H F W; Sandaker, H; Schieck, J; Seiden, A; Shinma, S; Siegrist, J; Sloan, T; Smith, N A; Snow, S W; Solar, M; Solberg, A; Sopko, B; Sospedra, L; Spieler, H; Stanecka, E; Stapnes, S; Stastny, J; Stelzer, F; Stradling, A; Stugu, B; Takashima, R; Tanaka, R; Taylor, G; Terada, S; Thompson, R J; Titov, M; Tomeda, Y; Tovey, D R; Turala, M; Turner, P R; Tyndel, M; Ullan, M; Unno, Y; Vickey, T; Vos, M; Wallny, R; Weilhammer, P; Wells, P S; Wilson, J A; Wolter, M; Wormald, M; Wu, S L; Yamashita, T; Zontar, D; Zsenei, A

    2007-01-01

    This paper describes the AC-coupled, single-sided, p-in-n silicon microstrip sensors used in the SemiConductor Tracker (SCT) of the ATLAS experiment at the CERN Large Hadron Collider (LHC). The sensor requirements, specifications and designs are discussed, together with the qualification and quality assurance procedures adopted for their production. The measured sensor performance is presented, both initially and after irradiation to the fluence anticipated after 10 years of LHC operation. The sensors are now successfully assembled within the detecting modules of the SCT, and the SCT tracker is completed and integrated within the ATLAS Inner Detector. Hamamatsu Photonics Ltd supplied 92.2% of the 15,392 installed sensors, with the remainder supplied by CiS.

  19. The ATLAS Liquid Argon Electromagnetic Calorimeter: Construction, commissioning and selected test beam results

    CERN Document Server

    Hervás, L

    2004-01-01

    The construction of the ATLAS Liquid Argon Electromagnetic Calorimeter has been completed and commissioning is in progress to prepare the cryostats for lowering into the ATLAS pit. After a brief description of the detector, its construction and readout electronics, this paper summarizes results of quality checks (electrical, connectivity) carried out during the integration of the calorimeter wheels into the cryostats. We also present selected results on its performance, such as linearity, energy resolution, timing resolution and uniformity of the energy response, obtained in beam tests with several series modules. 16 Refs.

  20. Implementation of the ATLAS trigger within the ATLAS Multi-Threaded Software Framework AthenaMT

    CERN Document Server

    Wynne, Benjamin; The ATLAS collaboration

    2016-01-01

    We present an implementation of the ATLAS High Level Trigger that provides parallel execution of trigger algorithms within the ATLAS multi-threaded software framework, AthenaMT. This development will enable the ATLAS High Level Trigger to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the High Level Trigger input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that process events independently, executing algorithms sequentially in each process. AthenaMT will provide a fully multi-threaded env...

  1. ATLAS pixel IBL modules construction experience and developments for future upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Gaudiello, A.

    2015-10-01

    The first upgrade of the ATLAS Pixel Detector is the Insertable B-Layer (IBL), installed in May 2014 in the core of ATLAS. Two different silicon sensor technologies, planar n-in-n and 3D, are used. Sensors are connected with the new-generation 130 nm IBM CMOS FE-I4 read-out chip via solder bump-bonds. Production quality control tests were set up to verify and rate the performance of the modules before integration into staves. An overview of module design and construction, the quality control results and the production yield will be discussed, as well as developments foreseen for future detector upgrades.

  2. First Results from the Online Radiation Dose Monitoring System in ATLAS experiment

    CERN Document Server

    Mandić, I; The ATLAS collaboration; Deliyergiyev, M; Gorišek, A; Kramberger, G; Mikuž, M; Franz, S; Hartert, J; Dawson, I; Miyagawa, P; Nicolas, L

    2011-01-01

    High radiation doses which will accumulate in components of the ATLAS experiment during data taking will cause damage to detectors and readout electronics. It is therefore important to continuously monitor the doses to estimate the level of degradation caused by radiation. The online radiation monitoring system measures the ionizing dose in SiO2, the displacement damage in silicon in terms of 1-MeV(Si) equivalent neutron fluence, and the fluence of thermal neutrons at several locations in the ATLAS detector. In this paper, the design of the system, results of measurements, and a comparison of measured integrated doses and fluences with predictions from FLUKA simulation will be shown.

  3. Report to users of ATLAS

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1997-03-01

    This report covers the following topics: (1) status of the ATLAS accelerator; (2) progress in R and D towards a proposal for a National ISOL Facility; (3) highlights of recent research at ATLAS; (4) the move of Gammasphere from LBNL to ANL; (5) the Accelerator Target Development Laboratory; (6) the Program Advisory Committee; (7) the ATLAS User Group Executive Committee; and (8) the ATLAS user handbook, available on the World Wide Web. A brief summary is given for each topic.

  4. Estimate of the neutron fields in ATLAS based on ATLAS-MPX detectors data

    Energy Technology Data Exchange (ETDEWEB)

    Bouchami, J; Dallaire, F; Gutierrez, A; Idarraga, J; Leroy, C; Picard, S; Scallon, O [Universite de Montreal, Montreal, Quebec H3C 3J7 (Canada); Kral, V; Pospisil, S; Solc, J; Suk, M; Turecek, D; Vykydal, Z; Zemlicka, J, E-mail: scallon@lps.umontreal.ca [Institute of Experimental and Applied Physics of the CTU in Prague, Horska 3a/22, CZ-12800 Praha2 - Albertov (Czech Republic)

    2011-01-15

    The ATLAS-MPX detectors are based on Medipix2 silicon devices designed by CERN for the detection of different types of radiation. These detectors are covered with converting layers of 6LiF and polyethylene (PE) to increase their sensitivity to thermal and fast neutrons, respectively. These devices allow the measurement of the composition and spectroscopic characteristics of the radiation field in ATLAS, particularly of neutrons. These detectors can operate in low or high preset energy threshold mode. The signature of particles interacting in an ATLAS-MPX detector at low threshold is a cluster of adjacent pixels, with size and form depending on particle type, energy and incidence angle. The classification of particles into different categories can be done using the geometrical parameters of these clusters. The Medipix analysis framework (MAFalda) - based on the ROOT application - allows the recognition of particle tracks left in ATLAS-MPX devices located at various positions in the ATLAS detector and cavern. The pattern recognition obtained from the application of MAFalda was configured to distinguish the response of neutrons from other radiation. The neutron response at low threshold is characterized by clusters of adjoining pixels (heavy tracks and heavy blobs) left by protons and heavy ions resulting from neutron interactions in the converting layers of the ATLAS-MPX devices. The neutron detection efficiency of ATLAS-MPX devices has been determined by exposing two reference detectors to radionuclide neutron sources (252Cf and 241AmBe). With these results, the neutron fields produced at the device locations during ATLAS operation were estimated.

  5. Estimate of the neutron fields in ATLAS based on ATLAS-MPX detectors data

    Science.gov (United States)

    Bouchami, J.; Dallaire, F.; Gutiérrez, A.; Idarraga, J.; Král, V.; Leroy, C.; Picard, S.; Pospíšil, S.; Scallon, O.; Solc, J.; Suk, M.; Turecek, D.; Vykydal, Z.; Žemlička, J.

    2011-01-01

    The ATLAS-MPX detectors are based on Medipix2 silicon devices designed by CERN for the detection of different types of radiation. These detectors are covered with converting layers of 6LiF and polyethylene (PE) to increase their sensitivity to thermal and fast neutrons, respectively. These devices allow the measurement of the composition and spectroscopic characteristics of the radiation field in ATLAS, particularly of neutrons. These detectors can operate in low or high preset energy threshold mode. The signature of particles interacting in an ATLAS-MPX detector at low threshold is a cluster of adjacent pixels, with size and form depending on particle type, energy and incidence angle. The classification of particles into different categories can be done using the geometrical parameters of these clusters. The Medipix analysis framework (MAFalda) — based on the ROOT application — allows the recognition of particle tracks left in ATLAS-MPX devices located at various positions in the ATLAS detector and cavern. The pattern recognition obtained from the application of MAFalda was configured to distinguish the response of neutrons from other radiation. The neutron response at low threshold is characterized by clusters of adjoining pixels (heavy tracks and heavy blobs) left by protons and heavy ions resulting from neutron interactions in the converting layers of the ATLAS-MPX devices. The neutron detection efficiency of ATLAS-MPX devices has been determined by exposing two reference detectors to radionuclide neutron sources (252Cf and 241AmBe). With these results, the neutron fields produced at the device locations during ATLAS operation were estimated.

  6. ATLAS Colouring Book

    CERN Multimedia

    Anthony, Katarina

    2016-01-01

    The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. It aims to introduce children to the field of High-Energy Physics, as well as the work being carried out by the ATLAS Collaboration.

  7. A very special visit to ATLAS: America's Cup Winner Team Alinghi

    CERN Multimedia

    Jenni, P

    It is an honour for ATLAS to frequently welcome to its cavern and assembly sites VIP visits by Heads of State, Ministers, Directors of Funding Agencies and other political dignitaries. Rarely, however, have we had such an illustrious and competent visitor group as on December 3rd, 2003, when the full Research and Design Team from the Swiss America's Cup Team Alinghi looked at the ATLAS integration work in Halls 180 and 191 and visited Pit-1. The Team was led by 'their' Technical Coordinator Grant Simmer and principal designer Rolf Vrolijk. The Alinghi R&D team spans a very broad range of engineering and management competence; just to list a few of the team's special skills: mechanical and material engineering, electronics and software engineering, sail design, construction management, performance analysis and predictions, and last but not least direct feedback from the actual sailing team (strategist Murray Jones). Amazingly there are a lot of commonalities between Team Alinghi and ATLAS which made...

  8. Alignment of the ATLAS Inner Detector in the LHC Run II

    CERN Document Server

    Barranco Navarro, Laura; The ATLAS collaboration

    2015-01-01

    ATLAS physics goals require excellent resolution and unbiased measurement of all charged-particle kinematic parameters. These depend critically on the layout and performance of the tracking system and on the quality of its offline alignment. ATLAS is equipped with a tracking system built using different technologies, silicon planar sensors (pixel and micro-strip) and gaseous drift tubes, all embedded in a 2 T solenoidal magnetic field. For Run II of the LHC, the system was upgraded with the installation of a new pixel layer, the Insertable B-Layer (IBL). An outline of the track-based alignment approach and its implementation within the ATLAS software will be presented. Special attention will be paid to the integration of the IBL into the alignment framework, to techniques for identifying and eliminating tracking systematics, and to strategies for dealing with time-dependent alignment. Performance from the commissioning with cosmic-ray data and potentially from early LHC Run II proton-proton collisions will be discussed.
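
    As a schematic of what track-based alignment means in practice (not the ATLAS implementation), the sketch below estimates a single module offset by minimising track-hit residuals; the geometry is reduced to one parameter and all numbers are invented.

      # One-parameter toy alignment: for residuals r_i = measured - predicted,
      # the least-squares estimate of the module offset is simply their mean.
      predicted = [1.20, 3.41, 5.62, 7.83]   # track extrapolations (mm), invented
      measured = [1.31, 3.52, 5.70, 7.95]    # recorded hit positions (mm), invented

      residuals = [m - p for p, m in zip(predicted, measured)]
      offset = sum(residuals) / len(residuals)
      print(f"estimated module offset: {offset:.3f} mm")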

  9. Virtualization of the ATLAS software environment on a shared HPC system

    CERN Document Server

    Schnoor, Ulrike; The ATLAS collaboration

    2017-01-01

    High-Performance Computing (HPC) and other research cluster computing resources provided by universities can be useful supplements to the collaboration’s own WLCG computing resources for data analysis and production of simulated event samples. The shared HPC cluster "NEMO" at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment analogously to a WLCG center. The talk describes the concept and implementation of virtualizing the ATLAS software environment to run both data analysis and production on the HPC host system which is connected to the existing Tier-3 infrastructure. Main challenges include the integration into the NEMO and Tier-3 schedulers in a dynamic, on-demand way, the scalability of the OpenStack infrastructure, as well as the automatic generation of a fully functional virtual machine image providing access to the local user environment, the dCache storage element and the parallel file sys...

  10. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden from extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but significantly reduced the computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit
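
    The two-stage selection can be summarised in a few lines; the scoring functions and subset sizes below are placeholders, not the paper's relevance metrics or its rigorously derived augmented-subset size.

      # Two-stage fusion-set selection: cheap preliminary ranking, then
      # full-fledged refinement on the surviving subset (schematic only).
      def two_stage_select(atlases, target, cheap_score, full_score,
                           augmented_size=10, fusion_size=5):
          # Stage 1: rough ranking with a low-cost registration proxy.
          augmented = sorted(atlases, key=lambda a: cheap_score(a, target),
                             reverse=True)[:augmented_size]
          # Stage 2: full registration and relevance metric on the subset only.
          return sorted(augmented, key=lambda a: full_score(a, target),
                        reverse=True)[:fusion_size]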

  11. ATLAS MPGD production status

    CERN Document Server

    Schioppa, Marco; The ATLAS collaboration

    2018-01-01

    Micromegas (MICRO MEsh GAseous Structure) chambers are Micro-Pattern Gaseous Detectors designed to provide high spatial resolution and reasonably good time resolution in highly irradiated environments. In 2007 an ambitious long-term R&D activity was started in the context of the ATLAS experiment, at CERN: the Muon ATLAS Micromegas Activity (MAMMA). After years of tests on prototypes and technology breakthroughs, Micromegas chambers were chosen as tracking detectors for an upgrade of the ATLAS Muon Spectrometer. These novel detectors will be installed in 2020 at the end of the second long shutdown of the Large Hadron Collider, and will serve mainly as precision detectors in the innermost part of the forward ATLAS Muon Spectrometer. Four different types of Micromegas modules, eight layers each, up to 3 m² in area (of unprecedented size), will cover a surface of 150 m² for a total active area of about 1200 m². With this upgrade the ATLAS muon system will maintain the full acceptance of its excellent...

  12. ATLAS' major cooling project

    CERN Multimedia

    2005-01-01

    In 2005, a considerable effort has been put into commissioning the various units of ATLAS' complex cryogenic system. This is in preparation for the imminent cooling of some of the largest components of the detector in their final underground configuration. The liquid helium and nitrogen ATLAS refrigerators in USA 15. Cryogenics plays a vital role in operating massive detectors such as ATLAS. In many ways the liquefied argon, nitrogen and helium are the life-blood of the detector. ATLAS could not function without cryogens that will be constantly pumped via proximity systems to the superconducting magnets and subdetectors. In recent weeks compressors at the surface and underground refrigerators, dewars, pumps, linkages and all manner of other components related to the cryogenic system have been tested and commissioned. Fifty metres underground The helium and nitrogen refrigerators, installed inside the service cavern, are an important part of the ATLAS cryogenic system. Two independent helium refrigerators ...

  13. ATLAS: Exceeding all expectations

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    “One year ago it would have been impossible for us to guess that the machine and the experiments could achieve so much so quickly”, says Fabiola Gianotti, ATLAS spokesperson. The whole chain – from collision to data analysis – has worked remarkably well in ATLAS.   The first LHC proton run undoubtedly exceeded expectations for the ATLAS experiment. “ATLAS has worked very well since the beginning. Its overall data-taking efficiency is greater than 90%”, says Fabiola Gianotti. “The quality and maturity of the reconstruction and simulation software turned out to be better than we expected for this initial stage of the experiment. The Grid is a great success, and right from the beginning it has allowed members of the collaboration all over the world to participate in the data analysis in an effective and timely manner, and to deliver physics results very quickly”. In just a few months of data taking, ATLAS has observed t...

  14. Studying the Electroweak Sector with the ATLAS Detector

    CERN Document Server

    Bittrich, Carsten; The ATLAS collaboration

    2018-01-01

    The large integrated luminosities available at the LHC allow the gauge structure of the electroweak sector of the Standard Model to be tested with the highest precision. In this talk, we review the latest results of the ATLAS collaboration involving di-boson and multi-boson final states, as well as the corresponding limits on anomalous gauge couplings. Moreover, we discuss the electroweak production of vector bosons at 13 TeV. Another approach to testing the consistency of the electroweak sector is via precision measurements. ATLAS has recently published a measurement of the tau polarization in Z events as well as a three-dimensional cross-section measurement of the Drell-Yan process. The latter allows for the extraction of the forward-backward asymmetry, which can be interpreted as a measurement of the weak mixing angle. Both results will be presented and discussed.

  15. Performance of ATLAS RPC Level-1 Muon trigger during the 2015 data taking

    CERN Document Server

    Corradi, Massimo; The ATLAS collaboration

    2016-01-01

    The Level-1 Muon Barrel Trigger is one of the main elements of the event selection of the ATLAS experiment at the Large Hadron Collider. Its input stage consists of an array of processors receiving the full granularity of data from Resistive Plate Chambers in the central area of the ATLAS detector ("Barrel"). The trigger efficiency and the level of synchronisation of its elements with the rest of ATLAS and the LHC clock are crucial figures of merit for this system: many parameters of the constituent RPC detectors and the trigger electronics have to be constantly and carefully checked to ensure the correct functioning of the Level-1 selection. Notwithstanding the complexity of such a large array of integrated RPC detectors, the ATLAS Level-1 system has resumed operations successfully after the past two-year shutdown, performing at levels similar to those of Run 1. We present the inclusive monitoring of the RPC+L1 system that we have developed to characterise the behaviour of the system, using reconstructed muons in events selected by...

  16. Measurement of the multi-jet cross-sections with the ATLAS detector at the LHC

    CERN Document Server

    Zinonos, Zinonas

    Inclusive multi-jet production is studied using the ATLAS detector for proton-proton collisions with a center-of-mass energy of 7 TeV at the Large Hadron Collider at CERN. The data sample corresponds to an integrated luminosity of 2.4 pb^-1, using the first proton-proton data collected by the ATLAS detector in 2010. Results on multi-jet cross sections are presented and compared to both leading-order plus parton-shower Monte Carlo predictions and next-to-leading-order QCD calculations.

  17. Future ATLAS Higgs Studies

    CERN Document Server

    Smart, Ben; The ATLAS collaboration

    2017-01-01

    The High-Luminosity LHC will prove a challenging environment to work in, with, for example, an average pile-up of ⟨μ⟩ = 200 expected. It will however also provide great opportunities for advancing studies of the Higgs boson. The ATLAS detector will be upgraded, and Higgs prospects analyses have been performed to assess the reach of ATLAS Higgs studies in the HL-LHC era. These analyses are presented, as are Run-2 ATLAS di-Higgs analyses for comparison.

  18. Baby brain atlases.

    Science.gov (United States)

    Oishi, Kenichi; Chang, Linda; Huang, Hao

    2018-04-03

    The baby brain is constantly changing due to its active neurodevelopment, and research into the baby brain is one of the frontiers in neuroscience. To help guide neuroscientists and clinicians in their investigation of this frontier, maps of the baby brain, which contain a priori knowledge about neurodevelopment and anatomy, are essential. "Brain atlas" in this review refers to a 3D-brain image with a set of reference labels, such as a parcellation map, as the anatomical reference that guides the mapping of the brain. Recent advancements in scanners, sequences, and motion control methodologies enable the creation of various types of high-resolution baby brain atlases. What is becoming clear is that one atlas is not sufficient to characterize the existing knowledge about the anatomical variations, disease-related anatomical alterations, and the variations in time-dependent changes. In this review, the types and roles of the human baby brain MRI atlases that are currently available are described and discussed, and future directions in the field of developmental neuroscience and its clinical applications are proposed. The potential use of disease-based atlases to characterize clinically relevant information, such as clinical labels, in addition to conventional anatomical labels, is also discussed.

  19. Glance Information System for ATLAS Management

    International Nuclear Information System (INIS)

    Grael, F F; Maidantchik, C; Évora, L H R A; Karam, K; Moraes, L O F; Cirilli, M; Nessi, M; Pommès, K

    2011-01-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers and computer scientists, plus 700 students participate. The management of this teamwork involves several aspects such as institute contributions, employment records, members' appointments, authors' lists, preparation and publication of papers, and speakers' nominations. Previously, most of the information was accessible only by a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task due to the experiment's long lifetime and the turnover of professionals. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents the overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.
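
    The core idea of an intermediate layer that hides each database's technology can be sketched as follows; this illustrates the pattern only and is not Glance's actual design or API.

      from abc import ABC, abstractmethod

      class Backend(ABC):
          """One adapter per underlying database technology."""
          @abstractmethod
          def fetch(self, entity, **filters): ...

      class OracleBackend(Backend):
          def fetch(self, entity, **filters):
              # Real code would build and execute SQL here.
              return [{"entity": entity, "backend": "oracle", **filters}]

      class GenericAccessLayer:
          """Single entry point; callers never see which backend answers."""
          def __init__(self, routes):
              self.routes = routes  # maps entity name -> Backend
          def fetch(self, entity, **filters):
              return self.routes[entity].fetch(entity, **filters)

      layer = GenericAccessLayer({"members": OracleBackend()})
      print(layer.fetch("members", institute="CERN"))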

  20. Glance Information System for ATLAS Management

    Science.gov (United States)

    Grael, F. F.; Maidantchik, C.; Évora, L. H. R. A.; Karam, K.; Moraes, L. O. F.; Cirilli, M.; Nessi, M.; Pommès, K.; ATLAS Collaboration

    2011-12-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers and computer scientists, plus 700 students participate. The management of this teamwork involves several aspects such as institute contributions, employment records, members' appointments, authors' lists, preparation and publication of papers, and speakers' nominations. Previously, most of the information was accessible only by a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task due to the experiment's long lifetime and the turnover of professionals. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents the overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.

  1. Development and test of the DAQ system for a Micromegas prototype to be installed in the ATLAS experiment

    CERN Document Server

    Zibell, Andre; The ATLAS collaboration; Bianco, Michele; Martoiu, Victor Sorin

    2015-01-01

    A Micromegas (MM) quadruplet prototype with an active area of 0.5 m² that adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC RUN-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS compatible Read Out Driver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used in order to transmit the data after generating valid event fragments to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Soft...

  2. The Future of Distributed Computing Systems in ATLAS: Boldly Venturing Beyond Grids

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2018-01-01

    The Production and Distributed Analysis system (PanDA) for the ATLAS experiment at the Large Hadron Collider has seen big changes over the past couple of years to accommodate new types of distributed computing resources: clouds, HPCs, volunteer computers and other external resources. While PanDA was originally designed for fairly homogeneous resources available through the Worldwide LHC Computing Grid, the new resources are heterogeneous, at diverse scales and with diverse interfaces. Up to a fifth of the resources available to ATLAS are of such new types and require special techniques for integration into PanDA. In this talk, we present the nature and scale of these resources. We provide an overview of the various challenges faced, spanning infrastructure, software distribution, workload requirements, scaling requirements, workflow management, data management, network provisioning, and associated software and computing facilities. We describe the strategies for integrating these heterogeneous resources into ...

  3. The Irish Wind Atlas

    Energy Technology Data Exchange (ETDEWEB)

    Watson, R [Univ. College Dublin, Dept. of Electronic and Electrical Engineering, Dublin (Ireland); Landberg, L [Risoe National Lab., Meteorology and Wind Energy Dept., Roskilde (Denmark)

    1999-03-01

    The development work on the Irish Wind Atlas is nearing completion. The Irish Wind Atlas is an updated, improved version of the Irish section of the European Wind Atlas. A map of the Irish wind resource based on a WAsP analysis of the measured data and station descriptions of 27 measuring stations is presented. The results of previously presented WAsP/KAMM runs show good agreement with these results. (au)

  4. O Livro de Colorir da Experiência ATLAS - ATLAS Experiment Colouring Book in Portuguese

    CERN Multimedia

    Anthony, Katarina

    2017-01-01

    Language: Portuguese - The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. It aims to introduce children to the field of High-Energy Physics, as well as the work being carried out by the ATLAS Collaboration. Língua: Português - O Livro de Colorir da Experiência ATLAS é um livro educacional gratuito para descarregar, ideal para crianças dos 5 aos 9 anos de idade. Este livro procura introduzir as crianças ao estudo da Física de Alta-Energia, bem como ao trabalho desenvolvido pela Colaboração ATLAS.

  5. Maľovanka Experiment ATLAS - ATLAS Experiment Colouring Book in Slovak

    CERN Multimedia

    Anthony, Katarina

    2017-01-01

    Language: Slovak - The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. It aims to introduce children to the field of High-Energy Physics, as well as the work being carried out by the ATLAS Collaboration.

  6. ATLAS Deneyi Boyama Kitabı - ATLAS Experiment Colouring Book in Turkish

    CERN Multimedia

    Anthony, Katarina

    2018-01-01

    Language: Turkish - The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. It aims to introduce children to the field of High-Energy Physics, as well as the work being carried out by the ATLAS Collaboration.

  7. Search for Long-lived particles with the ATLAS detector

    CERN Document Server

    Saito, Masahiko; The ATLAS collaboration

    2017-01-01

    Several supersymmetric models predict the production of meta-stable supersymmetric particles. Such particles, if charged, may be detected through disappearing tracks. The poster presents recent results from the disappearing-track analysis based on an integrated luminosity of 36.1 $\\mathrm{fb}^{-1}$ of $pp$ collisions at a centre-of-mass energy of 13 TeV with the ATLAS detector at the LHC.

  8. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    Marjanovic, Marija; The ATLAS collaboration

    2018-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibers to photo-multiplier tubes (PMTs), located in the outer part of the calorimeter. The readout is segmented into about 5000 cells, each one being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of the full readout chain during data taking, a set of calibration sub-systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser, charge injection elements, and an integrator-based readout system. Combined information from all systems makes it possible to monitor and equalize the calorimeter response at each stage of the signal evolution, from scintillation light to digitization. Calibration runs are monitored from a data quality perspective and u...

  9. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    Cortes-Gonzalez, Arely; The ATLAS collaboration

    2017-01-01

    The ATLAS Tile Calorimeter is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes, located in the outer part of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two photomultipliers in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during data taking, a set of calibration systems is used. The calibration system comprises Cesium radioactive sources, laser, charge injection elements and an integrator-based readout system. Combined information from all systems makes it possible to monitor and equalise the calorimeter r...

  10. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    Boumediene, Djamel Eddine; The ATLAS collaboration

    2017-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs). PMT signals are then digitized at 40 MHz, stored on-detector, and only transferred off-detector once the first-level trigger acceptance has been confirmed. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain, a set of calibration systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser, charge injection elements and an integrator b...

  11. ATLAS Distributed Computing in LHC Run2

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increased data rate and the computing demands of Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. The flexible utilization of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access; and the network topology and performance are deeply integrated into the core of the system. Moreover a new data management strategy, based on a defined lifetime for each dataset, has been defin...
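
    The lifetime-based strategy mentioned at the end lends itself to a compact illustration: a dataset becomes a deletion candidate once its declared lifetime has elapsed, unless it has been used recently. This is a minimal sketch; the field names are invented for the example and do not reflect Rucio's actual schema or policy.

        # Sketch of a lifetime-based cleanup decision (hypothetical fields).
        from datetime import datetime, timedelta

        def is_expired(dataset: dict, now: datetime,
                       grace: timedelta = timedelta(days=30)) -> bool:
            """Eligible for deletion once the lifetime has elapsed and the
            dataset has not been accessed within the grace period."""
            expiry = dataset["created_at"] + dataset["lifetime"]
            recently_used = now - dataset["last_accessed"] < grace
            return now > expiry and not recently_used

        ds = {
            "created_at": datetime(2015, 1, 1),
            "lifetime": timedelta(days=365),
            "last_accessed": datetime(2015, 6, 1),
        }
        print(is_expired(ds, now=datetime(2016, 6, 1)))  # True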

  12. The Cerefy Neuroradiology Atlas: a Talairach-Tournoux atlas-based tool for analysis of neuroimages available over the internet.

    Science.gov (United States)

    Nowinski, Wieslaw L; Belov, Dmitry

    2003-09-01

    The article introduces an atlas-assisted method and a tool called the Cerefy Neuroradiology Atlas (CNA), available over the Internet for neuroradiology and human brain mapping. The CNA contains an enhanced, extended, and fully segmented and labeled electronic version of the Talairach-Tournoux brain atlas, including parcelated gyri and Brodmann's areas. To the best of our knowledge, this is the first online, publicly available application with the Talairach-Tournoux atlas. The process of atlas-assisted neuroimage analysis is done in five steps: image data loading, Talairach landmark setting, atlas normalization, image data exploration and analysis, and result saving. Neuroimage analysis is supported by a near-real-time, atlas-to-data warping based on the Talairach transformation. The CNA runs on multiple platforms; is able to process multiple anatomical and functional data sets simultaneously; and provides functions for rapid atlas-to-data registration, interactive structure labeling and annotating, and mensuration. It is also empowered with several unique features, including interactive atlas warping facilitating fine tuning of the atlas-to-data fit, navigation on the triplanar formed by the image data and the atlas, multiple-images-in-one display with interactive atlas-anatomy-function blending, multiple label display, and saving of labeled and annotated image data. The CNA is useful for fast atlas-assisted analysis of neuroimage data sets. It increases accuracy and reduces time in localization analysis of activation regions; facilitates communication of information about interpreted scans from the neuroradiologist to other clinicians and medical students; increases the neuroradiologist's confidence in terms of anatomy and spatial relationships; and serves as a user-friendly, public domain tool for neuroeducation. At present, more than 700 users from five continents have subscribed to the CNA.
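
    The Talairach transformation underlying the near-real-time warping is, in essence, a piecewise-linear rescaling: each axis of the subject brain is stretched so that its landmarks line up with the atlas landmarks. The one-axis sketch below only illustrates the principle (real implementations divide the volume into 12 cuboids), and the landmark values are made up for the example.

        # Piecewise-linear landmark rescaling, the idea behind the Talairach
        # transformation (simplified to one axis; values are illustrative).
        import numpy as np

        def piecewise_rescale(x, subject_landmarks, atlas_landmarks):
            """Map coordinate x from subject space to atlas space, linearly
            within each interval between consecutive landmarks."""
            return np.interp(x, subject_landmarks, atlas_landmarks)

        # Landmarks along y: posterior edge, PC, AC, anterior edge (mm)
        subject_y = [-110.0, -28.0, 0.0, 75.0]
        atlas_y = [-102.0, -23.0, 0.0, 68.0]
        print(piecewise_rescale(-50.0, subject_y, atlas_y))  # about -44.2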

  13. ATLAS Muon Spectrometer Upgrades for the High Luminosity LHC

    CERN Document Server

    Valderanis, Chrysostomos; The ATLAS collaboration

    2015-01-01

    The luminosity of the LHC will increase up to 2x10^34 cm^-2 s^-1 after the long shutdown in 2019 (phase-1 upgrade) and up to 7x10^34 cm^-2 s^-1 after the long shutdown in 2025 (phase-2 upgrade). In order to cope with the increased particle fluxes, upgrades are envisioned for the ATLAS muon spectrometer. At phase-1, the current innermost stations of the ATLAS muon endcap tracking system (the Small Wheels) will be upgraded with 2x4-layer modules of Micromegas detectors, sandwiched by two 4-layer modules of small-strip Thin Gap Chambers on either side. Each 4-layer module of the so-called New Small Wheels covers a surface area of approximately 2 to 3 m2, for a total active area of 1200 m2 for each of the two technologies. On such large-area detectors, the mechanical precision (30 µm along the precision coordinate and 80 µm along the beam) is a key point and must be controlled and monitored along the process of construction and integration. The design and re...

  14. Preparing a new book on ATLAS

    CERN Multimedia

    Claudia Marcelloni de Oliveira

    A book about the ATLAS project and the ATLAS collaboration is going to be published and available for sale in mid-2008. The book is intended to be a symbol of appreciation for all the people from ATLAS institutes, triggering fond memories through photos, interviews, short commentaries and anecdotes about the daily life and milestones encountered while designing, constructing and completing ATLAS. We would like to give you the opportunity to contribute to this project in two different ways: Firstly, please send us the best anecdotes related to ATLAS that you remember. To submit anecdotes, send an email to Claudia.Marcelloni@cern.ch. Secondly, you are invited to participate in our PHOTO COMPETITION. Please send the best photos you have of ATLAS, attached with a description, the location, and the date taken. The categories are: Milestones in the process of designing and building the detector, People at work, and Important gatherings. To submit photos you should go to the CDS page and select ATLAS Photo Competi...

  15. Latest ATLAS results on $\\phi_s$

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00222462; The ATLAS collaboration

    2017-01-01

    New Physics effects beyond the predictions of the Standard Model may manifest in the $CP$-violation of $b$-hadron decays. This paper presents the latest analysis of the $B^0_s \\to J/\\psi\\phi$ decay at the ATLAS experiment, measuring the $CP$-violating phase $\\phi_s$, the decay width $\\Gamma_s$ and the difference of widths between the mass eigenstates $\\Delta\\Gamma_s$. The latest results use an integrated luminosity of 14.3 fb$^{-1}$ collected by the ATLAS detector from $\\sqrt{s}$ = 8 TeV $pp$ collisions at the Large Hadron Collider, and are statistically combined with the results from 4.9 fb$^{-1}$ of $\\sqrt{s}$ = 7 TeV data, leading to: \\begin{eqnarray*} \\phi_s & = & -0.090 \\pm 0.078 \\;\\mathrm{(stat.)} \\pm 0.041 \\;\\mathrm{(syst.)~rad} ,\\;\\;\\\\ \\Delta\\Gamma_s & = & 0.085 \\pm 0.011 \\;\\mathrm{(stat.)} \\pm 0.007 \\;\\mathrm{(syst.)~ps}^{-1} ,\\;\\;\\\\ \\Gamma_s & = & 0.675 \\pm 0.003 \\;\\mathrm{(stat.)} \\pm 0.003 \\;\\mathrm{(syst.)~ps}^{-1}. \\end{eqnarray*} The results are also presented in the form...

  16. ATLAS B-physics potential

    International Nuclear Information System (INIS)

    Smizanska, M.

    2001-01-01

    Studies since 1993 have demonstrated the ability of ATLAS to pursue a wide B physics program. This document presents the latest performance studies, with special stress on lepton identification. B-decays containing several leptons in ATLAS statistically dominate the high-precision measurements. We present new results on physics simulations of CP violation measurements in the B_s^0 → J/ψφ decay and on a novel ATLAS programme on beauty production in central proton-proton collisions at the LHC.

  17. Integrated monitoring of the ATLAS online computing farm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00389536; The ATLAS collaboration; Brasolin, Franco; Fazio, Daniel; Gament, Costin-Eugen; Lee, Christopher; Scannicchio, Diana; Twomey, Matthew Shaun

    2017-01-01

    The online farm of the ATLAS experiment at the LHC, consisting of nearly 4100 PCs with various characteristics, provides configuration and control of the detector and performs the collection, processing, selection and conveyance of event data from the front-end electronics to mass storage. The status and health of every host must be constantly monitored to ensure the correct and reliable operation of the whole online system. This is the first line of defense, which should not only promptly provide alerts in case of failure but, whenever possible, warn of impending issues. The monitoring system should be able to check up to 100000 health parameters and provide alerts on a selected subset. In this paper we present the implementation and validation of our new monitoring and alerting system based on Icinga 2 and Ganglia. We describe how the load distribution and high availability features of Icinga 2 allowed us to have a centralised but scalable system, with a configuration model that allows full flexibility whil...

  18. Integrated monitoring of the ATLAS online computing farm

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration; Fazio, Daniel; Gament, Costin-Eugen; Lee, Christopher; Scannicchio, Diana; Twomey, Matthew Shaun

    2016-01-01

    The online farm of the ATLAS experiment at the LHC, consisting of nearly 4000 PCs with various characteristics, provides configuration and control of the detector and performs the collection, processing, selection and conveyance of event data from the front-end electronics to mass storage. The status and health of every host must be constantly monitored to ensure the correct and reliable operation of the whole online system. This is the first line of defense, which should not only promptly provide alerts in case of failure but, whenever possible, warn of impending issues. The monitoring system should be able to check up to 100000 health parameters and provide alerts on a selected subset. In this paper we present the implementation and validation of our new monitoring and alerting system based on Icinga 2 and Ganglia. We describe how the load distribution and high availability features of Icinga 2 allowed us to have a centralised but scalable system, with a configuration model that allows full flexibility whil...

  19. ATLAS Award for Shield Supplier

    CERN Multimedia

    2004-01-01

    ATLAS technical coordinator Dr. Marzio Nessi presents the ATLAS supplier award to Vojtech Novotny, Director General of Skoda Hute. On 3 November, the ATLAS experiment honoured one of its suppliers, Skoda Hute s.r.o., of Plzen, Czech Republic, for their work on the detector's forward shielding elements. These huge and very massive cylinders surround the beampipe at either end of the detector to block stray particles from interfering with the ATLAS muon chambers. For the shields, Skoda Hute produced 10 cast-iron pieces with a total weight of 780 tonnes at a cost of 1.4 million CHF. Although there are many iron foundries in the CERN member states, only a limited number can produce castings of the necessary size: the large pieces range in weight from 59 to 89 tonnes and are up to 1.5 metres thick. The forward shielding was designed by ATLAS Technical Coordination in close collaboration with the ATLAS groups from the Czech Technical University and Charles University in Prague. The Czech groups a...

  20. Atlas of temporal variations - interdisciplinary scientific work

    Science.gov (United States)

    Gamburtsev, A. G.; Oleinik, O. V.

    2003-04-01

    The year 2002 will culminate in the publication of the third volume of the fundamental interdisciplinary work "Atlas of Temporal Variations in Natural, Anthropogenic and Social Processes", which now will comprise three volumes (1994, 1998, 2002). The Atlas has pooled information on the main peculiarities of process behaviour in various natural and humanitarian spheres over the widest temporal and spatial range. The main scientific goal of the work consists in discovering the behaviour patterns of natural, anthropogenic and social processes and the cause-and-effect links between them. Thus, the Atlas contains an extensive comparative generalisation of vastly different data. For one thing, it is a fundamental work on the law-governed nature of evolution in natural and social spheres; for another, it can be used as a reference book and valuable source of information for research in different directions. The authors seek to treat every piece of information as part of an integrated whole. When analysing the data, we operate on the premise that surrounding nature, society and their elements are open dynamic systems. Systems of this kind exhibit non-linear characteristics and a tendency towards ordered and chaotic behaviour. These features are revealed in the course of the analysis of time series. The data processing procedures applied are unified, all processes being generally expressed in terms of their time series and time-spectral diagrams. The technique is aimed at determining the investigated parameters' rhythms and analysing their evolution. This approach enables us to show the dynamics of processes occurring in absolutely dissimilar objects and to perform their comparative analysis, with particular emphasis placed on rhythms and trends. As a result, successions of illustrations are obtained which form the basis of the Atlas. The Atlas covers processes that occur in objects belonging to the lithosphere, atmosphere, hydrosphere and social sphere as well

  1. TU-CD-BRA-05: Atlas Selection for Multi-Atlas-Based Image Segmentation Using Surrogate Modeling

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for a specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics and, based on such a model, provide guidance for atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate's ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of the fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed based on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbation, and verified with several commonly used surrogates, including MSD, NCC and (N)MI. The derived behaviors of different surrogates in atlas selection and their corresponding performance in the ultimate label estimate were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC by 0.10, with first and third quartiles of (0.83, 0.89) compared to (0.81, 0.89). The derived optimal fusion set size, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize the image-based surrogate metric on atlas selection
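
    The DSC figures quoted above follow the standard definition of the Dice Similarity Coefficient, DSC = 2|A ∩ B| / (|A| + |B|), which ranges from 0 (no overlap) to 1 (identical segmentations). A minimal NumPy version for binary masks:

        # Dice Similarity Coefficient for two binary segmentation masks.
        import numpy as np

        def dice(a: np.ndarray, b: np.ndarray) -> float:
            a = a.astype(bool)
            b = b.astype(bool)
            denom = a.sum() + b.sum()
            if denom == 0:
                return 1.0  # both masks empty: treat as perfect agreement
            return 2.0 * np.logical_and(a, b).sum() / denom

        manual = np.array([[0, 1, 1], [0, 1, 0]])
        auto = np.array([[0, 1, 0], [1, 1, 0]])
        print(round(dice(manual, auto), 3))  # 0.667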

  2. Measurement of the W-boson mass at the ATLAS experiment

    CERN Document Server

    Kivernyk, Oleh; The ATLAS collaboration

    2018-01-01

    We present the results of $W$-boson mass measurements with the ATLAS detector at the LHC, based on the 2011 dataset recorded at a centre-of-mass energy of $\\sqrt{s} = 7$ TeV and corresponding to 4.6 fb$^{-1}$ of integrated luminosity. The selected data sample consists of 7.8$\\times 10^6$ $W\\rightarrow \\mu\\nu$ ...

  3. Radiation Damage Modeling for 3D Pixel Sensors in the ATLAS Detector

    CERN Document Server

    Wallangen, Veronica; The ATLAS collaboration

    2017-01-01

    Silicon pixel detectors are at the core of the current and planned upgrades of the ATLAS detector. As the detectors in closest proximity to the interaction point, they will be subjected to a significant amount of radiation over their lifetime: prior to the HL-LHC, the innermost layers will receive a fluence in excess of 10^15 neq/cm2, and the HL-LHC detector upgrades must cope with an order of magnitude higher fluence integrated over their lifetimes. This poster presents the details of a new digitization model that includes radiation damage effects in the 3D pixel sensors of the ATLAS detector.

  4. Measurements of integrated and differential cross sections for isolated photon pair production in 8 TeV pp collisions at ATLAS

    CERN Document Server

    Saimpert, Matthias; The ATLAS collaboration

    2017-01-01

    A measurement of the production cross section for two isolated photons in proton--proton collisions at a center-of-mass energy of $\\sqrt{s}=8~\\mathrm{TeV}$ is presented. The results are based on an integrated luminosity of 20.2 fb$^{-1}$ recorded by the ATLAS detector at the Large Hadron Collider. The measurement considers photons with pseudorapidities satisfying $|\\eta^{\\gamma}|<2.37$ and transverse energies $E_{\\mathrm{T,1}}^{\\gamma}>40~\\mathrm{GeV}$ and $E_{\\mathrm{T,2}}^{\\gamma}>30~\\mathrm{GeV}$ for the two leading photons ordered in transverse energy produced in the interaction. The background due to hadronic jets and electrons is subtracted using data-driven techniques. The fiducial cross sections are corrected for detector effects and measured differentially as a function of six kinematic observables. The measured cross section integrated within the fiducial volume is $16.8 \\pm 0.8~\\mathrm{pb}$. The data are compared to fixed-order QCD calculations at next-to-leading-order and next-to-next-to-leading-order accuracy as well as next-to-leading-order computation...

  5. Physics Prospects at the HL-LHC with ATLAS

    CERN Document Server

    Duncan, Anna Kathryn; The ATLAS collaboration

    2017-01-01

    The High-Luminosity LHC aims to provide a total integrated luminosity of 3000 fb$^{-1}$ from p-p collisions at $\\sqrt{s}$ = 14 TeV over the course of $\\sim$ 10 years, reaching instantaneous luminosities of up to L = 7.5 $\\times$ 10$^{34}$ cm$^{-2}$s$^{-1}$, corresponding to an average ($\\mu$) of 200 inelastic p-p collisions per bunch crossing. The upgraded ATLAS detector must be able to cope well with increased occupancies and data rates. The performance of the upgrade has been estimated in full simulation studies, assuming expected HL-LHC conditions and a detector configuration intended to maximise physics performance and discovery potential at the HL-LHC. The performance is expected to be similar to what we have now. Simulation studies have been carried out to evaluate the prospects of various benchmark physics analyses to be performed using the upgraded ATLAS detector with the full HL-LHC dataset.

  6. Top Quark Properties Measurements with the ATLAS Experiment

    International Nuclear Information System (INIS)

    Quijada, J A Murillo

    2016-01-01

    Results on recent measurements of top quark properties with the ATLAS experiment at the European Laboratory CERN are shown. The measurements are performed using the full data set recorded during LHC Run-I, consisting of integrated luminosities ∫L dt of 4.6 fb^-1 recorded at a proton-proton collision energy of √s = 7 TeV and 20.3 fb^-1 collected at 8 TeV. The top quark properties discussed include: spin correlation, charge asymmetry, W-boson polarization, color flow, top mass and top width in events with a top and anti-top quark pair (tt̄). An introduction to the LHC and the ATLAS detector is included, along with the latest main results from this experiment. The contents include the current world benchmark results for the different properties and plans for future measurements during the ongoing LHC Run-II. (paper)

  7. Development and Implementation of a Corriedale Ovine Brain Atlas for Use in Atlas-Based Segmentation.

    Directory of Open Access Journals (Sweden)

    Kishan Andre Liyanage

    Full Text Available Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated by the use of image processing computer packages. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson's disease and focal epilepsy. 17 female Corriedale ovine brains were imaged in-vivo in a 1.5T (low-resolution) MRI scanner. 13 of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5-0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues, resulting in Dice Coefficients of 0.0-0.2. We developed a low-resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error, providing an atlas that can be used to guide further research using ovine brains as a model and is hosted online for public access.

  8. Integration of the monitoring and offline analysis systems of the ATLAS hadronic calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Maidantchik, Carmen; Balabram, Luiz Eduardo; Gomes, Andressa Sivollela; Ferreira, Fernando G.; Marroquim, Fernando [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil)

    2011-07-01

    Full text: During ATLAS detector operation, collaborators perform innumerable analyses related to calibration in order to acquire detailed information about the hadronic calorimeter (TileCal) equipment. Through these analyses, it is possible to detect faults that would affect the acquisition of data of physics interest. Some examples of defects are: saturation of reading channels, problems in the digitization of the acquired signal, and a high signal-to-noise ratio (SNR). Since the commissioning period, members of the collaboration between CERN and UFRJ have developed Web systems to support the hard task of monitoring the TileCal equipment. The Tile Commissioning Web System (TCWS) integrates different applications, each one presenting part of the commissioning process. The Web Interface for Shifters (WIS) displays the most recent calibration runs and assists the monitoring of module operation. The TileComm Analysis (TCA) allows access to histograms that represent the status of modules and the functioning of the corresponding channels. The Timeline provides the history of the calibration rounds and the state of all modules in chronological order. The Data Quality Monitoring (DQM) contains the status of the histograms, modules and channels. The E-log stores and displays all reports about calibrations. The Web Monitoring and Calibration System (MCWS) allows the visualization of the most recent channel status of each module. The DCS (Detector Control System) Web System monitors the operation of the module power supplies. After ATLAS operation started, the number of equipment calibrations increased significantly, which prompted the development of a system that would display all previous information in a centralized way. The Dashboard allows the collaborator to easily access the latest runs or to search for specific ones. After selecting a run, it is possible to check the status of each barrel module through a schematic figure, to view the 10 latest status of a certain module, and

  9. Integration of the monitoring and offline analysis systems of the ATLAS hadronic calorimeter

    International Nuclear Information System (INIS)

    Maidantchik, Carmen; Balabram, Luiz Eduardo; Gomes, Andressa Sivollela; Ferreira, Fernando G.; Marroquim, Fernando

    2011-01-01

    Full text: During ATLAS detector operation, collaborators perform innumerable analyses related to calibration in order to acquire detailed information about the hadronic calorimeter (TileCal) equipment. Through these analyses, it is possible to detect faults that would affect the acquisition of data of physics interest. Some examples of defects are: saturation of reading channels, problems in the digitization of the acquired signal, and a high signal-to-noise ratio (SNR). Since the commissioning period, members of the collaboration between CERN and UFRJ have developed Web systems to support the hard task of monitoring the TileCal equipment. The Tile Commissioning Web System (TCWS) integrates different applications, each one presenting part of the commissioning process. The Web Interface for Shifters (WIS) displays the most recent calibration runs and assists the monitoring of module operation. The TileComm Analysis (TCA) allows access to histograms that represent the status of modules and the functioning of the corresponding channels. The Timeline provides the history of the calibration rounds and the state of all modules in chronological order. The Data Quality Monitoring (DQM) contains the status of the histograms, modules and channels. The E-log stores and displays all reports about calibrations. The Web Monitoring and Calibration System (MCWS) allows the visualization of the most recent channel status of each module. The DCS (Detector Control System) Web System monitors the operation of the module power supplies. After ATLAS operation started, the number of equipment calibrations increased significantly, which prompted the development of a system that would display all previous information in a centralized way. The Dashboard allows the collaborator to easily access the latest runs or to search for specific ones. After selecting a run, it is possible to check the status of each barrel module through a schematic figure, to view the 10 latest status of a certain module, and

  10. Taking ATLAS to new heights

    CERN Document Server

    Abha Eli Phoboo, ATLAS experiment

    2013-01-01

    Earlier this month, 51 members of the ATLAS collaboration trekked up to the highest peak in the Atlas Mountains, Mt. Toubkal (4,167m), in North Africa.    The physicists were in Marrakech, Morocco, attending the ATLAS Overview Week (7 - 11 October), which was held for the first time on the African continent. Around 300 members of the collaboration met to discuss the status of the LS1 upgrades and plans for the next run of the LHC. Besides the trek, 42 ATLAS members explored the Saharan sand dunes of Morocco on camels.  Photos courtesy of Patrick Jussel.

  11. Development of a picosecond time-of-flight system in the ATLAS experiment

    International Nuclear Information System (INIS)

    Grabas, Herve

    2013-01-01

    In this thesis, we present a study of the sensitivity to Beyond Standard Model physics brought by the design and installation of picosecond time-of-flight detectors in the forward region of the ATLAS experiment at the LHC. The first part of the thesis presents a study of the sensitivity to the quartic gauge anomalous coupling between the photon and the W boson, using exclusive WW pair production in ATLAS. The event selection is built considering the semi-leptonic decay of the WW pair and the presence of the AFP detector in ATLAS. The second part gives a description of the design of large-area picosecond photo-detectors and of time reconstruction algorithms, with special care given to signal sampling and processing for precision timing. The third part presents the design of SamPic, a custom picosecond readout integrated circuit. At the end, its first results are reported, in particular a world-class 5 ps timing precision in measuring the delay between two fast pulses. (author) [fr]

  12. Steady-State Calculation of the ATLAS Test Facility Using the SPACE Code

    International Nuclear Information System (INIS)

    Kim, Hyoung Tae; Choi, Ki Yong; Kim, Kyung Doo

    2011-01-01

    The Korean nuclear industry is developing a thermal-hydraulic analysis code for safety analysis of pressurized water reactors (PWRs). The new code is called the Safety and Performance Analysis Code for Nuclear Power Plants (SPACE). Several research and industrial organizations, including KAERI (Korea Atomic Energy Research Institute), are participating in the collaboration for the development of the SPACE code. One of the main tasks of KAERI is to carry out separate effect tests (SET) and integral effect tests (IET) for code verification and validation (V and V). The IET has been performed with ATLAS (Advanced Thermal-Hydraulic Test Loop for Accident Simulation), based on the design features of the APR1400 (Advanced Power Reactor of 1400 MWe). In the present work, the SPACE code input deck for ATLAS is developed and used to simulate the steady-state conditions of ATLAS as preliminary work for IET V and V of the SPACE code.

  13. Entre estupros e convenções narrativas: os Cartórios Policiais e seus papéis numa Delegacia de Defesa da Mulher (DDM)

    Directory of Open Access Journals (Sweden)

    Larissa Nadai

    Full Text Available This article aims to highlight the narrative conventions that constitute the official documents produced by the Delegacia de Defesa da Mulher (DDM, Women's Defense Police Station) of Campinas in cases of rape and indecent assault between 2004 and 2005. Taking into account the "grammar" and "lexicons" produced by the civil police, I reflect on the narrative inflections put into practice by this institution when police clerks and chief officers, through their routine writing work, forge terms and produce narrative chains, sequences and textual images. Against the backdrop of the spatiality, noises and silences imposed on the working routines of this police station, I also seek to highlight the expertise, strategies and tactics mobilized by these professionals in the face of the everyday listening/writing dilemmas they confront.

  14. The star-formation histories of early-type galaxies from ATLAS3D

    NARCIS (Netherlands)

    McDermid, Richard M.; Alatalo, Katherine; Blitz, Leo; Bois, Maxime; Bournaud, Frédéric; Bureau, Martin; Cappellari, Michele; Crocker, Alison F.; Davies, Roger L.; Davis, Tim A.; de Zeeuw, P. T.; Duc, Pierre-Alain; Emsellem, Eric; Khochfar, Sadegh; Krajnović, Davor; Kuntschner, Harald; Lablanche, Pierre-Yves; Morganti, Rafaella; Naab, Thorsten; Oosterloo, Tom; Sarzi, Marc; Scott, Nic; Serra, Paolo; Weijmans, Anne-Marie; Young, Lisa M.

    We present an exploration of the integrated stellar populations of early-type galaxies (ETGs) from the ATLAS3D survey. We use two approaches: firstly the application of line-indices interpreted through single stellar population (SSP) models, which provide a single value of age, metallicity and

  15. Brief retrospection on Hungarian school atlases

    Science.gov (United States)

    Klinghammer, István; Jesús Reyes Nuñez, José

    2018-05-01

    The first part of this article is dedicated to the history of Hungarian school atlases to the end of the 1st World War. Although the first maps included in a Hungarian textbook were probably made in 1751, the publication of atlases for schools is dated almost 50 years later, when professor Ézsáiás Budai created his "New School Atlas for elementary pupils" in 1800. This was followed by a long period of 90 years, when the school atlases were mostly translations and adaptations of foreign atlases, the majority of which were made in German-speaking countries. In those years, a school atlas made by a Hungarian astronomer, Antal Vállas, should be highlighted as a prominent independent piece of work. In 1890, a talented cartographer, Manó Kogutowicz founded the Hungarian Geographical Institute, which was the institution responsible for producing school atlases for the different types of schools in Hungary. The professional quality of the school atlases published by his institute was also recognized beyond the Hungarian borders by prizes won in international exhibitions. Kogutowicz laid the foundations of the current Hungarian school cartography: this statement is confirmed in the second part of this article, when three of his school atlases are presented in more detail to give examples of how the pupils were introduced to the basic cartographic and astronomic concepts as well as how different innovative solutions were used on the maps.

  16. Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226583; The ATLAS collaboration; Filipčič, Andrej; Guan, Wen; Tsulaia, Vakhtang; Walker, Rodney; Wenaus, Torre

    2017-01-01

    With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from the resources that comprise the Grid computing of most experiments, therefore exploiting these resources requires a change in strategy for the experiment. The resources may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The ARC CE with its non-intrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the Event Service primarily to address the issue of jobs that can be terminated at any point when opportunistic resources are needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in...
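
    The core idea of the Event Service, that a job killed without warning should lose at most the event in flight, can be pictured as fine-grained checkpointing. The code below is a conceptual illustration only, not the actual Event Service implementation; the file name and granularity are arbitrary.

        # Conceptual sketch: process events one at a time, persisting progress
        # so a terminated job can resume with minimal lost work.
        import json, os

        CHECKPOINT = "progress.json"  # illustrative local checkpoint file

        def load_done() -> set:
            if os.path.exists(CHECKPOINT):
                with open(CHECKPOINT) as f:
                    return set(json.load(f))
            return set()

        def save_done(done: set) -> None:
            with open(CHECKPOINT, "w") as f:
                json.dump(sorted(done), f)

        def process(event_id: int) -> None:
            pass  # stand-in for the real per-event payload

        done = load_done()
        for event_id in range(100):
            if event_id in done:
                continue  # already processed before a previous termination
            process(event_id)
            done.add(event_id)
            save_done(done)  # outputs could likewise be uploaded immediately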

  17. Exploiting Opportunistic Resources for ATLAS with ARC CE and the Event Service

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2016-01-01

    With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from the resources that comprise the Grid computing of most experiments, therefore exploiting these resources requires a change in strategy for the experiment. The resources may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The ARC CE with its non-intrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the Event Service primarily to address the issue of jobs that can be terminated at any point when opportunistic resources are needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in...

  18. The Layout and Performance of the Phase-II upgrade of the tracking detector of the ATLAS experiment

    CERN Document Server

    Ai, Xiaocong; The ATLAS collaboration

    2017-01-01

    The HL-LHC will deliver about 3000 fb-1 of integrated luminosity over about 10 years. This will present an extremely challenging environment to the ATLAS experiment, well beyond that for which it was designed. In the ATLAS Phase-II upgrade, the Inner Detector will be replaced by a new all-silicon Inner Tracker (ITk) to maintain tracking performance in this high-occupancy environment and to cope with the increase of approximately a factor of ten in the integrated radiation dose. The ITk detector layout is designed to meet the requirement of identifying charged particles with high efficiency and measuring their properties with high precision in this denser environment. The layout and performance of the ITk are presented.

  19. ATLAS Maintenance and Operation management system

    CERN Document Server

    Copy, B

    2007-01-01

    The maintenance and operation of the ATLAS detector will involve thousands of contributors from 170 physics institutes. Planning and coordinating the actions of ATLAS members, ensuring their expertise is properly leveraged, and ensuring that no parts of the detector are understaffed or overstaffed will be a challenging task. The ATLAS Maintenance and Operation application (referred to as Operation Task Planner inside the ATLAS experiment) offers a fluent web-based interface that combines the flexibility and comfort of a desktop application and intuitive data visualization and navigation techniques with a lightweight service-oriented architecture. We will review the application, its usage within the ATLAS experiment, and its underlying design and implementation.

  20. Taus at ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Demers, Sarah M. [Yale Univ., New Haven, CT (United States). Dept. of Physics

    2017-12-06

    The grant "Taus at ATLAS" supported the group of Sarah Demers at Yale University over a period of 8.5 months, bridging the time between her Early Career Award and her inclusion on Yale's grant cycle within the Department of Energy's Office of Science. The work supported the functioning of the ATLAS Experiment at CERN's Large Hadron Collider and the analysis of ATLAS data. The work included searching for the Higgs boson in a particular mode of its production (with a W or Z boson) and decay (to a pair of tau leptons). This was part of a broad program of characterizing the Higgs boson as we try to understand this recently discovered particle, and whether or not it matches our expectations within the current standard model of particle physics. In addition, group members worked with simulation to understand the physics reach of planned upgrades to the ATLAS experiment. Supported group members include postdoctoral researcher Lotte Thomsen and graduate student Mariel Pettee.

  1. Searches for electroweak production of supersymmetric gauginos and sleptons with the ATLAS detector

    CERN Document Server

    Kourkoumeli-Charalampidi, Athina; The ATLAS collaboration

    2017-01-01

    The latest results on the electroweak production of supersymmetric particles are presented. The searches are based on an integrated luminosity of 36.1 fb^{-1} of pp collisions collected at \\sqrt{s} = 13 TeV by the ATLAS experiment at the LHC.

  2. Soft QCD at CMS and ATLAS

    CERN Document Server

    Starovoitov, Pavel; The ATLAS collaboration

    2018-01-01

    A short overview of recent soft QCD results from the ATLAS and CMS collaborations is presented. The inelastic cross section measurement by CMS at 13 TeV is summarised. The contribution of diffractive processes to the very forward photon spectra studied by ATLAS and LHCf is discussed. The ATLAS measurement of the exclusive two-photon production of muon pairs is presented and compared to previous ATLAS and CMS results.

  3. ATLAS B-physics potential

    CERN Document Server

    Smizanska, M

    2001-01-01

    Studies since 1993 have demonstrated the ability of ATLAS to pursue a wide B physics program. This document presents the latest performance studies, with special stress on lepton identification. B-decays containing several leptons in ATLAS statistically dominate the high-precision measurements. We present new results on physics simulations of CP violation measurements in the B_s^0 → J/ψφ decay and on a novel ATLAS programme on beauty production in central proton-proton collisions at the LHC. (7 refs).

  4. ATLAS. LHC experiments

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    In Greek mythology, Atlas was a Titan who had to hold up the heavens with his hands as a punishment for having taken part in a revolt against the Olympians. For LHC, the ATLAS detector will also have an onerous physics burden to bear, but this is seen as a golden opportunity rather than a punishment. The major physics goal of CERN's LHC proton-proton collider is the quest for the long-awaited 'Higgs' mechanism which drives the spontaneous symmetry breaking of the electroweak Standard Model picture. The large ATLAS collaboration proposes a large general-purpose detector to exploit the full discovery potential of LHC's proton collisions. LHC will provide proton-proton collision luminosities at the awe-inspiring level of 10^34 cm^-2 s^-1, with initial running at 10^33. The ATLAS philosophy is to handle as many signatures as possible at all luminosity levels, with the initial running providing more complex possibilities. The ATLAS concept was first presented as a Letter of Intent to the LHC Committee in November 1992. Following initial presentations at the Evian meeting ('Towards the LHC Experimental Programme') in March of that year, two ideas for general-purpose detectors, the ASCOT and EAGLE schemes, merged, with Friedrich Dydak (MPI Munich) and Peter Jenni (CERN) as ATLAS co-spokesmen. Since the initial Letter of Intent presentation, the ATLAS design has been optimized and developed, guided by physics performance studies and the LHC-oriented detector R&D programme (April/May, page 3). The overall detector concept is characterized by an inner superconducting solenoid (for inner tracking) and large superconducting air-core toroids outside the calorimetry. This solution avoids constraining the calorimetry while providing a high-resolution, large-acceptance and robust detector. The outer magnet will extend over a length of 26 metres, with an outer diameter of almost 20 metres. The total weight of the detector is 7,000 tonnes. Fitted with its end

  5. Three-dimensional stereotactic atlas of the extracranial vasculature correlated with the intracranial vasculature, cranial nerves, skull and muscles.

    Science.gov (United States)

    Nowinski, Wieslaw L; Shoon Let Thaung, Thant; Choon Chua, Beng; Hnin Wut Yi, Su; Yang, Yili; Urbanik, Andrzej

    2015-04-01

    Our objective was to construct a 3D, interactive reference atlas of the extracranial vasculature spatially correlated with the intracranial blood vessels, cranial nerves, skull, glands, and head muscles. The atlas has been constructed from multiple 3T and 7T magnetic resonance angiogram (MRA) brain scans, and 3T phase contrast and inflow MRA neck scans of the same specimen, in the following steps: vessel extraction from the scans, building 3D tubular models of the vessels, spatial registration of the extra- and intracranial vessels, vessel editing, vessel naming and color-coding, vessel simplification, and atlas validation. This new atlas contains 48 names of extracranial vessels (25 arterial and 23 venous) and has been integrated with the existing brain atlas. The atlas is valuable for medical students and residents, who can easily familiarize themselves with the extracranial vasculature with a few clicks; is useful for educators preparing teaching materials; and can potentially serve as a reference in the diagnosis and treatment of vascular disease, including craniomaxillofacial surgeries and radiologic interventions of the face and neck.

  6. ATLAS Grid Workflow Performance Optimization

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration

    2018-01-01

    The CERN ATLAS experiment grid workflow system routinely manages 250,000 to 500,000 concurrently running production and analysis jobs to process simulation and detector data. In total, more than 300 PB of data are distributed over more than 150 sites in the WLCG. At this scale, small improvements in software and computing performance and workflows can lead to significant resource usage gains. ATLAS is reviewing, together with CERN IT experts, several typical simulation and data processing workloads for potential performance improvements in terms of memory and CPU usage, disk and network I/O. All ATLAS production and analysis grid jobs are instrumented to collect many performance metrics for detailed statistical studies using modern data analytics tools like ElasticSearch and Kibana. This presentation will review and explain the performance gains of several ATLAS simulation and data processing workflows and present analytics studies of the ATLAS grid workflows.
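
    As a flavour of the kind of analytics described, one recurring metric is per-workflow CPU efficiency (CPU time over walltime), aggregated from the instrumented job records. The field names below are hypothetical; in production such records would be queried from ElasticSearch rather than held in a list.

        # Aggregate CPU efficiency per workflow from job metric records.
        from collections import defaultdict

        jobs = [
            {"workflow": "simulation", "cpu_time": 3400.0, "walltime": 3600.0},
            {"workflow": "simulation", "cpu_time": 3000.0, "walltime": 3600.0},
            {"workflow": "reconstruction", "cpu_time": 1800.0, "walltime": 3600.0},
        ]

        totals = defaultdict(lambda: [0.0, 0.0])
        for job in jobs:
            totals[job["workflow"]][0] += job["cpu_time"]
            totals[job["workflow"]][1] += job["walltime"]

        for wf, (cpu, wall) in totals.items():
            print(f"{wf}: CPU efficiency {cpu / wall:.1%}")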

  7. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    International Nuclear Information System (INIS)

    Elmsheuser, Johannes; Legger, Federica; Llamas, Ramón Medrano; Sciabà, Andrea; García, Mario Úbeda; Ster, Daniel van der; Sciacca, Gianfranco

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments’ grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look to future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).
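
    The automatic site exclusion studied for ATLAS reduces, at its simplest, to a threshold on the recent test-job success rate. The sketch below is a hedged illustration of such a policy; the window size and threshold are invented for the example and are not HammerCloud's actual tuning.

        # Illustrative exclusion policy based on recent test-job results.
        def site_status(recent_results, exclude_below=0.7, min_jobs=10):
            """recent_results: list of booleans, True for a successful test job."""
            if len(recent_results) < min_jobs:
                return "unknown"  # too few test jobs to judge
            success_rate = sum(recent_results) / len(recent_results)
            return "excluded" if success_rate < exclude_below else "online"

        print(site_status([True] * 8 + [False] * 2))  # online (80% success)
        print(site_status([True] * 6 + [False] * 4))  # excluded (60% success)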

  8. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    Science.gov (United States)

    Elmsheuser, Johannes; Medrano Llamas, Ramón; Legger, Federica; Sciabà, Andrea; Sciacca, Gianfranco; Úbeda García, Mario; van der Ster, Daniel

    2012-12-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments’ grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look to future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).

  9. First experience and adaptation of existing tools to ATLAS distributed analysis

    International Nuclear Information System (INIS)

    De La Hoz, S.G.; Ruiz, L.M.; Liko, D.

    2008-01-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale in ATLAS. Up to 10000 jobs were processed on about 100 sites in one day. The experience obtained operating the system on several grid flavours was essential for performing user analysis using grid resources. First tests of the distributed analysis system were then performed. In the preparation phase, data were registered in the LHC file catalog (LFC) and replicated to external sites. For the main test, only a few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS computing management board decided to integrate the collaboration's distributed analysis efforts into a single project, GANGA. The goal is to test the reconstruction and analysis software in a large-scale data production using grid flavours on several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the grid; it provides job splitting and merging, and includes automated job monitoring and output retrieval. (orig.)
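
    The split/merge pattern that GANGA provides can be pictured with a toy example: a task over many input files is split into subjobs, and the partial outputs are merged at the end. This illustrates the concept only and does not use GANGA's actual API.

        # Toy split/run/merge workflow (names are illustrative).
        def split(input_files, files_per_subjob=2):
            return [input_files[i:i + files_per_subjob]
                    for i in range(0, len(input_files), files_per_subjob)]

        def run_subjob(files):
            return {"processed": len(files)}  # stand-in for the real analysis

        def merge(partial_results):
            return {"processed": sum(r["processed"] for r in partial_results)}

        subjobs = split([f"data_{i}.root" for i in range(5)])
        print(merge([run_subjob(files) for files in subjobs]))  # {'processed': 5}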

  10. Increasing Drought Sensitivity and Decline of Atlas Cedar (Cedrus atlantica) in the Moroccan Middle Atlas Forests

    Directory of Open Access Journals (Sweden)

    Jesús Julio Camarero

    2011-09-01

    Full Text Available An understanding of the interactive effects of climate change and forest structure on tree growth is needed for decision making in forest conservation and management. In this paper, we investigated the relative contribution of tree features and stand structure to Atlas cedar (Cedrus atlantica) radial growth in forests that have experienced heavy grazing and logging in the past. Dendrochronological methods were applied to quantify patterns in basal-area increment and drought sensitivity of Atlas cedar in the Middle Atlas, northern Morocco. We estimated the tree-to-tree competition intensity and quantified the structure in Atlas cedar stands with contrasting tree density, age, and decline symptoms. The relative contribution of tree age and size and stand structure to Atlas cedar growth decline was estimated by variance partitioning using partial-redundancy analyses. Recurrent drought events and temperature increases have been identified from local climate records since the 1970s. We detected consistent growth declines and increased drought sensitivity in Atlas cedar across all sites since the early 1980s. Specifically, we determined that previous growth rates and tree age were the strongest tree features, while Quercus rotundifolia basal area was the strongest stand structure measure, related to Atlas cedar decline. As a result, we suggest that Atlas cedar forests that have experienced severe drought in combination with grazing and logging may be in the process of shifting dominance toward more drought-tolerant species such as Q. rotundifolia.

  11. Silicon Strip Detectors for ATLAS at the HL-LHC Upgrade

    CERN Document Server

    Hara, K; The ATLAS collaboration

    2012-01-01

    The present ATLAS silicon strip (SCT) and transition radiation (TRT) trackers will be replaced with new silicon strip detectors, as part of the Inner Tracker System (ITK), for the Phase-2 upgrade of the Large Hadron Collider, HL-LHC. We have carried out intensive R&D programs to establish radiation-harder strip detectors, based on n+-on-p microstrip technology, that can survive radiation levels corresponding to up to 3000 fb-1 of integrated luminosity. We describe the main specifications for this year’s sensor fabrication, followed by a description of possible module integration schemes.

  12. ATLAS End-cap Part II

    CERN Multimedia

    2007-01-01

    The epic journey of the ATLAS magnets is drawing to an end. On Thursday 12 July, the second end-cap of the ATLAS toroid magnet was lowered into the cavern of the experiment with the same degree of precision as the first (see Bulletin No. 26/2007). This spectacular descent of the 240-tonne component is one of the last transport operations to be completed for ATLAS.

  13. ATLAS experiment : mapping the secrets of the universe

    CERN Multimedia

    ATLAS Outreach

    2010-01-01

    This 4 page color brochure describes ATLAS and the LHC, the ATLAS inner detector, calorimeters, muon spectrometer, magnet system, a short definition of the terms "particles," "dark matter," "mass," "antimatter." It also explains the ATLAS collaboration and provides the ATLAS website address with some images of the detector and the ATLAS collaboration at work.

  14. Mindboggle: Automated brain labeling with multiple atlases

    International Nuclear Information System (INIS)

    Klein, Arno; Mensh, Brett; Ghosh, Satrajit; Tourville, Jason; Hirsch, Joy

    2005-01-01

    To make inferences about brain structures or activity across multiple individuals, one first needs to determine the structural correspondences across their image data. We have recently developed Mindboggle as a fully automated, feature-matching approach to assign anatomical labels to cortical structures and activity in human brain MRI data. Label assignment is based on structural correspondences between labeled atlases and unlabeled image data, where an atlas consists of a set of labels manually assigned to a single brain image. In the present work, we study the influence of using variable numbers of individual atlases to nonlinearly label human brain image data. Each brain image voxel of each of 20 human subjects is assigned a label by each of the remaining 19 atlases using Mindboggle. The most common label is selected and is given a confidence rating based on the number of atlases that assigned that label. The automatically assigned labels for each subject brain are compared with the manual labels for that subject (its atlas). Unlike recent approaches that transform subject data to a labeled, probabilistic atlas space (constructed from a database of atlases), Mindboggle labels a subject by each atlas in a database independently. When Mindboggle labels a human subject's brain image with at least four atlases, the resulting label agreement with coregistered manual labels is significantly higher than when only a single atlas is used. Different numbers of atlases provide significantly higher label agreements for individual brain regions. Increasing the number of reference brains used to automatically label a human subject brain improves labeling accuracy with respect to manually assigned labels. Mindboggle software can provide confidence measures for labels based on probabilistic assignment of labels and could be applied to large databases of brain images
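
    The label-fusion scheme described here (each atlas votes per voxel, the most common label wins, and the fraction of agreeing atlases gives a confidence) can be sketched in a few lines of NumPy. The function and array names are invented for illustration and are not Mindboggle code.

        import numpy as np

        # Minimal majority-vote label fusion, assuming `labelings` has shape
        # (n_atlases, n_voxels) and holds the label each atlas assigned per voxel.
        def fuse_labels(labelings: np.ndarray):
            n_atlases, n_voxels = labelings.shape
            fused = np.empty(n_voxels, dtype=labelings.dtype)
            confidence = np.empty(n_voxels)
            for v in range(n_voxels):
                labels, counts = np.unique(labelings[:, v], return_counts=True)
                winner = counts.argmax()
                fused[v] = labels[winner]
                confidence[v] = counts[winner] / n_atlases  # fraction of atlases agreeing
            return fused, confidence

        labelings = np.array([[1, 2, 2], [1, 2, 3], [1, 3, 2], [2, 2, 2]])
        fused, conf = fuse_labels(labelings)  # fused -> [1, 2, 2], conf -> [0.75, 0.75, 0.75]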

  15. Modeling Radiation Damage to Pixel Sensors in the ATLAS Detector

    CERN Document Server

    Nachman, Benjamin Philip; The ATLAS collaboration

    2017-01-01

    Silicon Pixel detectors are at the core of the current and planned upgrade of the ATLAS detector. As the detector in closest proximity to the interaction point, these detectors will be subjected to a significant amount of radiation over their lifetime: prior to the HL-LHC, the innermost layers will receive a fluence in excess of $10^{15}$ 1 MeV $n_\\mathrm{eq}/\\mathrm{cm}^2$ and the HL-LHC detector upgrades must cope with an order of magnitude higher fluence integrated over their lifetimes. This talk presents a digitization model that includes radiation damage effects to the ATLAS Pixel sensors for the first time. After a thorough description of the setup, predictions for basic Pixel cluster properties are presented alongside first validation studies with Run 2 collision data.

  16. On the accuracy and reproducibility of a novel probabilistic atlas-based generation for calculation of head attenuation maps on integrated PET/MR scanners.

    Science.gov (United States)

    Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian

    2017-03-01

    To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
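
    Two of the quantities described reduce to simple voxel-wise arithmetic: the continuous μ-map is a posterior-probability-weighted sum of class attenuation coefficients, and the absolute relative change (RC) compares two reconstructed PET volumes. The sketch below uses approximate 511 keV linear attenuation coefficients and invented array names; it illustrates the scheme, not the authors' implementation.

        import numpy as np

        # Approximate linear attenuation coefficients at 511 keV, in cm^-1.
        MU = {"air": 0.0, "soft": 0.0975, "bone": 0.151}

        def continuous_mu_map(p_air, p_soft, p_bone):
            # Posterior probabilities act as per-voxel weights on the class LACs.
            return MU["air"] * p_air + MU["soft"] * p_soft + MU["bone"] * p_bone

        def relative_change(pet_mr, pet_ct, eps=1e-6):
            # Voxel-wise absolute relative change (%) w.r.t. the CT-based image.
            return 100.0 * np.abs(pet_mr - pet_ct) / (pet_ct + eps)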

  17. On the accuracy and reproducibility of a novel probabilistic atlas-based generation for calculation of head attenuation maps on integrated PET/MR scanners

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Kevin T. [Massachusetts General Hospital and Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Charlestown, MA (United States); Massachusetts Institute of Technology, Division of Health Sciences and Technology, Cambridge, MA (United States); Izquierdo-Garcia, David; Catana, Ciprian [Massachusetts General Hospital and Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Charlestown, MA (United States); Poynton, Clare B. [Massachusetts General Hospital and Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Charlestown, MA (United States); Massachusetts General Hospital, Department of Psychiatry, Boston, MA (United States); University of California, San Francisco, Department of Radiology and Biomedical Imaging, San Francisco, CA (United States); Chonde, Daniel B. [Massachusetts General Hospital and Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Charlestown, MA (United States); Harvard University, Program in Biophysics, Cambridge, MA (United States)

    2017-03-15

    To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach. (orig.)

  18. ATLAS Visitors Centre

    CERN Multimedia

    claudia Marcelloni

    2009-01-01

    ATLAS Visitors Centre has opened its shiny new doors to the public. Officially launched on Monday February 23rd, 2009, the permanent exhibition at Point 1 was conceived as a tour resource for ATLAS guides, and as a way to preserve the public’s opportunity to get a close-up look at the experiment in action when the cavern is sealed.

  19. ATLAS rewards industry

    CERN Document Server

    Maximilien Brice

    2006-01-01

    For contributing vital pieces to the ATLAS puzzle, three industries were recognized on Friday 5 May during a supplier awards ceremony. After a welcome and overview of the ATLAS experiment by spokesperson Peter Jenni, CERN Secretary-General Maximilian Metzger stressed the importance of industry to CERN's scientific goals. Picture 30: representatives of the three award-winning companies after the ceremony.

  20. The ATLAS Experiment Laboratory - Overview

    International Nuclear Information System (INIS)

    Malecki, P.

    1999-01-01

    Full text: The ATLAS Experiment Laboratory has been created by physicists and engineers preparing a research programme and detector for the LHC collider. This group is greatly supported by members of other Departments who also take part (often full time) in the ATLAS project. These are: J. Blocki, J. Godlewski, Z. Hajduk, P. Kapusta, B. Kisielewski, W. Ostrowicz, E. Richter-Was, and M. Turala. Our ATLAS Laboratory carries out its programme in very close collaboration with the Faculty of Physics and Nuclear Technology of the University of Mining and Metallurgy. The ATLAS (A Toroidal LHC ApparatuS) Collaboration comprises about 1700 experimentalists from about 150 research institutes. This apparatus, a huge system of technologically advanced detectors, is going to be ready by 2005. With the start of the 2 x 7 TeV LHC collider, ATLAS and CMS (the sister experiment at the LHC) will begin their fascinating research programme at beam energies and intensities that have never before been explored. (author)

  1. ATLAS Award for Difficult Task

    CERN Multimedia

    2004-01-01

    Two Russian companies were honoured with an ATLAS Award last week for the supply of the ATLAS Inner Detector barrel support structure elements. On 23 March the Russian company ORPE Technologiya and its subcontractor, RSP Khrunitchev, were jointly presented with an ATLAS Supplier Award. Since 1998, ORPE Technologiya has been actively involved in the development of the carbon-fibre reinforced plastic elements of the ATLAS Inner Detector barrel support structure. After three years of joint research and development, CERN and ORPE Technologiya launched the manufacturing contract. It had a tight delivery schedule and very demanding specifications in terms of mechanical tolerance and stability. The contract was successfully completed with the arrival of the last element of the structure at CERN on 8 January 2004. The delivery of this key component of the Inner Detector deserves an ATLAS Award given the difficulty of manufacturing the end-frames, which very few companies in the world would have been able to do at an ...

  2. ATLAS & Google - The Data Ocean Project

    CERN Document Server

    Lassnig, Mario; The ATLAS collaboration

    2018-01-01

    With the LHC High Luminosity upgrade the workload and data management systems are facing new major challenges. To address those challenges ATLAS and Google agreed to cooperate on a project to connect Google Cloud Storage and Compute Engine to the ATLAS computing environment. The idea is to allow ATLAS to explore the use of different computing models, to allow ATLAS user analysis to benefit from the Google infrastructure, and to give Google real science use cases to improve their cloud platform. Making the output of a distributed analysis from the grid quickly available to the analyst is a difficult problem. Redirecting the analysis output to Google Cloud Storage can provide an alternative, faster solution for the analyst. First, Google's Cloud Storage will be connected to the ATLAS Data Management System Rucio. The second part aims to let jobs run on Google Compute Engine, accessing data from either ATLAS storage or Google Cloud Storage. The third part involves Google implementing a global redirection between...
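
    The first part of the project, making analysis output land in Google Cloud Storage, can be pictured with the standard google-cloud-storage Python client. The bucket and object names below are invented for illustration, and the real integration described above goes through Rucio rather than direct uploads.

        from google.cloud import storage

        # Sketch of redirecting analysis output to Google Cloud Storage.
        client = storage.Client()                      # uses ambient GCP credentials
        bucket = client.bucket("atlas-analysis-output")
        blob = bucket.blob("user/jdoe/job-1234/histograms.root")
        blob.upload_from_filename("histograms.root")   # push job output to GCS
        print(blob.public_url)                         # where the analyst can fetch it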

  3. The ATLAS hadronic tau trigger

    International Nuclear Information System (INIS)

    Shamim, Mansoora

    2012-01-01

    The extensive tau physics programme of the ATLAS experiment relies heavily on the trigger to select hadronic decays of the tau lepton. Such a trigger is implemented in ATLAS to efficiently collect signal events while keeping the rate of multi-jet background within the allowed bandwidth. This contribution summarizes the performance of the ATLAS hadronic tau trigger system during the 2011 data-taking period and the improvements implemented for 2012 data collection.

  4. ATLAS OF EUROPEAN VALUES

    NARCIS (Netherlands)

    M Ed Uwe Krause

    2008-01-01

    Uwe Krause: Atlas of European Values. The Atlas of European Values is a collaborative project, with an accompanying website, of Tilburg University and the Fontys Lerarenopleiding in Tilburg, through which the scientific data of the European Values Study (EVS) are made accessible for use in education.

  5. ATLAS brochure (Italian version)

    CERN Multimedia

    Lefevre, C

    2010-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  6. ATLAS brochure (French version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  7. ATLAS brochure (German version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  8. ATLAS brochure (Danish version)

    CERN Multimedia

    Lefevre, C

    2010-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  9. Studying the Electroweak Sector with the ATLAS Detector

    CERN Document Server

    Spalla, Margherita; The ATLAS collaboration

    2018-01-01

    (As received from the Speaker Committee; the W mass was removed from the presentation later on, as discussed in a separate talk.) The large integrated luminosities available at the LHC make it possible to test the gauge structure of the electroweak sector of the Standard Model with the highest precision. In this talk, we review the latest results of the ATLAS collaboration involving di-boson and multi-boson final states, the electroweak production of vector bosons, as well as the constraints they place on effective-field-theory operators. Another approach to testing the consistency of the electroweak sector is via precision measurements. ATLAS has published a first high-precision measurement of the W boson mass, a first measurement of the tau polarization in Z events, as well as a three-dimensional cross-section measurement of the Drell-Yan process. The latter allows for the extraction of the forward-backward asymmetry, which can be interpreted as a measurement of the weak mixing angle. These results will be presented and discussed.
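
    For reference, the forward-backward asymmetry mentioned at the end is the standard ratio (a textbook definition, not spelled out in the abstract):

        $A_{FB} = \frac{\sigma_F - \sigma_B}{\sigma_F + \sigma_B}$

    where $\sigma_F$ ($\sigma_B$) is the cross section for events in which the negatively charged lepton is emitted forward (backward) with respect to a chosen reference axis, for example in the Collins-Soper frame; near the Z pole its value is sensitive to the weak mixing angle.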

  10. Physics prospects at the HL-LHC with ATLAS

    CERN Document Server

    Duncan, Anna Kathryn

    2017-01-01

    The High-Luminosity LHC aims to provide a total integrated luminosity of 3000 fb$^{-1}$ from proton-proton collisions at $\sqrt{s}$ = 14 TeV over the course of $\sim$ 10 years, reaching instantaneous luminosities of up to $L = 7.5 \times 10^{34}\,\mathrm{cm^{-2}\,s^{-1}}$, corresponding to an average of 200 inelastic p-p collisions per bunch crossing ($\mu = 200$). The upgraded ATLAS detector and trigger system must be able to cope well with increased occupancies and data rates. The performance of the upgrade has been estimated in full simulation studies, assuming expected HL-LHC conditions and a detector configuration intended to maximise physics performance and discovery potential at the HL-LHC, and is expected to be similar to current performance. Fast simulation studies have been carried out to evaluate the prospects of various benchmark physics analyses to be performed using the upgraded ATLAS detector with the full HL-LHC dataset.
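
    As a rough consistency check of the quoted pileup (the cross section and machine parameters here are standard approximate values, not taken from the abstract): with $\sigma_{\mathrm{inel}} \approx 80$ mb, $n_b = 2808$ colliding bunch pairs and a revolution frequency $f_{\mathrm{rev}} = 11.245$ kHz,

        $\mu \approx \frac{L\,\sigma_{\mathrm{inel}}}{n_b\, f_{\mathrm{rev}}} = \frac{(7.5 \times 10^{34}\,\mathrm{cm^{-2}\,s^{-1}})(8 \times 10^{-26}\,\mathrm{cm^{2}})}{2808 \times 11245\,\mathrm{s^{-1}}} \approx 190,$

    in good agreement with the quoted average of $\mu = 200$.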

  11. Calibration and monitoring of the ATLAS Tile calorimeter

    CERN Document Server

    Boumediene, Djamel Eddine; The ATLAS collaboration

    2017-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength-shifting fibres to photomultiplier tubes (PMTs). PMT signals are then digitized at 40 MHz and stored on the detector, and are only transferred off the detector once the first-level trigger acceptance has been confirmed. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain, a set of calibration systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser, charge injection elements and an integrator b...
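
    Schematically (a summary formulation of the calibration chain, not quoted from the abstract), each system constrains one stage, and the per-channel energy calibration is applied as a product of factors:

        $E\,[\mathrm{GeV}] = A\,[\mathrm{ADC}] \cdot C_{\mathrm{ADC \to pC}} \cdot C_{\mathrm{pC \to GeV}} \cdot C_{\mathrm{Cs}} \cdot C_{\mathrm{laser}}$

    where the charge injection system fixes the electronics gain $C_{\mathrm{ADC \to pC}}$, the Cesium scan equalizes the optical and PMT response across cells, and the laser tracks PMT gain drifts between Cesium scans.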

  12. Forward Detectors in ATLAS: ALFA, ZDC and LUCID

    CERN Document Server

    Fabbri, L; The ATLAS collaboration

    2009-01-01

    In order to determine the experimental cross sections for the observed physics processes, an estimation of the absolute luminosity is needed. In fact, a careful study of “well known” processes will be one of the first steps of the LHC experiments, as it can provide possible signatures of new physics in the form of deviations from the Standard Model (SM) predictions. The methodologies for luminosity monitoring and total cross-section estimation at the LHC will be reviewed in this talk, along with the dedicated detectors of the ATLAS experiment. ATLAS will make extensive use of detectors in the forward region, each with a different task: LUCID (LUminosity measurement using Cherenkov Integrating Detector) is a system of 40 (2 x 20) Cherenkov tubes surrounding the beam pipe at about 17 m from the interaction region. It will be able to monitor the collision-by-collision luminosity by detecting and counting the number of charged particles coming from the interaction point. ALFA (Absolute Luminosi...
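
    The counting measurements described feed the standard luminosity relation (a general formula, not given in the abstract): for $n_b$ colliding bunch pairs at revolution frequency $f_{\mathrm{rev}}$,

        $L = \frac{\mu_{\mathrm{vis}}\, n_b\, f_{\mathrm{rev}}}{\sigma_{\mathrm{vis}}},$

    where $\mu_{\mathrm{vis}}$ is the mean number of counted particles (or events) per bunch crossing and $\sigma_{\mathrm{vis}}$ is a visible cross section that must be calibrated, for example in dedicated beam-separation (van der Meer) scans or, via ALFA, from elastic scattering and the optical theorem.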

  13. The Hatfield SCT lunar atlas photographic atlas for Meade, Celestron, and other SCT telescopes

    CERN Document Server

    2014-01-01

    In a major publishing event for lunar observers, the justly famous Hatfield atlas is updated in even more usable form. This version of Hatfield’s classic atlas solves the problem of mirror images, making identification of left-right reversed imaged lunar features both quick and easy. SCT and Maksutov telescopes – which of course include the best-selling models from Meade and Celestron – reverse the visual image left to right. Thus it is extremely difficult to identify lunar features at the eyepiece of one of these instruments using a conventional Moon atlas, as the human brain does not cope well when trying to compare the real thing with a map that is a mirror image of it. Now this issue has at last been solved.   In this atlas the Moon’s surface is shown at various sun angles, and inset keys show the effects of optical librations. Smaller non-mirrored reference images are also included to make it simple to compare the mirrored SCT plates and maps with those that appear in other atlases. This edition s...

  14. Modeling Radiation Damage Effects in 3D Pixel Digitization for the ATLAS Detector

    CERN Document Server

    Giugliarelli, Gilberto; The ATLAS collaboration

    2017-01-01

    Silicon Pixel detectors are at the core of the current and planned upgrade of the ATLAS detector. As the detector in closest proximity to the interaction point, these detectors will be subjected to a significant amount of radiation over their lifetime: prior to the HL-LHC, the innermost layers will receive a fluence in excess of 10^15 neq/cm2 and the HL-LHC detector upgrades must cope with an order of magnitude higher fluence integrated over their lifetimes. This poster presents the details of a new digitization model that includes radiation damage effects to the 3D Pixel sensors for the ATLAS Detector.

  15. Modeling Radiation Damage Effects in 3D Pixel Digitization for the ATLAS Detector

    CERN Document Server

    Wallangen, Veronica; The ATLAS collaboration

    2017-01-01

    Silicon Pixel detectors are at the core of the current and planned upgrade of the ATLAS detector. As the detector in closest proximity to the interaction point, these detectors will be subjected to a significant amount of radiation over their lifetime: prior to the HL-LHC, the innermost layers will receive a fluence in excess of 10$^{15}$ n$_\\mathrm{eq}$/cm$^2$ and the HL-LHC detector upgrades must cope with an order of magnitude higher fluence integrated over their lifetimes. This work presents the details of a new digitization model that includes radiation damage effects to the 3D Pixel sensors for the ATLAS detector.

  16. Searches for electroweak SUSY with ATLAS at HL-LHC

    CERN Document Server

    Amoroso, Simone; The ATLAS collaboration

    2018-01-01

    The High Luminosity-Large Hadron Collider (HL-LHC) is expected to start in 2026 and to provide an integrated luminosity of 3000 fb$^{−1}$ in ten years, a factor of 10 more than what will be collected by 2023. These high statistics will allow ATLAS to improve searches for new physics at the TeV scale. In this talk, search prospects for the electroweak production of supersymmetric particles are presented.

  17. Last piece of the puzzle for ATLAS

    CERN Multimedia

    Clare Ryan

    At around 15.40 on Friday 29th February, the ATLAS collaboration cracked open the champagne as the second of the small wheels was lowered into the cavern. Each of ATLAS' small wheels is 9.3 metres in diameter and weighs 100 tonnes, including the massive shielding elements. They are the final parts of ATLAS' muon spectrometer. The first piece of ATLAS was installed in 2003, and since then many detector elements have journeyed down the 100-metre shaft into the ATLAS underground cavern. This last piece completes the gigantic puzzle.

  18. NATIONAL ATLAS OF THE ARCTIC

    Directory of Open Access Journals (Sweden)

    Nikolay S. Kasimov

    2018-01-01

    Full Text Available The National Atlas of the Arctic is a set of spatio-temporal information about the geographic, ecological, economic, historical-ethnographic, cultural, and social features of the Arctic, compiled as a cartographic model of the territory. The Atlas is intended for use in a wide range of scientific, management, economic, defense, educational, and public activities. The state policy of the Russian Federation in the Arctic for the period until 2020 and beyond states that the Arctic is of strategic importance for Russia in the 21st century. A detailed description of all sections of the Atlas is given. The Atlas can be used as an information-reference and educational resource or as a gift edition.

  19. Control and Data Acquisition System of the ATLAS Facility

    International Nuclear Information System (INIS)

    Choi, Ki-Yong; Kwon, Tae-Soon; Cho, Seok; Park, Hyun-Sik; Baek, Won-Pil; Kim, Jung-Taek

    2007-02-01

    This report describes the control and data acquisition system of an integral effect test facility, the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation) facility, which has recently been constructed at KAERI (Korea Atomic Energy Research Institute). The control and data acquisition system of the ATLAS is built on a hybrid distributed control system (DCS) from RTP Corp. The ARIDES system, provided by BNF Technology Inc. on a Linux platform, is used as the control software. The I/O signals consist of 1995 channels and are processed at 10 Hz. The Human-Machine Interface (HMI) consists of 43 processing windows, classified according to the fluid system. All control devices can be operated by manual, auto, sequence, group, and table control methods. The monitoring system can display real-time trends or historical data of the selected I/O signals on LCD monitors in graphical form. The data logging system can be started or stopped by the operator, and the logging frequency can be selected among 0.5, 1, 2, and 10 Hz. The fluid system of the ATLAS facility comprises several systems, from the primary system to auxiliary systems. Each fluid system has control similarity to the prototype plants, APR1400/OPR1000
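
    The operator-selectable logging frequency can be illustrated with a small polling loop. The channel names and the read_channel() stub below are invented for the sketch, and the real system writes to an archive rather than to standard output.

        import time

        # Illustrative polling logger with the 0.5/1/2/10 Hz choices described.
        ALLOWED_HZ = (0.5, 1, 2, 10)

        def read_channel(name: str) -> float:
            return 0.0  # placeholder for a real I/O read

        def log_channels(channels, hz, duration_s):
            assert hz in ALLOWED_HZ, "frequency must be one of the supported rates"
            period = 1.0 / hz
            t_end = time.time() + duration_s
            while time.time() < t_end:
                sample = {ch: read_channel(ch) for ch in channels}
                print(time.time(), sample)  # stand-in for writing to the archive
                time.sleep(period)

        log_channels(["PT-101", "TE-204"], hz=2, duration_s=1)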

  20. SEARCHES FOR SUPERSYMMETRY IN ATLAS

    CERN Document Server

    Xu, Da; The ATLAS collaboration

    2017-01-01

    A wide range of supersymmetric searches are presented. All searches are based on the proton-proton collision dataset collected by the ATLAS experiment during the 2015 and 2016 (before summer) runs at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 36.1 (36.7) fb-1. The searches are categorized into inclusive gluino and squark searches, third-generation searches, electroweak searches, prompt RPV searches and long-lived particle searches. No evidence of new physics is observed. The results are interpreted in various models and expressed in terms of limits on the masses of new particles.