WorldWideScience

Sample records for atlas conditions database

  1. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions Db data access is limited by the disk I/O throughput. An unacceptable side-effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions Db data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library first sends a pilot query to the database server.

  2. Advanced technologies for scalable ATLAS conditions database access on the grid

    International Nuclear Information System (INIS)

    Basset, R; Canali, L; Girone, M; Hawkings, R; Valassi, A; Viegas, F; Dimitrov, G; Nevski, P; Vaniachine, A; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions Db data access is limited by the disk I/O throughput. An unacceptable side-effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions Db data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library first sends a pilot query to the database server.
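
    The pilot-query idea in the last sentence is compact enough to sketch: time a trivial probe against the server first, and only issue the real conditions query when the probe comes back fast; otherwise back off and retry. This is a minimal illustration, not the actual ATLAS utility library; the latency threshold, retry policy, and the sqlite3 stand-in for the production Oracle server are all assumptions:

```python
import sqlite3
import time

def query_with_pilot(conn, sql, params=(),
                     pilot_sql="SELECT 1",   # cheap probe query
                     max_latency=0.05,       # assumed overload threshold (s)
                     max_retries=5):
    """Send a pilot query first; run the real query only when the server
    answers the pilot quickly, otherwise back off exponentially."""
    delay = 1.0
    for _ in range(max_retries):
        start = time.monotonic()
        conn.execute(pilot_sql).fetchall()           # the pilot query
        if time.monotonic() - start <= max_latency:  # server not overloaded
            return conn.execute(sql, params).fetchall()
        time.sleep(delay)                            # peak load: wait, retry
        delay *= 2
    raise RuntimeError("conditions server overloaded; giving up")

# Toy usage against an in-memory stand-in for the Oracle server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cond (iov INTEGER, payload TEXT)")
conn.execute("INSERT INTO cond VALUES (1, 'calib-v1')")
print(query_with_pilot(conn, "SELECT payload FROM cond WHERE iov = ?", (1,)))
```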

  3. Large scale access tests and online interfaces to ATLAS conditions databases

    International Nuclear Information System (INIS)

    Amorim, A; Lopes, L; Pereira, P; Simoes, J; Soloviev, I; Burckhart, D; Schmitt, J V D; Caprini, M; Kolos, S

    2008-01-01

    The access of the ATLAS Trigger and Data Acquisition (TDAQ) system to the ATLAS Conditions Databases sets strong reliability and performance requirements on the database storage and access infrastructures. Several applications were developed to support the integration of Conditions database access with the online services in TDAQ, including the interface to the Information Services (IS) and to the TDAQ Configuration Databases. The information storage requirements were the motivation for the ONline ASynchronous Interface to COOL (ONASIC) from the Information Service (IS) to the LCG/COOL databases. ONASIC avoids possible backpressure from the online database servers by managing a local cache. In parallel, OKS2COOL was developed to store Configuration Databases into an Offline Database with history records. The DBStressor application was developed to test and stress the access to the Conditions database using the LCG/COOL interface while operating in an integrated way as a TDAQ application. The performance scaling of simultaneous Conditions database read accesses was studied in the context of the ATLAS High Level Trigger large computing farms. A large set of tests was performed involving up to 1000 computing nodes that simultaneously accessed the LCG central database server infrastructure at CERN.
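
    ONASIC's key idea, buffering online writes through a local cache so producers never block on the database, can be sketched with a queue and a background writer thread. This is an illustrative reconstruction, not ONASIC code; the class, parameter, and batch-size names are invented:

```python
import queue
import threading
import time

class AsyncConditionsWriter:
    """Buffer condition updates locally so the online producer never
    blocks on the database server (no backpressure). Illustrative only."""

    def __init__(self, store_batch, batch_size=100):
        self._q = queue.Queue()
        self._store_batch = store_batch   # callable that writes to the DB
        self._batch_size = batch_size
        threading.Thread(target=self._drain, daemon=True).start()

    def publish(self, obj):
        self._q.put(obj)                  # returns immediately

    def _drain(self):
        while True:
            batch = [self._q.get()]       # block until something arrives
            while len(batch) < self._batch_size and not self._q.empty():
                batch.append(self._q.get_nowait())
            self._store_batch(batch)      # one bulk write per batch

# Toy usage: the "database write" is just a print.
writer = AsyncConditionsWriter(lambda b: print("stored", len(b), "objects"))
for i in range(5):
    writer.publish({"channel": i, "value": i * 0.1})
time.sleep(0.2)                           # let the background writer flush
```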

  4. The ATLAS conditions database architecture for the Muon spectrometer

    International Nuclear Information System (INIS)

    Verducci, Monica

    2010-01-01

    The Muon System, facing challenging requirements for conditions data storage, has adopted the conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flags storage needed for debugging of the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e. objects stored or referenced in COOL have an associated start and end time between which they are valid. The data is stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve objects associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with emphasis on the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.
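
    The interval-of-validity model is easy to picture in code: each folder holds payloads keyed by a [since, until) validity range, and a lookup for time t returns the payload whose interval contains t. Below is a self-contained sketch of the concept in plain Python, deliberately not the COOL API; folder paths and payload names are invented:

```python
import bisect

class Folder:
    """One COOL-style folder: payloads valid over [since, until)."""
    def __init__(self):
        self._since = []   # sorted interval start times
        self._rows = []    # parallel list of (since, until, payload)

    def store(self, since, until, payload):
        i = bisect.bisect(self._since, since)
        self._since.insert(i, since)
        self._rows.insert(i, (since, until, payload))

    def retrieve(self, t):
        i = bisect.bisect(self._since, t) - 1
        if i >= 0:
            since, until, payload = self._rows[i]
            if since <= t < until:
                return payload
        raise KeyError(f"no object valid at time {t}")

# Folders are arranged in a hierarchy of folder sets; a dict of paths
# is enough to convey the idea.
db = {"/MUON/Align/Barrel": Folder()}
db["/MUON/Align/Barrel"].store(0, 100, "alignment-v1")
db["/MUON/Align/Barrel"].store(100, 200, "alignment-v2")
print(db["/MUON/Align/Barrel"].retrieve(150))   # -> alignment-v2
```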

  5. The ATLAS conditions database architecture for the Muon spectrometer

    Science.gov (United States)

    Verducci, Monica; ATLAS Muon Collaboration

    2010-04-01

    The Muon System, facing challenging requirements for conditions data storage, has adopted the conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flags storage needed for debugging of the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e. objects stored or referenced in COOL have an associated start and end time between which they are valid. The data is stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve objects associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with emphasis on the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.

  6. The ATLAS conditions database architecture for the Muon spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Verducci, Monica, E-mail: monica.verducci@cern.c [University of Wuerzburg Am Hubland, 97074, Wuerzburg (Germany)

    2010-04-01

    The Muon System, facing challenging requirements for conditions data storage, has adopted the conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flags storage needed for debugging of the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e. objects stored or referenced in COOL have an associated start and end time between which they are valid. The data is stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve objects associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with emphasis on the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.

  7. Utility of collecting metadata to manage a large scale conditions database in ATLAS

    International Nuclear Information System (INIS)

    Gallas, E J; Albrand, S; Borodin, M; Formica, A

    2014-01-01

    The ATLAS Conditions Database, based on the LCG Conditions Database infrastructure, contains a wide variety of information needed in online data taking and offline analysis. The total volume of ATLAS conditions data is in the multi-Terabyte range. Internally, the active data is divided into 65 separate schemas (each with hundreds of underlying tables) according to overall data taking type, detector subsystem, and whether the data is used offline or strictly online. While each schema has a common infrastructure, each schema's data is entirely independent of other schemas, except at the highest level, where sets of conditions from each subsystem are tagged globally for ATLAS event data reconstruction and reprocessing. The partitioned nature of the conditions infrastructure works well for most purposes, but metadata about each schema is problematic to collect in global tools from such a system because it is only accessible via LCG tools schema by schema. This makes it difficult to get an overview of all schemas, collect interesting and useful descriptive and structural metadata for the overall system, and connect it with other ATLAS systems. This type of global information is needed for time-critical data preparation tasks for data processing and has become more critical as the system has grown in size and diversity. Therefore, a new system has been developed to collect metadata for the management of the ATLAS Conditions Database. The structure and implementation of this metadata repository will be described. In addition, we will report its usage since its inception during LHC Run 1, how it has been exploited in the process of conditions data evolution during LS1 (the current LHC long shutdown) in preparation for Run 2, and long-term plans to incorporate more of its information into future ATLAS Conditions Database tools and the overall ATLAS information infrastructure.
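
    The schema-by-schema limitation can be made concrete: collecting even simple structural metadata means visiting all 65 schemas one at a time and aggregating the results into a single repository. A hedged sketch using the standard Oracle data-dictionary view ALL_TABLES through a generic DB-API cursor; the connection parameters and schema names in the commented usage are placeholders, not the real ATLAS accounts:

```python
def harvest_schema_metadata(cursor, schemas):
    """Collect per-schema structural metadata, one schema at a time,
    into a single overview mapping {schema: table_count}."""
    overview = {}
    for owner in schemas:
        # Standard Oracle data-dictionary query; each schema must be
        # visited individually, which is what makes a global view hard.
        cursor.execute(
            "SELECT COUNT(*) FROM all_tables WHERE owner = :owner",
            {"owner": owner})
        overview[owner] = cursor.fetchone()[0]
    return overview

# Hypothetical usage; credentials, DSN and schema names are placeholders.
# import cx_Oracle
# conn = cx_Oracle.connect("reader", "secret", "conditions-db-dsn")
# print(harvest_schema_metadata(conn.cursor(),
#                               ["SCHEMA_ONLINE_MUON", "SCHEMA_OFFLINE_MUON"]))
```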

  8. Relational databases for conditions data and event selection in ATLAS

    International Nuclear Information System (INIS)

    Viegas, F; Hawkings, R; Dimitrov, G

    2008-01-01

    The ATLAS experiment at the LHC will make extensive use of relational databases in both online and offline contexts, running to O(TBytes) per year. Two of the most challenging applications in terms of data volume and access patterns are conditions data, making use of the LCG conditions database, COOL, and the TAG database, which stores summary event quantities allowing a rapid selection of interesting events. Both of these databases are being replicated to regional computing centres using Oracle Streams technology, in collaboration with the LCG 3D project. Database optimisation, performance tests and first user experience with these applications will be described, together with plans for first LHC data-taking and future prospects.

  9. Relational databases for conditions data and event selection in ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Viegas, F; Hawkings, R; Dimitrov, G [CERN, CH-1211 Geneve 23 (Switzerland)

    2008-07-15

    The ATLAS experiment at the LHC will make extensive use of relational databases in both online and offline contexts, running to O(TBytes) per year. Two of the most challenging applications in terms of data volume and access patterns are conditions data, making use of the LCG conditions database, COOL, and the TAG database, which stores summary event quantities allowing a rapid selection of interesting events. Both of these databases are being replicated to regional computing centres using Oracle Streams technology, in collaboration with the LCG 3D project. Database optimisation, performance tests and first user experience with these applications will be described, together with plans for first LHC data-taking and future prospects.

  10. Utility of collecting metadata to manage a large scale conditions database in ATLAS

    CERN Document Server

    Gallas, EJ; The ATLAS collaboration; Borodin, M; Formica, A

    2014-01-01

    The ATLAS Conditions Database, based on the LCG Conditions Database infrastructure, contains a wide variety of information needed in online data taking and offline analysis. The total volume of ATLAS conditions data is in the multi-Terabyte range. Internally, the active data is divided into 65 separate schemas (each with hundreds of underlying tables) according to overall data taking type, detector subsystem, and whether the data is used offline or strictly online. While each schema has a common infrastructure, each schema's data is entirely independent of other schemas, except at the highest level, where sets of conditions from each subsystem are tagged globally for ATLAS event data reconstruction and reprocessing. The partitioned nature of the conditions infrastructure works well for most purposes, but metadata about each schema is problematic to collect in global tools from such a system because it is only accessible via LCG tools schema by schema. This makes it difficult to get an overview of all schemas, collect interesting and useful descriptive and structural metadata for the overall system, and connect it with other ATLAS systems. This type of global information is needed for time-critical data preparation tasks for data processing and has become more critical as the system has grown in size and diversity. Therefore, a new system has been developed to collect metadata for the management of the ATLAS Conditions Database. The structure and implementation of this metadata repository will be described. In addition, we will report its usage since its inception during LHC Run 1, how it has been exploited in the process of conditions data evolution during LS1 (the current LHC long shutdown) in preparation for Run 2, and long-term plans to incorporate more of its information into future ATLAS Conditions Database tools and the overall ATLAS information infrastructure.

  11. Implementing a modular framework in a conditions database explorer for ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Simoes, J; Amorim, A; Batista, J; Lopes, L; Neves, R; Pereira, P [SIM and FCUL, University of Lisbon, Campo Grande, P-1749-016 Lisbon (Portugal); Kolos, S [University of California, Irvine, California 92697-4575 (United States); Soloviev, I [Petersburg Nuclear Physics Institute, Gatchina, St-Petersburg RU-188350 (Russian Federation)], E-mail: jalmeida@mail.cern.ch, E-mail: Antonio.Amorim@sim.fc.ul.pt

    2008-07-15

    The ATLAS conditions databases will be used to manage information of quite diverse nature and levels of complexity. The usage of a relational database manager like Oracle, together with the object managers POOL and OKS developed in-house, poses special difficulties in browsing the available data while understanding its structure in a general way. This is particularly relevant for the database browser projects, where it is difficult to link with the class-defining libraries generated by general frameworks such as Athena. A modular approach to tackle these problems is presented here. The database infrastructure is under development using the LCG COOL infrastructure, and provides a powerful information-sharing gateway across many different systems. The nature of the stored information ranges from temporal series of simple values up to very complex objects describing the configuration of systems like ATLAS' TDAQ infrastructure, including associations to large objects managed outside of the database infrastructure. An important example of this architecture is the Online Objects Extended Database BrowsEr (NODE), which is designed to access and display all data available in the ATLAS Monitoring Data Archive (MDA), including histograms and data tables. To deal with the special nature of the monitoring objects, a plugin from the MDA framework to the Time managed science Instrument Databases (TIDB2) is used. The database browser is extended, in particular, to include operations on histograms such as display, overlap and comparison, as well as commenting and local storage.
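
    The modular approach amounts to a plugin registry: the browser core knows only an abstract handler interface, and each data flavour registers a renderer for its own payload type, so the core never needs to link against the class-defining libraries. An illustrative sketch with all names invented:

```python
from typing import Callable, Dict

HANDLERS: Dict[str, Callable[[object], str]] = {}

def register(kind: str):
    """Decorator: plug a renderer for one stored-object kind into the core."""
    def wrap(fn):
        HANDLERS[kind] = fn
        return fn
    return wrap

@register("histogram")
def render_histogram(obj):
    return f"<histogram bins={len(obj)}>"

@register("timeseries")
def render_timeseries(obj):
    return f"<timeseries points={len(obj)}>"

def browse(kind, obj):
    # The core needs no knowledge of the libraries behind each payload type.
    handler = HANDLERS.get(kind, lambda o: f"<raw {o!r}>")
    return handler(obj)

print(browse("histogram", [1, 4, 9]))      # -> <histogram bins=3>
print(browse("config", {"mode": "run"}))   # falls back to the raw view
```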

  12. Development, deployment and operations of ATLAS databases

    International Nuclear Information System (INIS)

    Vaniachine, A. V.; von der Schmitt, J. G.

    2008-01-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on the ATLAS multi-grid infrastructure. We describe the development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and the ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require a robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.

  13. First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081940; The ATLAS collaboration; Barberis, Dario; Gallas, Elizabeth; Rybkin, Grigori; Rinaldi, Lorenzo; Aperio Bella, Ludovica; Buttinger, William

    2017-01-01

    Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by the Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has thus far, for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. For this reason, along with the fact that ADFs are effectively read by the software as binary objects, this class of data appears ideal for testing the proposed Run 3 conditions data infrastructure now in development. This paper describes this implementation as well as the lessons learned in exploring and refining the new infrastructure, with the potential for deployment during Run 2.

  14. First Use of LHC Run 3 Conditions Database Infrastructure for Auxiliary Data Files in ATLAS

    CERN Document Server

    Aperio Bella, Ludovica; The ATLAS collaboration

    2016-01-01

    Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by the Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has thus far, for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. This, along with the fact that ADF data is effectively read by the software as binary objects, makes this class of data ideal for testing the proposed Run 3 conditions data infrastructure now in development. This paper will describe this implementation as well as the lessons learned in exploring and refining the new infrastructure, with the potential for deployment during Run 2.

  15. ATLAS database application enhancements using Oracle 11g

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at the LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post data-taking analysis, file management over the grid, job submission and management, and conditions data replication to remote sites. The Oracle Relational Database Management System (RDBMS) has been addressing the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the 260 hosted database schemas (in the most common case each schema is related to a dedicated client application with its own requirements). At the beginning of 2012 all ATLAS databases at CERN were upgraded to the newest Oracle version at the time: Oracle 11g Release 2. Oracle 11g comes with several key improvements compared to previous database engine versions. In this work we present our evaluation of the most relevant new features of Oracle 11g of interest for ATLAS applications and use cases. Notably, we report on the performance and scalability enhancements obtained in production since the Oracle 11g deployment during Q1 2012, and we outline plans for future work in this area.

  16. ATLAS database application enhancements using Oracle 11g

    International Nuclear Information System (INIS)

    Dimitrov, G; Canali, L; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at the LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post data-taking analysis, file management over the grid, job submission and management, and conditions data replication to remote sites. The Oracle Relational Database Management System (RDBMS) has been addressing the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the 260 hosted database schemas (in the most common case each schema is related to a dedicated client application with its own requirements). At the beginning of 2012 all ATLAS databases at CERN were upgraded to the newest Oracle version at the time: Oracle 11g Release 2. Oracle 11g comes with several key improvements compared to previous database engine versions. In this work we present our evaluation of the most relevant new features of Oracle 11g of interest for ATLAS applications and use cases. Notably, we report on the performance and scalability enhancements obtained in production since the Oracle 11g deployment during Q1 2012, and we outline plans for future work in this area.

  17. Scaling up ATLAS Database Release Technology for the LHC Long Run

    International Nuclear Information System (INIS)

    Borodin, M; Nevski, P; Vaniachine, A

    2011-01-01

    To overcome scalability limitations in database access on the Grid, ATLAS introduced the Database Release technology, which replicates databases in files. For years, Database Release technology has assured scalable database access for Monte Carlo production on the Grid. Since the previous CHEP, Database Release technology was used successfully in ATLAS data reprocessing on the Grid. A frozen Conditions DB snapshot guarantees reproducibility and transactional consistency, isolating Grid data processing tasks from continuous conditions updates at the 'live' Oracle server. Database Release technology fully satisfies the requirements of ATLAS data reprocessing and Monte Carlo production. We parallelized the Database Release build workflow to avoid a linear dependency of the build time on the length of the LHC data-taking period. In recent data reprocessing campaigns the build time was reduced by an order of magnitude thanks to the proven master-worker architecture used in Google MapReduce. We describe further Database Release optimizations scaling up the technology for the LHC long run.
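
    The master-worker parallelization maps naturally onto a process pool: the master splits the data-taking period into run ranges, and workers build each range's snapshot independently, so the wall-clock build time no longer grows linearly with the period length. A toy sketch under that assumption; the extraction step and run numbers are placeholders:

```python
from multiprocessing import Pool

def extract_snapshot(run_range):
    """Worker: build the conditions snapshot for one run range
    (placeholder for the real per-range extraction job)."""
    first, last = run_range
    return f"snapshot[{first}-{last}]"

def build_release(first_run, last_run, chunk=1000, workers=8):
    """Master: split the data-taking period and farm ranges out to workers."""
    ranges = [(r, min(r + chunk - 1, last_run))
              for r in range(first_run, last_run + 1, chunk)]
    with Pool(workers) as pool:
        return pool.map(extract_snapshot, ranges)

if __name__ == "__main__":
    # One data-taking period becomes independent, parallel build tasks.
    print(build_release(200000, 205999))
```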

  18. Multiple brain atlas database and atlas-based neuroimaging system.

    Science.gov (United States)

    Nowinski, W L; Fang, A; Nguyen, B T; Raphel, J K; Jagannathan, L; Raghavan, R; Bryan, R N; Miller, G A

    1997-01-01

    For the purpose of developing multiple, complementary, fully labeled electronic brain atlases and an atlas-based neuroimaging system for analysis, quantification, and real-time manipulation of cerebral structures in two and three dimensions, we have digitized, enhanced, segmented, and labeled the following print brain atlases: Co-Planar Stereotaxic Atlas of the Human Brain by Talairach and Tournoux, Atlas for Stereotaxy of the Human Brain by Schaltenbrand and Wahren, Referentially Oriented Cerebral MRI Anatomy by Talairach and Tournoux, and Atlas of the Cerebral Sulci by Ono, Kubik, and Abernathey. Three-dimensional extensions of these atlases have been developed as well. All two- and three-dimensional atlases are mutually preregistered and may be interactively registered with an actual patient's data. An atlas-based neuroimaging system has been developed that provides support for reformatting, registration, visualization, navigation, image processing, and quantification of clinical data. The anatomical index contains about 1,000 structures and over 400 sulcal patterns. Several new applications of the brain atlas database also have been developed, supported by various technologies such as virtual reality, the Internet, and electronic publishing. Fusion of information from multiple atlases assists the user in comprehensively understanding brain structures and identifying and quantifying anatomical regions in clinical data. The multiple brain atlas database and atlas-based neuroimaging system have substantial potential impact in stereotactic neurosurgery and radiotherapy by assisting in visualization and real-time manipulation in three dimensions of anatomical structures, in quantitative neuroradiology by allowing interactive analysis of clinical data, in three-dimensional neuroeducation, and in brain function studies.

  19. National Transportation Atlas Databases : 2012.

    Science.gov (United States)

    2012-01-01

    The National Transportation Atlas Databases 2012 (NTAD2012) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  20. National Transportation Atlas Databases : 2011.

    Science.gov (United States)

    2011-01-01

    The National Transportation Atlas Databases 2011 (NTAD2011) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  21. National Transportation Atlas Databases : 2009.

    Science.gov (United States)

    2009-01-01

    The National Transportation Atlas Databases 2009 (NTAD2009) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  22. National Transportation Atlas Databases : 2010.

    Science.gov (United States)

    2010-01-01

    The National Transportation Atlas Databases 2010 (NTAD2010) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...

  23. A tool for conditions tag management in ATLAS

    International Nuclear Information System (INIS)

    Sharmazanashvili, A; Batiashvili, G; Gvaberidze, G; Shekriladze, L; Formica, A

    2014-01-01

    ATLAS Conditions data include about 2 TB in a relational database and 400 GB of files referenced from the database. Conditions data is entered and retrieved using COOL, the API for accessing data in the LCG Conditions Database infrastructure. It is managed using an ATLAS-customized Python-based tool set. Conditions data are required for every reconstruction and simulation job, so access to them is crucial for all aspects of ATLAS data taking and analysis, as well as for preceding tasks that derive optimal corrections for reconstruction. Optimized sets of conditions for processing are accomplished using strict version control on those conditions: a process which assigns COOL Tags to sets of conditions, and then unifies those conditions over data-taking intervals into a COOL Global Tag. This Global Tag identifies the set of conditions used to process data so that the underlying conditions can be uniquely identified with 100% reproducibility should the processing be executed again. Understanding shifts in the underlying conditions from one tag to another and ensuring interval completeness for all detectors for a set of runs to be processed is a complex task, requiring tools beyond the above-mentioned Python utilities. Therefore, a JavaScript/PHP-based utility called the Conditions Tag Browser (CTB) has been developed. The CTB gives detector and conditions experts the possibility to navigate through the different databases and COOL folders; explore the content of given tags and the differences between them, as well as their extent in time; and visualize the content of channels associated with leaf tags. This report describes the structure and the PHP/JavaScript function classes of the CTB.
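
    The tag hierarchy the CTB navigates can be summarized in a few lines of code: a global tag is essentially a mapping from COOL folders to the leaf tags to be used in each, and comparing two global tags means diffing those mappings. An illustrative sketch; the tag and folder names only imitate the ATLAS naming style and are invented:

```python
# A global tag maps each COOL folder to the leaf tag used for processing.
GLOBAL_TAGS = {
    "GLOBAL-TAG-01": {"/MUON/Align": "MuonAlign-v3",
                      "/LAR/Calib":  "LArCalib-v7"},
    "GLOBAL-TAG-02": {"/MUON/Align": "MuonAlign-v4",
                      "/LAR/Calib":  "LArCalib-v7"},
}

def diff_global_tags(a, b):
    """Report folders whose leaf tag differs between two global tags."""
    ta, tb = GLOBAL_TAGS[a], GLOBAL_TAGS[b]
    return {f: (ta.get(f), tb.get(f))
            for f in sorted(set(ta) | set(tb))
            if ta.get(f) != tb.get(f)}

print(diff_global_tags("GLOBAL-TAG-01", "GLOBAL-TAG-02"))
# -> {'/MUON/Align': ('MuonAlign-v3', 'MuonAlign-v4')}
```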

  24. Update History of This Database - TP Atlas | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Update history of the TP Atlas database: 2013/12/16, the email address in the contact information is corrected; 2013/11/19, the TP Atlas English archive site is opened; 2008/4/1, TP Atlas ( http://www.tanpaku.org/tpatlas/ ) is opened.

  25. Update History of This Database - AT Atlas | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Update history of the AT Atlas database: 2013/12/16, the email address in the contact information is corrected; AT Atlas ( http://www.tanpaku.org/atatlas/ ) is opened.

  26. A RESTful Web service interface to the ATLAS COOL database

    International Nuclear Information System (INIS)

    Roe, S A

    2010-01-01

    The COOL database in ATLAS is primarily used for storing detector conditions data, but also status flags, which are uploaded summaries of information indicating the detector reliability during a run. This paper introduces the use of CherryPy, a Python application server which acts as an intermediate layer between a web interface and the database, providing a simple means of storing to and retrieving from the COOL database which has found use in many web applications. The software layer is designed to be RESTful, implementing the common CRUD (Create, Read, Update, Delete) database methods by interpreting the HTTP method (POST, GET, PUT, DELETE) on the server along with a URL identifying the database resource to be operated on. The format of the data (text, XML, etc.) is also determined by the HTTP protocol. The details of this layer are described, along with a popular application demonstrating its use: the ATLAS run list web page.
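
    The CRUD-over-HTTP mapping described above is the standard CherryPy MethodDispatcher pattern: a resource class exposes one method per HTTP verb and the dispatcher routes by method name. A minimal sketch in that spirit, with an in-memory dictionary standing in for the COOL backend; it is not the actual ATLAS service:

```python
import cherrypy

class CoolResource:
    """RESTful CRUD on a toy store; a dict stands in for the COOL backend."""
    exposed = True
    store = {}

    def GET(self, key=None):               # Read
        return str(self.store if key is None else self.store.get(key))

    def POST(self, key, value):            # Create
        self.store[key] = value
        return "created"

    def PUT(self, key, value):             # Update
        self.store[key] = value
        return "updated"

    def DELETE(self, key):                 # Delete
        self.store.pop(key, None)
        return "deleted"

if __name__ == "__main__":
    conf = {"/": {"request.dispatch": cherrypy.dispatch.MethodDispatcher()}}
    cherrypy.quickstart(CoolResource(), "/cool", conf)
    # e.g.  curl -X POST 'http://localhost:8080/cool?key=run&value=1234'
```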

  27. Primary Numbers Database for ATLAS Detector Description Parameters

    CERN Document Server

    Vaniachine, A; Malon, D; Nevski, P; Wenaus, T

    2003-01-01

    We present the design and the status of the database for detector description parameters in ATLAS experiment. The ATLAS Primary Numbers are the parameters defining the detector geometry and digitization in simulations, as well as certain reconstruction parameters. Since the detailed ATLAS detector description needs more than 10,000 such parameters, a preferred solution is to have a single verified source for all these data. The database stores the data dictionary for each parameter collection object, providing schema evolution support for object-based retrieval of parameters. The same Primary Numbers are served to many different clients accessing the database: the ATLAS software framework Athena, the Geant3 heritage framework Atlsim, the Geant4 developers framework FADS/Goofy, the generator of XML output for detector description, and several end-user clients for interactive data navigation, including web-based browsers and ROOT. The choice of the MySQL database product for the implementation provides addition...

  28. Main - AT Atlas | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Data file: at_atlas_en.zip ( ftp://ftp.biosciencedbc.jp/archive/at_atlas/LATE... ).

  29. National Transportation Atlas Databases : 2013.

    Science.gov (United States)

    2013-01-01

    The National Transportation Atlas Databases 2013 (NTAD2013) is a set of nationwide geographic datasets of transportation facilities, transportation networks, associated infrastructure, and other political and administrative entities. These datasets i...

  30. National Transportation Atlas Databases : 2015.

    Science.gov (United States)

    2015-01-01

    The National Transportation Atlas Databases 2015 (NTAD2015) is a set of nationwide geographic datasets of transportation facilities, transportation networks, associated infrastructure, and other political and administrative entities. These datasets i...

  31. National Transportation Atlas Databases : 2014.

    Science.gov (United States)

    2014-01-01

    The National Transportation Atlas Databases 2014 (NTAD2014) is a set of nationwide geographic datasets of transportation facilities, transportation networks, associated infrastructure, and other political and administrative entities. These datasets i...

  32. Protein - AT Atlas | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Data file: at_atlas_protein.zip ( ftp://ftp.biosciencedbc.jp/archive/at_atlas/LATEST/at_atla... ).

  33. Analysis list - ChIP-Atlas | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Data file: chip_atlas_analysis_list.zip ( ftp://ftp.biosciencedbc.jp/archive/chip-atlas/LATEST/chip_atlas_analysis_list.zip ), file size: 44.8 KB.

  34. Conditions and configuration metadata for the ATLAS experiment

    International Nuclear Information System (INIS)

    Gallas, E J; Pachal, K E; Tseng, J C L; Albrand, S; Fulachier, J; Lambert, F; Zhang, Q

    2012-01-01

    In the ATLAS experiment, a system called COMA (Conditions/Configuration Metadata for ATLAS) has been developed to make globally important run-level metadata more readily accessible. It is based on a relational database storing directly extracted, refined, reduced, and derived information from system-specific database sources as well as information from non-database sources. This information facilitates a variety of unique dynamic interfaces and provides information to enhance the functionality of other systems. This presentation will give an overview of the components of the COMA system, enumerate its diverse data sources, and give examples of some of the interfaces it facilitates. We list important principles behind COMA schema and interface design, and how features of these principles create coherence and eliminate redundancy among the components of the overall system. In addition, we elucidate how interface logging data has been used to refine COMA content and improve the value and performance of end-user reports and browsers.

  35. Conditions and configuration metadata for the ATLAS experiment

    CERN Document Server

    Gallas, E J; Albrand, S; Fulachier, J; Lambert, F; Pachal, K E; Tseng, J C L; Zhang, Q

    2012-01-01

    In the ATLAS experiment, a system called COMA (Conditions/Configuration Metadata for ATLAS) has been developed to make globally important run-level metadata more readily accessible. It is based on a relational database storing directly extracted, refined, reduced, and derived information from system-specific database sources as well as information from non-database sources. This information facilitates a variety of unique dynamic interfaces and provides information to enhance the functionality of other systems. This presentation will give an overview of the components of the COMA system, enumerate its diverse data sources, and give examples of some of the interfaces it facilitates. We list important principles behind COMA schema and interface design, and how features of these principles create coherence and eliminate redundancy among the components of the overall system. In addition, we elucidate how interface logging data has been used to refine COMA content and improve the value and performance of end-user reports and browsers.

  36. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...
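
    As one concrete flavour of the key-value approach mentioned above, a schema-less HBase row can be written and read through the happybase Thrift client. A hedged sketch: the host, table, and column names are invented, and happybase is only one possible client, not necessarily the one used in the ATLAS evaluation:

```python
import happybase  # Thrift-based HBase client

# Connection parameters are placeholders for illustration.
connection = happybase.Connection("hbase-frontend.example.org")
table = connection.table("ddm_traces")

# Schema-less write: any columns may be attached to a row key.
table.put(b"dataset.12345", {
    b"meta:owner": b"atlas-prod",
    b"meta:size_bytes": b"73014444032",
})

# Point read by row key; returns whatever columns the row happens to have.
print(table.row(b"dataset.12345"))
```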

  37. Protein - TP Atlas | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Data file: tp_atlas_protein.zip ( ftp://ftp.biosciencedbc.jp/archive/tp_atlas/LATEST/... ).

  38. PREIMS - AT Atlas | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Data from the Targeted Proteins Research Program (TPRP). Data file: at_atlas_preims.zip ( ftp://ftp.biosciencedbc.jp/archiv... ).

  39. Main - TP Atlas | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Data file: tp_atlas_en.zip ( ftp://ftp.biosciencedbc.jp/archive/tp_atlas/LATEST/... ).

  40. Windows on the brain: the emerging role of atlases and databases in neuroscience

    Science.gov (United States)

    Van Essen, David C.; VanEssen, D. C. (Principal Investigator)

    2002-01-01

    Brain atlases and associated databases have great potential as gateways for navigating, accessing, and visualizing a wide range of neuroscientific data. Recent progress towards realizing this potential includes the establishment of probabilistic atlases, surface-based atlases and associated databases, combined with improvements in visualization capabilities and internet access.

  41. Experience with ATLAS MySQL PanDA database service

    International Nuclear Information System (INIS)

    Smirnov, Y; Wlodek, T; Hover, J; Smith, J; Wenaus, T; Yu, D; De, K; Ozturk, N

    2010-01-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.

  42. Experience with ATLAS MySQL PanDA database service

    Energy Technology Data Exchange (ETDEWEB)

    Smirnov, Y; Wlodek, T; Hover, J; Smith, J; Wenaus, T; Yu, D [Physics Department, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States); De, K; Ozturk, N [Department of Physics, University of Texas at Arlington, Arlington, TX, 76019 (United States)

    2010-04-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.

  43. RiceAtlas, a spatial database of global rice calendars and production.

    Science.gov (United States)

    Laborte, Alice G; Gutierrez, Mary Anne; Balanza, Jane Girly; Saito, Kazuki; Zwart, Sander J; Boschetti, Mirco; Murty, M V R; Villano, Lorena; Aunario, Jorrel Khalil; Reinke, Russell; Koo, Jawoo; Hijmans, Robert J; Nelson, Andrew

    2017-05-30

    Knowing where, when, and how much rice is planted and harvested is crucial information for understanding the effects of policy, trade, and global and technological change on food security. We developed RiceAtlas, a spatial database on the seasonal distribution of the world's rice production. It consists of data on rice planting and harvesting dates by growing season and estimates of monthly production for all rice-producing countries. Sources used for planting and harvesting dates include global and regional databases, national publications, online reports, and expert knowledge. Monthly production data were estimated based on annual or seasonal production statistics, and planting and harvesting dates. RiceAtlas has 2,725 spatial units. Compared with available global crop calendars, RiceAtlas is nearly ten times more spatially detailed and has nearly seven times more spatial units, with at least two seasons of calendar data, making RiceAtlas the most comprehensive and detailed spatial database on rice calendar and production.

  44. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection

    Energy Technology Data Exchange (ETDEWEB)

    Zhuang, Xiahai, E-mail: zhuangxiahai@sjtu.edu.cn; Qian, Xiaohua [SJTU-CU International Cooperative Research Center, Department of Engineering Mechanics, School of Naval Architecture Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Bai, Wenjia; Shi, Wenzhe; Rueckert, Daniel [Biomedical Image Analysis Group, Department of Computing, Imperial College London, 180 Queens Gate, London SW7 2AZ (United Kingdom); Song, Jingjing; Zhan, Songhua [Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai 201203 (China); Lian, Yanyun [Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210 (China)

    2015-07-15

    Purpose: Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance is limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases was selected for label fusion, according to the authors' proposed atlas ranking criterion, which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.0276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve
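
    The atlas-ranking criterion has a compact numerical form: estimate the joint histogram of target intensities and propagated atlas labels, then compute H(target | atlas) = H(target, atlas) - H(atlas); a lower conditional entropy suggests the atlas labeling explains the target image better. A sketch with numpy on synthetic data; the bin count and array shapes are arbitrary choices, not the paper's settings:

```python
import numpy as np

def conditional_entropy(target, atlas_labels, bins=32):
    """H(T | A) = H(T, A) - H(A), estimated from the joint histogram of
    target intensities T and propagated atlas labels A."""
    joint, _, _ = np.histogram2d(target.ravel(), atlas_labels.ravel(),
                                 bins=bins)
    p = joint / joint.sum()
    nz = p > 0
    h_joint = -np.sum(p[nz] * np.log(p[nz]))
    pa = p.sum(axis=0)            # marginal distribution of the labels
    nz = pa > 0
    h_a = -np.sum(pa[nz] * np.log(pa[nz]))
    return h_joint - h_a

# Rank atlases for one target: lower H(T|A) means a better-fitting atlas.
rng = np.random.default_rng(0)
target = rng.normal(size=(64, 64))
atlases = {f"atlas{i}": rng.integers(0, 5, size=(64, 64)) for i in range(3)}
print(sorted(atlases, key=lambda k: conditional_entropy(target, atlases[k])))
```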

  45. License - AT Atlas | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    License for the AT Atlas database; the license terms might be changed without notice.

  46. License - TP Atlas | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    License for the TP Atlas database; the license terms might be changed without notice.

  47. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    Science.gov (United States)

    Barberis, D.

    2016-09-01

    The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, and run and event metadata. The rapid development of "NoSQL" databases (structured storage services) in the last five years has allowed an extended and complementary usage of traditional relational databases and new structured storage tools, in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by modern storage systems. The trend is towards using the best tool for each kind of data: separating, for example, the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to be orchestrated by specialised services that run on front-end machines and shield the user from the complexity of the data storage infrastructure. This paper describes this technology evolution in the ATLAS database infrastructure and presents a few examples of large database applications that benefit from it.

  48. Optimizing access to conditions data in ATLAS event data processing

    CERN Document Server

    Rinaldi, Lorenzo; The ATLAS collaboration

    2018-01-01

    The processing of ATLAS event data requires access to conditions data which is stored in database systems. This data includes, for example, alignment, calibration, and configuration information that may be characterized by large volumes, diverse content, and/or information which evolves over time as refinements are made in those conditions. Additional layers of complexity are added by the need to provide this information across the worldwide ATLAS computing grid and by the sheer number of simultaneously executing processes on the grid, each demanding a unique set of conditions to proceed. Distributing this data to all the processes that require it in an efficient manner has proven to be an increasing challenge with the growing needs and number of event-wise tasks. In this presentation, we briefly describe the systems in which we have collected information about the use of conditions in event data processing. We then proceed to explain how this information has been used to refine not only reconstruction software ...

  49. Integration of the ATLAS tag database with data management and analysis components

    Energy Technology Data Exchange (ETDEWEB)

    Cranshaw, J; Malon, D [Argonne National Laboratory, Argonne, IL 60439 (United States); Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C [Department of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ, Scotland (United Kingdom)], E-mail: c.nicholson@physics.gla.ac.uk

    2008-07-15

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted.
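
    The dataset-versus-event mismatch that TNT addresses can be shown in miniature: select event identifiers with a relational cut, then regroup them by the file that contains them so that jobs can be driven through the DDM and DA systems file-wise. A self-contained sqlite3 sketch; the table layout, cut variable, and GUID values are invented for illustration:

```python
import sqlite3
from collections import defaultdict

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tag (run INTEGER, event INTEGER, met REAL, guid TEXT);
    INSERT INTO tag VALUES (1, 1, 12.0, 'file-A'), (1, 2, 55.0, 'file-A'),
                           (1, 3, 80.0, 'file-B'), (1, 4,  5.0, 'file-B');
""")

# First-level cut on event-level metadata (the TAG database).
rows = conn.execute(
    "SELECT run, event, guid FROM tag WHERE met > 50.0").fetchall()

# Regroup the selected events by file GUID so that per-dataset jobs can
# be prepared for the DDM and DA systems.
by_file = defaultdict(list)
for run, event, guid in rows:
    by_file[guid].append((run, event))
print(dict(by_file))   # {'file-A': [(1, 2)], 'file-B': [(1, 3)]}
```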

  50. Integration of the ATLAS tag database with data management and analysis components

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C

    2008-01-01

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted.

  51. Evolution of ATLAS conditions data and its management for LHC Run-2

    CERN Document Server

    Boehler, Michael; Formica, Andrea; Gallas, Elizabeth; Radescu, Voica

    2015-01-01

    The ATLAS detector at the LHC consists of several sub-detector systems. Both data taking and Monte Carlo (MC) simulation rely on an accurate description of the detector conditions from every subsystem, such as calibration constants, different scenarios of pile-up and noise conditions, size and position of the beam spot, etc. In order to guarantee database availability for critical online applications during data-taking, two database systems, one for online access and another for all other database access, have been implemented. The long shutdown period has provided the opportunity to review and improve the Run-1 system: revise workflows, include new and innovative monitoring and maintenance tools, and implement a new database instance for Run-2 conditions data. The detector conditions are organized by tag identification strings and managed independently by the different sub-detector experts. The individual tags are then collected and associated into a global conditions tag, assuring synchronization of var...

  12. The ATLAS Wide-Range Database & Application Monitoring

    CERN Document Server

    Vasileva, Petya Tsvetanova; The ATLAS collaboration

    2018-01-01

    In HEP experiments at the LHC the database applications often become complex, reflecting the ever-demanding requirements of the researchers. The ATLAS experiment has several Oracle DB clusters with over 216 database schemas, each with its own set of database objects. To monitor them effectively, we designed a modern and portable application with exceptionally good characteristics. Some of them include: a concise view of the most important DB metrics; top SQL statements based on CPU, executions, block reads, etc.; volume growth plots per schema and DB object type; a database jobs section with signaling for problematic ones; and in-depth analysis in case of contention on data or processes. This contribution also describes the technical aspects of the implementation. The project can be separated into three independent layers. The first layer consists of highly-optimized database objects hiding all complicated calculations. The second layer represents a server providing REST access to the underlying database backend. The th...

  13. License - ChIP-Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ChIP-Atlas License: License to Use This Database. Last updated: 2016/06/24. You may use this database under the license terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is specified in the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. With regard to this database, you are licensed to freely access part or whole of this database

  14. TransAtlasDB: an integrated database connecting expression data, metadata and variants

    Science.gov (United States)

    Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J

    2018-01-01

    Abstract High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The difficulty of accessing such data and interpreting the results can be a major impediment to formulating suitable hypotheses, so an innovative storage solution that addresses limitations such as hard disk storage requirements, efficiency and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of both relational and NoSQL databases for fast and efficient data storage, processing and querying of large datasets from transcript expression analysis with corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amount of data derived from RNAseq analysis, as well as methods of interacting with the database, either via command-line data management workflows, written in Perl, with useful functionalities that simplify the complexity of data storage and manipulation of the massive amounts of data generated from RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species, and will be expanded to include more species groups. Overall TransAtlasDB aims to serve as an accessible repository for the large complex results data files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361
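
    The hybrid relational/NoSQL split described above can be illustrated with a small sketch: flat sample metadata sits in relational tables, while nested per-gene results live as JSON documents. The schema and field names below are invented for illustration and are not TransAtlasDB's actual data model.

        import json, sqlite3

        # Relational side: flat, queryable sample metadata.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE samples (id TEXT PRIMARY KEY, species TEXT, tissue TEXT)")
        # Document side: nested analysis results stored as JSON payloads.
        db.execute("CREATE TABLE results (sample_id TEXT, doc TEXT)")

        db.execute("INSERT INTO samples VALUES ('S1', 'chicken', 'liver')")
        doc = {"gene": "ACTB", "fpkm": 812.3,
               "variants": [{"pos": 1204, "effect": "synonymous"}]}
        db.execute("INSERT INTO results VALUES ('S1', ?)", (json.dumps(doc),))

        # The relational side answers the metadata query; the document side
        # carries the nested payload that would be awkward to flatten.
        query = "SELECT doc FROM results JOIN samples ON sample_id = id WHERE tissue = ?"
        for (payload,) in db.execute(query, ("liver",)):
            print(json.loads(payload)["gene"])  # ACTB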

  15. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection.

    Science.gov (United States)

    Zhuang, Xiahai; Bai, Wenjia; Song, Jingjing; Zhan, Songhua; Qian, Xiaohua; Shi, Wenzhe; Lian, Yanyun; Rueckert, Daniel

    2015-07-01

    Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance is limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and the size of the atlas database on the segmentation performance. Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases was selected for label fusion, according to the authors' proposed atlas ranking criterion, which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve with higher WHS Dice scores compared to the conventional schemes. With the proposed atlas ranking algorithm and joint label fusion, the MAS scheme is able to generate accurate segmentation
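
    The ranking criterion named above, the conditional entropy of the target image given a propagated atlas labeling, can be computed from a joint histogram. A minimal sketch follows, assuming images as NumPy arrays; the binning choice and synthetic data are illustrative only, not the paper's exact procedure.

        import numpy as np

        def conditional_entropy(target, atlas_labels, bins=32):
            """H(target | atlas_labels) in bits, from a joint histogram."""
            joint, _, _ = np.histogram2d(target.ravel(), atlas_labels.ravel(), bins=bins)
            p_joint = joint / joint.sum()
            p_label = p_joint.sum(axis=0)            # marginal over the labeling
            nz = p_joint > 0
            h_joint = -np.sum(p_joint[nz] * np.log2(p_joint[nz]))
            h_label = -np.sum(p_label[p_label > 0] * np.log2(p_label[p_label > 0]))
            return h_joint - h_label                 # H(T,A) = H(T) given A: H(T,A) - H(A)

        rng = np.random.default_rng(0)
        target = rng.normal(size=(64, 64))
        good_atlas = (target > 0).astype(float)      # labeling that explains the target
        bad_atlas = rng.integers(0, 2, size=(64, 64)).astype(float)

        # Lower conditional entropy means a better atlas; sort ascending to rank.
        ranking = sorted([("good", conditional_entropy(target, good_atlas)),
                          ("bad", conditional_entropy(target, bad_atlas))],
                         key=lambda pair: pair[1])
        print(ranking)  # the "good" atlas ranks first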

  16. Evolution of ATLAS conditions data and its management for LHC Run-2

    International Nuclear Information System (INIS)

    Böhler, Michael; Borodin, Mikhail; Formica, Andrea; Gallas, Elizabeth; Radescu, Voica

    2015-01-01

    The ATLAS detector at the LHC consists of several sub-detector systems. Both data taking and Monte Carlo (MC) simulation rely on an accurate description of the detector conditions from every subsystem, such as calibration constants, different scenarios of pile-up and noise conditions, size and position of the beam spot, etc. In order to guarantee database availability for critical online applications during data-taking, two database systems, one for online access and another one for all other database access, have been implemented. The long shutdown period has provided the opportunity to review and improve the Run-1 system: revise workflows, include new and innovative monitoring and maintenance tools and implement a new database instance for Run-2 conditions data. The detector conditions are organized by tag identification strings and managed independently by the different sub-detector experts. The individual tags are then collected and associated into a global conditions tag, assuring synchronization of various sub-detector improvements. Furthermore, a new concept was introduced to maintain conditions over all different data run periods in a single tag, by using Interval of Validity (IOV) dependent detector conditions for the MC database as well. This allows on-the-fly preservation of past conditions for data and MC and assures their sustainability with software evolution. This paper presents an overview of the commissioning of the new database instance, improved tools and workflows, and summarizes the actions taken during the Run-2 commissioning phase in the beginning of 2015. (paper)
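
    The tag hierarchy and IOV mechanism described here can be sketched compactly: a global tag maps each conditions folder to a sub-detector tag, and each payload within a folder is valid over an interval of runs. All names, run numbers and payloads below are invented; the real system is implemented in COOL on Oracle.

        # Global tag -> per-folder (sub-detector) tags; names are invented.
        GLOBAL_TAGS = {
            "CONDBR2-EXAMPLE-01": {"/PIXEL/Align": "PixAlign-RUN2-v1",
                                   "/LAR/Noise": "LArNoise-RUN2-v3"},
        }
        # Payloads per (folder, tag), valid over half-open run intervals [since, until).
        FOLDERS = {
            ("/PIXEL/Align", "PixAlign-RUN2-v1"): [(0, 266000, "align-A"),
                                                   (266000, 999999, "align-B")],
            ("/LAR/Noise", "LArNoise-RUN2-v3"): [(0, 999999, "noise-v3")],
        }

        def resolve(global_tag, folder, run):
            leaf_tag = GLOBAL_TAGS[global_tag][folder]   # global -> leaf tag
            for since, until, payload in FOLDERS[(folder, leaf_tag)]:
                if since <= run < until:                 # IOV containment test
                    return payload
            raise LookupError(f"no IOV covers run {run}")

        print(resolve("CONDBR2-EXAMPLE-01", "/PIXEL/Align", 267000))  # align-B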

  17. Verification of ICESat-2/ATLAS Science Receiver Algorithm Onboard Databases

    Science.gov (United States)

    Carabajal, C. C.; Saba, J. L.; Leigh, H. W.; Magruder, L. A.; Urban, T. J.; Mcgarry, J.; Schutz, B. E.

    2013-12-01

    NASA's ICESat-2 mission will fly the Advanced Topographic Laser Altimetry System (ATLAS) instrument on a 3-year mission scheduled to launch in 2016. ATLAS is a single-photon detection system transmitting at 532 nm with a laser repetition rate of 10 kHz, and a 6-spot pattern on the Earth's surface. A set of onboard Receiver Algorithms will perform signal processing to reduce the data rate and data volume to acceptable levels. These Algorithms distinguish surface echoes from the background noise, limit the daily data volume, and allow the instrument to telemeter only a small vertical region about the signal. For this purpose, three onboard databases are used: a Surface Reference Map (SRM), a Digital Elevation Model (DEM), and Digital Relief Maps (DRMs). The DEM provides minimum and maximum heights that limit the signal search region of the onboard algorithms, including a margin for errors in the source databases, and onboard geolocation. Since the surface echoes will be correlated while noise will be randomly distributed, the signal location is found by histogramming the received event times and identifying the histogram bins with statistically significant counts. Once the signal location has been established, the onboard Digital Relief Maps (DRMs) will be used to determine the vertical width of the telemetry band about the signal. University of Texas-Center for Space Research (UT-CSR) is developing the ICESat-2 onboard databases, which are currently being tested using preliminary versions and equivalent representations of elevation ranges and relief more recently developed at Goddard Space Flight Center (GSFC). Global and regional elevation models have been assessed in terms of their accuracy using ICESat geodetic control, and have been used to develop equivalent representations of the onboard databases for testing against the UT-CSR databases, with special emphasis on the ice sheet regions. A series of verification checks have been implemented, including...
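
    The onboard signal-finding step described here (histogram the received photon events within the DEM-bounded window, then keep only bins with statistically significant counts) is easy to sketch. The numbers below are synthetic and the significance threshold is a simple illustrative choice, not the flight algorithm's actual statistic.

        import numpy as np

        rng = np.random.default_rng(1)
        # Synthetic photon heights (m): uniform background plus a surface echo near 1250 m.
        events = np.concatenate([rng.uniform(1000, 2000, 500),
                                 rng.normal(1250, 2.0, 300)])

        dem_min, dem_max = 1100, 1400   # DEM-derived search window, with margin
        in_window = events[(events >= dem_min) & (events <= dem_max)]

        counts, edges = np.histogram(in_window, bins=60)
        mean_bg = counts.mean()
        # Correlated surface returns pile up in a few bins; random noise does not.
        signal_bins = np.where(counts > mean_bg + 5 * np.sqrt(mean_bg))[0]
        if signal_bins.size:
            lo, hi = edges[signal_bins.min()], edges[signal_bins.max() + 1]
            print(f"signal located between {lo:.1f} m and {hi:.1f} m")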

  18. Atlas of Iberian water beetles (ESACIB database).

    Science.gov (United States)

    Sánchez-Fernández, David; Millán, Andrés; Abellán, Pedro; Picazo, Félix; Carbonell, José A; Ribera, Ignacio

    2015-01-01

    The ESACIB ('EScarabajos ACuáticos IBéricos') database is provided, including all available distributional data of Iberian and Balearic water beetles from the literature up to 2013, as well as from museum and private collections, PhD theses, and other unpublished sources. The database contains 62,015 records with associated geographic data (10×10 km UTM squares) for 488 species and subspecies of water beetles, 120 of them endemic to the Iberian Peninsula and eight to the Balearic Islands. This database was used for the elaboration of the "Atlas de los Coleópteros Acuáticos de España Peninsular". In this dataset, data for 15 additional species have been added: 11 that occur in the Balearic Islands or mainland Portugal but not in peninsular Spain, and four more with mainly terrestrial habits within the genus Helophorus (included for taxonomic coherence). The complete dataset is provided in Darwin Core Archive format.

  19. Atlas of Iberian water beetles (ESACIB database)

    Science.gov (United States)

    Sánchez-Fernández, David; Millán, Andrés; Abellán, Pedro; Picazo, Félix; Carbonell, José A.; Ribera, Ignacio

    2015-01-01

    Abstract The ESACIB (‘EScarabajos ACuáticos IBéricos’) database is provided, including all available distributional data of Iberian and Balearic water beetles from the literature up to 2013, as well as from museum and private collections, PhD theses, and other unpublished sources. The database contains 62,015 records with associated geographic data (10×10 km UTM squares) for 488 species and subspecies of water beetles, 120 of them endemic to the Iberian Peninsula and eight to the Balearic Islands. This database was used for the elaboration of the “Atlas de los Coleópteros Acuáticos de España Peninsular”. In this dataset, data for 15 additional species have been added: 11 that occur in the Balearic Islands or mainland Portugal but not in peninsular Spain, and four more with mainly terrestrial habits within the genus Helophorus (included for taxonomic coherence). The complete dataset is provided in Darwin Core Archive format. PMID:26448717

  20. ATLAS diamond Beam Condition Monitor

    CERN Document Server

    Gorišek, A; Dolenc, I; Frais-Kölbl, H; Griesmayer, E; Kagan, H; Korpar, S; Kramberger, G; Mandic, I; Meyer, M; Mikuz, M; Pernegger, H; Smith, S; Trischuk, W; Weilhammer, P; Zavrtanik, M

    2007-01-01

    The ATLAS experiment has chosen to use diamond for its Beam Condition Monitor (BCM) given its radiation hardness, low capacitance and short charge collection time. In addition, due to low leakage current diamonds do not require cooling. The ATLAS Beam Condition Monitoring system is based on single beam bunch crossing measurements rather than integrating the accumulated particle flux. Its fast electronics will allow separation of LHC collisions from background events such as beam gas interactions or beam accidents. There will be two stations placed symmetrically about the interaction point along the beam axis at z = ±183.8 cm. Timing of signals from the two stations will provide almost ideal separation of beam–beam interactions and background events. The ATLAS BCM module consists of diamond pad detectors of 1 cm² area and 500 µm thickness coupled to a two-stage RF current amplifier. The production of the final detector modules is almost done. A S/N ratio of 10:1 has been achieved with minimum ionizing particles (MIPs) in the test bea...

  1. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    International Nuclear Information System (INIS)

    Viegas, F; Nairz, A; Goossens, L; Malon, D; Cranshaw, J; Dimitrov, G; Nowak, M; Gamboa, C; Gallas, E; Wong, A; Vinek, E

    2010-01-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6 TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production make this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; and controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  2. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration

    2015-01-01

    The ATLAS experiment has for many years used a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, and run and event metadata. The rapid development of “NoSQL” databases (structured storage services) in the last five years has allowed an extended and complementary usage of traditional relational databases and new structured storage tools, in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by modern storage systems. The trend is towards using the best tool for each kind of data, separating, for example, the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to...

  3. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00064378; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment has for many years used a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, and run and event metadata. The rapid development of “NoSQL” databases (structured storage services) in the last five years has allowed an extended and complementary usage of traditional relational databases and new structured storage tools, in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by modern storage systems. The trend is towards using the best tool for each kind of data, separating, for example, the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to...

  4. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    Energy Technology Data Exchange (ETDEWEB)

    Viegas, F; Nairz, A; Goossens, L [CERN, CH-1211 Geneve 23 (Switzerland); Malon, D; Cranshaw, J [Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Dimitrov, G [DESY, D-22603 Hamburg (Germany); Nowak, M; Gamboa, C [Brookhaven National Laboratory, PO Box 5000 Upton, NY 11973-5000 (United States); Gallas, E [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Wong, A [Triumf, 4004 Wesbrook Mall, Vancouver, BC, V6T 2A3 (Canada); Vinek, E [University of Vienna, Dr.-Karl-Lueger-Ring 1, 1010 Vienna (Austria)

    2010-04-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6 TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production make this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; and controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  5. RiceAtlas, a spatial database of global rice calendars and production

    NARCIS (Netherlands)

    Laborte, Alice G.; Gutierrez, Mary Anne; Balanza, Jane Girly; Saito, Kazuki; Zwart, Sander; Boschetti, Mirco; Murty, M. V.R.; Villano, Lorena; Aunario, Jorrel Khalil; Reinke, Russell; Koo, Jawoo; Hijmans, Robert J.; Nelson, Andrew

    2017-01-01

    Knowing where, when, and how much rice is planted and harvested is crucial information for understanding the effects of policy, trade, and global and technological change on food security. We developed RiceAtlas, a spatial database on the seasonal distribution of the world's rice production. It...

  6. ATLAS diamond Beam Condition Monitor

    Energy Technology Data Exchange (ETDEWEB)

    Gorisek, A. [CERN (Switzerland)]. E-mail: andrej.gorisek@cern.ch; Cindro, V. [J. Stefan Institute (Slovenia); Dolenc, I. [J. Stefan Institute (Slovenia); Frais-Koelbl, H. [Fotec (Austria); Griesmayer, E. [Fotec (Austria); Kagan, H. [Ohio State University, OH (United States); Korpar, S. [J. Stefan Institute (Slovenia); Kramberger, G. [J. Stefan Institute (Slovenia); Mandic, I. [J. Stefan Institute (Slovenia); Meyer, M. [CERN (Switzerland); Mikuz, M. [J. Stefan Institute (Slovenia); Pernegger, H. [CERN (Switzerland); Smith, S. [Ohio State University, OH (United States); Trischuk, W. [University of Toronto (Canada); Weilhammer, P. [CERN (Switzerland); Zavrtanik, M. [J. Stefan Institute (Slovenia)

    2007-03-01

    The ATLAS experiment has chosen to use diamond for its Beam Condition Monitor (BCM) given its radiation hardness, low capacitance and short charge collection time. In addition, due to low leakage current diamonds do not require cooling. The ATLAS Beam Condition Monitoring system is based on single beam bunch crossing measurements rather than integrating the accumulated particle flux. Its fast electronics will allow separation of LHC collisions from background events such as beam gas interactions or beam accidents. There will be two stations placed symmetrically about the interaction point along the beam axis at z = ±183.8 cm. Timing of signals from the two stations will provide almost ideal separation of beam-beam interactions and background events. The ATLAS BCM module consists of diamond pad detectors of 1 cm² area and 500 µm thickness coupled to a two-stage RF current amplifier. The production of the final detector modules is almost done. A S/N ratio of 10:1 has been achieved with minimum ionizing particles (MIPs) in the test beam setup at KEK. Results from the test beams and bench measurements are presented.

  7. ATLAS diamond Beam Condition Monitor

    International Nuclear Information System (INIS)

    Gorisek, A.; Cindro, V.; Dolenc, I.; Frais-Koelbl, H.; Griesmayer, E.; Kagan, H.; Korpar, S.; Kramberger, G.; Mandic, I.; Meyer, M.; Mikuz, M.; Pernegger, H.; Smith, S.; Trischuk, W.; Weilhammer, P.; Zavrtanik, M.

    2007-01-01

    The ATLAS experiment has chosen to use diamond for its Beam Condition Monitor (BCM) given its radiation hardness, low capacitance and short charge collection time. In addition, due to low leakage current diamonds do not require cooling. The ATLAS Beam Condition Monitoring system is based on single beam bunch crossing measurements rather than integrating the accumulated particle flux. Its fast electronics will allow separation of LHC collisions from background events such as beam gas interactions or beam accidents. There will be two stations placed symmetrically about the interaction point along the beam axis at z = ±183.8 cm. Timing of signals from the two stations will provide almost ideal separation of beam-beam interactions and background events. The ATLAS BCM module consists of diamond pad detectors of 1 cm² area and 500 μm thickness coupled to a two-stage RF current amplifier. The production of the final detector modules is almost done. A S/N ratio of 10:1 has been achieved with minimum ionizing particles (MIPs) in the test beam setup at KEK. Results from the test beams and bench measurements are presented.

  8. Operation and Performance of the ATLAS Muon Spectrometer Databases during 2011-12 Data Taking

    CERN Document Server

    Verducci, Monica

    2014-01-01

    The size and complexity of the ATLAS experiment at the Large Hadron Collider, including its Muon Spectrometer, raise unprecedented challenges in terms of operation, software model and data management. One of the challenging tasks is the storage of non-event data produced by the calibration and alignment stream processes and by online and offline monitoring frameworks, which can unveil problems in the detector hardware and in the data processing chain. During the 2011 and 2012 data taking, the software model and data processing enabled high-quality track resolution, as a better understanding of the detector performance was developed using the most reliable detector simulation and reconstruction. This work summarises the various aspects of the Muon Spectrometer Databases, with particular emphasis given to the Conditions Databases and their usage in the data analysis.

  9. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Document Server

    Dykstra, David

    2012-01-01

    One of the main attractions of non-relational "NoSQL" databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also has high scalability and wide-area distributability for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.
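
    The caching approach that lets a traditional SQL backend serve very many distributed readers can be illustrated with a toy read-through cache. This is a sketch of the general idea only; Frontier itself is an HTTP-based system using squid proxies, not the Python class below.

        import time

        class ReadThroughCache:
            """Toy read-through cache: repeated identical queries are served
            from the cache; only a miss or an expired entry reaches the
            backend database."""
            def __init__(self, backend, ttl_seconds=600):
                self.backend, self.ttl = backend, ttl_seconds
                self.store = {}              # query -> (expiry, result)

            def query(self, sql):
                hit = self.store.get(sql)
                if hit and hit[0] > time.time():
                    return hit[1]            # fresh cached copy
                result = self.backend(sql)   # single backend round trip
                self.store[sql] = (time.time() + self.ttl, result)
                return result

        calls = []
        def backend(sql):
            calls.append(sql)
            return f"rows for {sql!r}"

        cache = ReadThroughCache(backend)
        for _ in range(1000):                # a thousand "grid jobs" reading
            cache.query("SELECT payload FROM conditions WHERE tag = 'X'")
        print(len(calls))                    # 1: the database saw one query

    Conditions data suit this pattern well because a published payload rarely changes, so long cache lifetimes are safe for most queries.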

  10. Lecture 9: Oracle Databases at CERN

    CERN Multimedia

    CERN. Geneva; Limper, Maaike

    2013-01-01

    She participated in the analysis of the first LHC data in a variety of ways: she worked on the construction of the ATLAS silicon tracker, wrote new data reconstruction software and developed some of the databases that store information on the ATLAS data-taking conditions. As of January 2012, Maaike joined the CERN IT Databases group as a CERN openlab Fellow funded by Oracle to help investigate the possib...

  11. Designing a future Conditions Database based on LHC experience

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00064378; Formica, Andrea; Gallas, Elizabeth; Lehmann Miotto, Giovanna; Pfeiffer, A.; Govi, G.

    2015-01-01

    We describe a proposal for a new Conditions Database infrastructure that ATLAS and CMS (or other) experiments could use starting on the timescale of Run 3. This proposal is based on the experience that both experiments accumulated during Run 1. We will present the identified relevant data flows for conditions data and underline the common use cases that lead to a joint effort for the development of a new system. Conditions data are needed in any scientific experiment. It includes any ancillary data associated with primary data taking such as detector configuration, state or calibration or the environment in which the detector is operating. In any non-trivial experiment, conditions data typically reside outside the primary data store for various reasons (size, complexity or availability) and are best accessed at the point of processing or analysis (including for Monte Carlo simulations). The ability of any experiment to produce correct and timely results depends on the complete and efficient availability of ne...

  12. Designing a future Conditions Database based on LHC experience

    CERN Document Server

    Formica, Andrea; The ATLAS collaboration; Gallas, Elizabeth; Govi, Giacomo; Lehmann Miotto, Giovanna; Pfeiffer, Andreas

    2015-01-01

    The ATLAS and CMS Conditions Database infrastructures have served each of the respective experiments well through LHC Run 1, providing efficient access to a wide variety of conditions information needed in online data taking and offline processing and analysis. During the long shutdown between Run 1 and Run 2, we have taken various measures to improve our systems for Run 2. In some cases, a drastic change was not possible because of the relatively short time scale to prepare for Run 2. In this process, and in the process of comparing to the systems used by other experiments, we realized that for Run 3, we should consider more fundamental changes and possibilities. We seek changes which would streamline conditions data management, improve monitoring tools, better integrate the use of metadata, incorporate analytics to better understand conditions usage, as well as investigate fundamental changes in the storage technology, which might be more efficient while minimizing maintenance of the data as well as simplif...

  13. COOL, LCG Conditions Database for the LHC Experiments Development and Deployment Status

    CERN Document Server

    Valassi, A; Clemencic, M; Pucciani, G; Schmidt, S A; Wache, M; CERN. Geneva. IT Department, DM

    2009-01-01

    The COOL project provides common software components and tools for the handling of the conditions data of the LHC experiments. It is part of the LCG Persistency Framework (PF), a broader project set up within the context of the LCG Application Area (AA) to devise common persistency solutions for the LHC experiments. COOL software development is the result of the collaboration between the CERN IT Department and ATLAS and LHCb, the two experiments that have chosen it as the basis of their conditions database infrastructure. COOL supports conditions data persistency using several relational technologies (Oracle, MySQL, SQLite and FroNTier), based on the CORAL Common Relational Abstraction Layer. For both experiments, Oracle is the backend used for the deployment of COOL database services at Tier0 and Tier1 sites of the LHC Computing Grid. While the development of new software functionalities is being frozen as LHC operations are ramping up, the main focus for the project in 2008 has shifted to performance optimi...

  14. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Science.gov (United States)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  15. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2012-01-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  16. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, Dave [Fermilab

    2012-07-20

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  17. The ATLAS Beam Conditions Monitor

    International Nuclear Information System (INIS)

    Cindro, V; Dolenc, I; Kramberger, G; Macek, B; Mandic, I; Mikuž, M; Zavrtanik, M; Dobos, D; Gorisek, A; Pernegger, H; Weilhammer, P; Frais-Koelbl, H; Griesmayer, E; Niegl, M; Kagan, H; Tardif, D; Trischuk, W

    2008-01-01

    Beam conditions and the potential detector damage resulting from their anomalies have pushed the LHC experiments to build their own beam monitoring devices. The ATLAS Beam Conditions Monitor (BCM) consists of two stations (forward and backward) of detectors each with four modules. The sensors are required to tolerate doses up to 500 kGy and in excess of 10¹⁵ charged particles per cm² over the lifetime of the experiment. Each module includes two diamond sensors read out in parallel. The stations are located symmetrically around the interaction point, positioning the diamond sensors at z = ±184 cm and r = 55 mm (a pseudorapidity of about 4.2). Equipped with fast electronics (2 ns rise time) these stations measure time-of-flight and pulse height to distinguish events resulting from lost beam particles from those normally occurring in proton-proton interactions. The BCM also provides a measurement of bunch-by-bunch luminosities in ATLAS by counting in-time and out-of-time collisions. Eleven detector modules have been fully assembled and tested. Tests performed range from characterisation of diamond sensors to full module tests with electron sources and in proton testbeams. Testbeam results from the CERN SPS show a module median signal-to-noise of 11:1 for minimum ionising particles incident at a 45-degree angle. The best eight modules were installed on the ATLAS pixel support frame that was inserted into ATLAS in the summer of 2007. This paper describes the full BCM detector system along with simulation studies being used to develop the logic in the back-end FPGA coincidence hardware.

  18. The ATLAS Beam Conditions Monitor

    Energy Technology Data Exchange (ETDEWEB)

    Cindro, V; Dolenc, I; Kramberger, G; Macek, B; Mandic, I; Mikuž, M; Zavrtanik, M [Jozef Stefan Institute and Department of Physics, University of Ljubljana, Ljubljana (Slovenia); Dobos, D; Gorisek, A; Pernegger, H; Weilhammer, P [CERN, Geneva (Switzerland); Frais-Koelbl, H; Griesmayer, E; Niegl, M [University of Applied Sciences Wiener Neustadt and Fotec, Wiener Neustadt (Austria); Kagan, H [Ohio State University, Columbus (United States); Tardif, D; Trischuk, W [University of Toronto, Toronto (Canada)], E-mail: william@physics.utoronto.ca

    2008-02-15

    Beam conditions and the potential detector damage resulting from their anomalies have pushed the LHC experiments to build their own beam monitoring devices. The ATLAS Beam Conditions Monitor (BCM) consists of two stations (forward and backward) of detectors each with four modules. The sensors are required to tolerate doses up to 500 kGy and in excess of 10¹⁵ charged particles per cm² over the lifetime of the experiment. Each module includes two diamond sensors read out in parallel. The stations are located symmetrically around the interaction point, positioning the diamond sensors at z = ±184 cm and r = 55 mm (a pseudorapidity of about 4.2). Equipped with fast electronics (2 ns rise time) these stations measure time-of-flight and pulse height to distinguish events resulting from lost beam particles from those normally occurring in proton-proton interactions. The BCM also provides a measurement of bunch-by-bunch luminosities in ATLAS by counting in-time and out-of-time collisions. Eleven detector modules have been fully assembled and tested. Tests performed range from characterisation of diamond sensors to full module tests with electron sources and in proton testbeams. Testbeam results from the CERN SPS show a module median signal-to-noise of 11:1 for minimum ionising particles incident at a 45-degree angle. The best eight modules were installed on the ATLAS pixel support frame that was inserted into ATLAS in the summer of 2007. This paper describes the full BCM detector system along with simulation studies being used to develop the logic in the back-end FPGA coincidence hardware.

  19. Ajax, XSLT and SVG: Displaying ATLAS conditions data with new web technologies

    Energy Technology Data Exchange (ETDEWEB)

    Roe, S A, E-mail: shaun.roe@cern.c [CERN, CH-1211 Geneve 23 (Switzerland)

    2010-04-01

    The combination of three relatively recent technologies is described which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner which directly returns an XML description of the result, and Ajax techniques (Asynchronous JavaScript And XML) are used to dynamically inject the data into a web display, accompanied by an XSLT transform template which determines how the data will be formatted. By tuning the transform to generate SVG (Scalable Vector Graphics), a direct graphical representation can be produced in the web page while retaining the database data as the XML source, allowing dynamic links to be generated in the web representation while still permitting programmatic use of the data from a user application. With the release of the SVG 1.2 Tiny draft specification, the display can also be tailored for display on mobile devices. The technologies are described and a sample application demonstrated, showing conditions data from the ATLAS Semiconductor Tracker.
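
    The XML-to-SVG step of this pipeline can be reproduced offline with a few lines of Python and lxml (in the deployed application the browser applies the stylesheet client-side). The XML structure and the bar-chart stylesheet below are invented for illustration.

        from lxml import etree  # third-party: pip install lxml

        # XML as a database query might return it (structure invented here).
        xml = etree.XML("""<rows>
          <row module="SCT-1" occupancy="0.25"/>
          <row module="SCT-2" occupancy="0.5"/>
        </rows>""")

        # XSLT 1.0 stylesheet turning each row into one SVG bar.
        xslt = etree.XSLT(etree.XML("""
        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/rows">
            <svg xmlns="http://www.w3.org/2000/svg" width="220" height="60">
              <xsl:for-each select="row">
                <rect x="10" y="{(position()-1)*25}" height="20"
                      width="{@occupancy * 200}" fill="steelblue"/>
              </xsl:for-each>
            </svg>
          </xsl:template>
        </xsl:stylesheet>"""))

        print(str(xslt(xml)))  # serialized SVG, ready to inject into the page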

  20. Ajax, XSLT and SVG: Displaying ATLAS conditions data with new web technologies

    International Nuclear Information System (INIS)

    Roe, S A

    2010-01-01

    The combination of three relatively recent technologies is described which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner which directly returns an XML description of the result, and Ajax techniques (Asynchronous JavaScript And XML) are used to dynamically inject the data into a web display, accompanied by an XSLT transform template which determines how the data will be formatted. By tuning the transform to generate SVG (Scalable Vector Graphics), a direct graphical representation can be produced in the web page while retaining the database data as the XML source, allowing dynamic links to be generated in the web representation while still permitting programmatic use of the data from a user application. With the release of the SVG 1.2 Tiny draft specification, the display can also be tailored for display on mobile devices. The technologies are described and a sample application demonstrated, showing conditions data from the ATLAS Semiconductor Tracker.

  1. Ajax, XSLT and SVG: Displaying ATLAS conditions data with new web technologies

    CERN Document Server

    Roe, S A

    2010-01-01

    The combination of three relatively recent technologies is described which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner which directly returns an XML description of the result, and Ajax techniques (Asynchronous JavaScript And XML) are used to dynamically inject the data into a web display, accompanied by an XSLT transform template which determines how the data will be formatted. By tuning the transform to generate SVG (Scalable Vector Graphics), a direct graphical representation can be produced in the web page while retaining the database data as the XML source, allowing dynamic links to be generated in the web representation while still permitting programmatic use of the data from a user application. With the release of the SVG 1.2 Tiny draft specification, the display can also be tailored for display on mobile devices. The technologies are described and a sample application demonstrated, showing conditions data from the ATLAS Sem...

  2. AtlasT4SS: a curated database for type IV secretion systems.

    Science.gov (United States)

    Souza, Rangel C; del Rosario Quispe Saji, Guadalupe; Costa, Maiana O C; Netto, Diogo S; Lima, Nicholas C B; Klein, Cecília C; Vasconcelos, Ana Tereza R; Nicolás, Marisa F

    2012-08-09

    The type IV secretion system (T4SS) can be classified as a large family of macromolecule transporter systems, divided into three recognized sub-families according to their well-known functions. The major sub-family is the conjugation system, which allows transfer of genetic material, such as a nucleoprotein, via cell contact among bacteria. Also, the conjugation system can transfer genetic material from bacteria to eukaryotic cells; such is the case with the T-DNA transfer of Agrobacterium tumefaciens to host plant cells. The system of effector protein transport constitutes the second sub-family, and the third one corresponds to the DNA uptake/release system. Genome analyses have revealed numerous T4SS in Bacteria and Archaea. The purpose of this work was to organize, classify, and integrate the T4SS data into a single database, called AtlasT4SS - the first public database devoted exclusively to this prokaryotic secretion system. The AtlasT4SS is a manually curated database that describes a large number of proteins related to the type IV secretion system reported so far in Gram-negative and Gram-positive bacteria, as well as in Archaea. The database was created using the RDBMS MySQL and the Catalyst framework, based on the Perl programming language and using the Model-View-Controller (MVC) design pattern for the Web. The current version holds a comprehensive collection of 1,617 T4SS proteins from 58 Bacteria (49 Gram-negative and 9 Gram-positive), one archaeon and 11 plasmids. By applying the bi-directional best hit (BBH) relationship in pairwise genome comparison, it was possible to obtain a core set of 134 clusters of orthologous genes encoding T4SS proteins. In our database we present one way of classifying orthologous groups of T4SSs in a hierarchical classification scheme with three levels. The first level comprises four classes that are based on the organization of genetic determinants, shared homologies, and evolutionary relationships: (i) F-T4SS, (ii) P-T4SS, (iii
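
    The bi-directional best hit (BBH) relationship mentioned above is simple to state in code: two genes from different genomes are called orthologs when each is the other's best-scoring hit. The scores below are invented; a real pipeline would use BLAST bit-scores between whole proteomes.

        # Pairwise similarity scores between genes of genome A and genome B (toy values).
        scores_ab = {"a1": {"b1": 95, "b2": 40}, "a2": {"b1": 30, "b2": 88}}
        scores_ba = {"b1": {"a1": 93, "a2": 35}, "b2": {"a1": 42, "a2": 90}}

        def best_hit(scores, gene):
            return max(scores[gene], key=scores[gene].get)

        # Keep (a, b) only if b is a's best hit AND a is b's best hit.
        bbh_pairs = [(a, best_hit(scores_ab, a)) for a in scores_ab
                     if best_hit(scores_ba, best_hit(scores_ab, a)) == a]
        print(bbh_pairs)  # [('a1', 'b1'), ('a2', 'b2')]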

  3. A JEE RESTful service to access Conditions Data in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081940; Gallas, Elizabeth

    2015-01-01

    Usage of Conditions Data in ATLAS is extensive for offline reconstruction and analysis (e.g.: alignment, calibration, data quality). The system is based on the LCG Conditions Database infrastructure, with read and write access via an ad hoc C++ API (COOL), a system which was developed before Run 1 data taking began. The infrastructure dictates that the data is organized into separate schemata (assigned to subsystems/groups storing distinct and independent sets of conditions), making it difficult to access information from several schemata at the same time. We have thus created PL/SQL functions containing queries to provide content extraction at multi-schema level. The PL/SQL API has been exposed to external clients by means of a Java application providing DB access via RESTful services, deployed inside an application server (JBoss WildFly). The services allow navigation over multiple schemata via simple URLs. The data can be retrieved either in XML or JSON formats, via simple clients (like curl or Web browser...

  4. A JEE RESTful service to access Conditions Data in ATLAS

    CERN Document Server

    Formica, Andrea; The ATLAS collaboration

    2015-01-01

    Usage of Conditions Data in ATLAS is extensive for offline reconstruction and analysis (for example: alignment, calibration, data quality). The system is based on the LCG Conditions Database infrastructure, with read and write access via an ad hoc C++ API (COOL), a system which was developed before Run 1 data taking began. The infrastructure dictates that the data is organized into separate schemas (assigned to subsystems/groups storing distinct and independent sets of conditions), making it difficult to access information from several schemas at the same time. We have thus created PL/SQL functions containing queries to provide content extraction at multi-schema level. The PL/SQL API has been exposed to external clients by means of an intermediate java application server (JBoss), where an application delivering access to the DB via RESTful services has been deployed. The services allow navigation over multiple schema content, via simple URLs. The queried data can be retrieved either in XML or JSON formats, vi...

  5. A JEE RESTful service to access Conditions Data in ATLAS

    Science.gov (United States)

    Formica, Andrea; Gallas, E. J.

    2015-12-01

    Usage of conditions data in ATLAS is extensive for offline reconstruction and analysis (e.g. alignment, calibration, data quality). The system is based on the LCG Conditions Database infrastructure, with read and write access via an ad hoc C++ API (COOL), a system which was developed before Run 1 data taking began. The infrastructure dictates that the data is organized into separate schemas (assigned to subsystems/groups storing distinct and independent sets of conditions), making it difficult to access information from several schemas at the same time. We have thus created PL/SQL functions containing queries to provide content extraction at multi-schema level. The PL/SQL API has been exposed to external clients by means of a Java application providing DB access via REST services, deployed inside an application server (JBoss WildFly). The services allow navigation over multiple schemas via simple URLs. The data can be retrieved either in XML or JSON formats, via simple clients (like curl or Web browsers).
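
    Access from a simple client works as the abstract describes: the same resource can be requested as JSON or XML through content negotiation. The endpoint URL and response fields below are hypothetical stand-ins, since the real service URLs are internal to ATLAS; the equivalent curl call would pass the same Accept header.

        import requests  # third-party: pip install requests

        BASE = "https://conditions.example.cern.ch/api"  # hypothetical endpoint

        # Ask for JSON; "application/xml" would return the XML rendering instead.
        resp = requests.get(f"{BASE}/iovs",
                            params={"schema": "PIXEL", "tag": "PixAlign-RUN2-v1"},
                            headers={"Accept": "application/json"},
                            timeout=10)
        resp.raise_for_status()
        for iov in resp.json().get("iovs", []):   # field names are assumptions
            print(iov["since"], iov["until"], iov["payload"])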

  6. Technical and organizational considerations for the long-term maintenance and development of digital brain atlases and web-based databases.

    Science.gov (United States)

    Ito, Kei

    2010-01-01

    A digital brain atlas is a kind of image database that specifically provides information about neurons and glial cells in the brain. It has various advantages that are unmatched by conventional paper-based atlases. Such advantages, however, may become disadvantages if appropriate care is not taken. Because digital atlases can provide an unlimited amount of data, they should be designed to minimize redundancy and keep the records consistent, as these may be added incrementally by different staff members. The fact that digital atlases can easily be revised necessitates a system to assure that users can access previous versions that might have been cited in papers at a particular period. To pass our knowledge on to our descendants, such databases should be maintained for a very long period, well over 100 years, like printed books and papers. Technical and organizational measures to enable long-term archiving should be considered seriously. Compared to the initial development of the database, subsequent efforts to increase the quality and quantity of its contents are not regarded highly, because such tasks do not materialize in the form of publications. This fact strongly discourages continuous expansion of, and external contributions to, digital atlases after their initial launch. To solve these problems, the role of the biocurators is vital. Appreciation of the scientific achievements of people who do not write papers, and establishment of a secure academic career path for them, are indispensable for recruiting talent for this very important job.

  7. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    Clemencic, M

    2008-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG-developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications have been using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here
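
    The replica-selection idea, running the job only where both the event data and a synchronized conditions replica are present, can be shown in a few lines. Site names and the catalogue structure are invented for the sketch; the real system reads this information from the LFC through CORAL.

        # Tier-1 sites hosting a synchronized conditions replica (invented names).
        CONDITIONS_REPLICAS = {"CERN", "CNAF", "GRIDKA"}
        # Sites holding each event data file, as a file catalogue would report.
        EVENT_DATA_SITES = {"file-123": ["CNAF", "IN2P3"]}

        def eligible_sites(lfn):
            """Sites where the job sees local data AND a local conditions replica."""
            return [s for s in EVENT_DATA_SITES[lfn] if s in CONDITIONS_REPLICAS]

        print(eligible_sites("file-123"))  # ['CNAF'] -> submit the job there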

  8. Metadata aided run selection at ATLAS

    International Nuclear Information System (INIS)

    Buckingham, R M; Gallas, E J; Tseng, J C-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web-based interface called 'runBrowser' makes these Conditions Metadata available as a run-based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections are complete, runBrowser produces a human readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (event level Metadata browser) or other ATLAS run or event processing services.

  9. The Database Driven ATLAS Trigger Configuration System

    CERN Document Server

    Martyniuk, Alex; The ATLAS collaboration

    2015-01-01

    This contribution describes the trigger selection configuration system of the ATLAS low- and high-level trigger (HLT) and the upgrades it received in preparation for LHC Run 2. The ATLAS trigger configuration system is responsible for applying the physics selection parameters for the online data taking at both trigger levels and the proper connection of the trigger lines across those levels. Here the low-level trigger consists of the already existing central trigger (CT) and the new Level-1 Topological trigger (L1Topo), which has been added for Run 2. In detail, the tasks of the configuration system during online data taking are: application of the selection criteria (e.g. energy cuts, minimum multiplicities, trigger object correlation) at the three trigger components L1Topo, CT, and HLT; and on-the-fly (e.g. rate-dependent) generation and application of prescale factors to the CT and HLT to adjust the trigger rates to the data taking conditions, such as falling luminosity or rate spikes in the detector readout ...
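
    Of the tasks listed above, prescaling is the easiest to make concrete: a prescale factor of N keeps one accepted event in every N, scaling a trigger line's rate down by N. The counter-based sketch below is a deterministic stand-in for the rate adjustment described; it is not the actual ATLAS implementation.

        import itertools

        def prescaled(events, prescale):
            """Yield one event in every `prescale` accepted events."""
            counter = itertools.count(1)
            for ev in events:
                if next(counter) % prescale == 0:
                    yield ev

        # With a prescale of 4, a line firing at 1000 Hz is recorded at ~250 Hz.
        accepted = list(prescaled(range(1000), prescale=4))
        print(len(accepted))  # 250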

  10. Editor for Remote Database used in ATLAS Trigger/DAQ

    CERN Document Server

    Meessen, C; Valenta, J

    2006-01-01

    The poster gives a brief summary of the ATLAS T/DAQ system, then introduces the RDB database and describes the RDB Editor application, including its internal structure, GUI features, etc. The RDB Editor is an easy-to-use Java application which allows simple navigation between the huge number of objects stored in the RDB. It supports bookmarks, histories, etc. in the way usual in web browsers. Moreover, it is possible to enhance the application with specialized (graphical) viewers for objects of a particular class, which allow the user to see, for example, details that are hard to spot in the textual view. As an example of such a plug-in, a viewer for the EFD_Configuration class was developed.

  11. Glance Information System for ATLAS Management

    International Nuclear Information System (INIS)

    Grael, F F; Maidantchik, C; Évora, L H R A; Karam, K; Moraes, L O F; Cirilli, M; Nessi, M; Pommès, K

    2011-01-01

    ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers, and computer scientists plus 700 students participate. The management of this teamwork involves several aspects such as institute contribution, employment records, members' appointments, authors' lists, preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible by a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task due to the experiment's long lifetime and professional turnover. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.

  12. Glance Information System for ATLAS Management

    Science.gov (United States)

    Grael, F. F.; Maidantchik, C.; Évora, L. H. R. A.; Karam, K.; Moraes, L. O. F.; Cirilli, M.; Nessi, M.; Pommès, K.; ATLAS Collaboration

    2011-12-01

    ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers, and computer scientists plus 700 students participate. The management of this teamwork involves several aspects such as institute contribution, employment records, members' appointments, authors' lists, preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible by a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task due to the experiment's long lifetime and professional turnover. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.

  13. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment has been taking data steadily since Autumn 2009, collecting close to 1 fb⁻¹ of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the first...

  14. Distributed processing and analysis of ATLAS experimental data

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment has been taking data steadily since Autumn 2009 and has collected over 5 fb-1 of data so far (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the...

  15. Conditions Data Handling In The Multithreaded ATLAS Framework

    CERN Document Server

    Leggett, Charles; The ATLAS collaboration

    2018-01-01

    In preparation for Run 3 of the LHC, the ATLAS experiment is migrating its offline software to use a multithreaded framework, which will allow multiple events to be processed simultaneously. This implies that the handling of non-event, time-dependent (conditions) data, such as calibrations and geometry, must also be extended to allow for multiple versions of such data to exist simultaneously. This has now been implemented as part of the new ATLAS framework. The detector geometry is included in this scheme by having sets of time-dependent displacements on top of a static base geometry.
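
    As an illustration of the idea, the sketch below (plain Python with invented class and field names; not the actual ATLAS framework code) keeps several versions of a conditions payload alive at once, each tagged with an interval of validity, so concurrently processed events with different timestamps resolve different versions:

    from bisect import bisect_right
    from dataclasses import dataclass
    from threading import Lock
    from typing import Any, List

    @dataclass(frozen=True)
    class IOV:
        """Interval of validity: [start, stop) in an abstract time unit."""
        start: int
        stop: int

    class ConditionsContainer:
        """Keeps several versions of one conditions object alive at once,
        keyed by IOV, so threads processing different events can each look
        up the payload valid for their own timestamp."""

        def __init__(self) -> None:
            self._iovs: List[IOV] = []      # kept sorted by IOV start
            self._payloads: List[Any] = []
            self._lock = Lock()

        def insert(self, iov: IOV, payload: Any) -> None:
            with self._lock:
                i = bisect_right([v.start for v in self._iovs], iov.start)
                self._iovs.insert(i, iov)
                self._payloads.insert(i, payload)

        def find(self, timestamp: int) -> Any:
            with self._lock:
                i = bisect_right([v.start for v in self._iovs], timestamp) - 1
                if i >= 0 and self._iovs[i].start <= timestamp < self._iovs[i].stop:
                    return self._payloads[i]
            raise KeyError(f"no conditions valid for t={timestamp}")

    # Two events in flight with different timestamps resolve different versions:
    conditions = ConditionsContainer()
    conditions.insert(IOV(0, 100), {"gain": 1.00})
    conditions.insert(IOV(100, 200), {"gain": 1.02})
    assert conditions.find(42)["gain"] == 1.00
    assert conditions.find(150)["gain"] == 1.02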

  16. The geosystems of complex geographical atlases

    Directory of Open Access Journals (Sweden)

    Jovanović Jasmina

    2012-01-01

    Complex geographical atlases represent geosystems of different hierarchical rank, complexity, diversity, scale and connection. They represent a large set of different pieces of information about geospace, containing systematized, correlated information about space presented in an explicit form. The degree of information conveyed by the atlas is determined by its content structure and form of presentation. The quality of an atlas depends on the method of data visualization and on the quality of the geodata. Cartographic visualization is a cognitive process: analysis converts geospatial data into knowledge. A complex geographical atlas constitutes an information complex, a spatially and temporally coordinated database on geosystems of different complexity and territorial scope. Each geographical atlas defines a concrete geosystem, and its systemic organization (structural and contextual) determines its complexity and concreteness. In complex atlases, the attributes of geosystems are modeled and the information is given in a systematized, graphically unified form. The atlas can therefore be considered a database. In composing a database, semantic analysis of the data is important; the result of semantic modeling is expressed in the structuring of the information, in emphasizing logical connections between phenomena and processes, and in defining their classes according to degree of similarity. This enables efficient retrieval of the needed information when the database is used. An atlas map has a special power to integrate sets of geodata and to present information content in a user-friendly, understandable visual and tactile way. Composing an atlas by systemic cartography requires information on concretely defined geosystems of different hierarchical levels, the application of scientific methods and the making of an adequate number of analytical, synthetic

  17. Renewable Energy Atlas of the United States

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J. [Environmental Science Division]; Hlava, K. [Environmental Science Division]; Greenwood, H. [Environmental Science Division]; Carr, A. [Environmental Science Division]

    2013-12-13

    The Renewable Energy Atlas (Atlas) of the United States is a compilation of geospatial data focused on renewable energy resources, federal land ownership, and base map reference information. This report explains how to add the Atlas to your computer and install the associated software. The report also includes: A description of each of the components of the Atlas; Lists of the Geographic Information System (GIS) database content and sources; and A brief introduction to the major renewable energy technologies. The Atlas includes the following: A GIS database organized as a set of Environmental Systems Research Institute (ESRI) ArcGIS Personal GeoDatabases, and ESRI ArcReader and ArcGIS project files providing an interactive map visualization and analysis interface.

  18. Processing and Quality Monitoring for the ATLAS Tile Hadronic Calorimeter Data

    CERN Document Server

    Burghgrave, Blake; The ATLAS collaboration

    2016-01-01

    We present an overview of Data Processing and Data Quality (DQ) Monitoring for the ATLAS Tile Hadronic Calorimeter. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. Data quality in physics runs is monitored extensively and continuously. Any problems are reported and immediately investigated. The DQ efficiency achieved was 99.6% in 2012 and 100% in 2015, after the detector maintenance in 2013-2014. Changes to detector status or calibrations are entered into the conditions database during a brief calibration loop between when a run ends and bulk processing begins. Bulk processed data is reviewed and certified for the ATLAS Good Run List if no problem is detected. Experts maintain the tools used by DQ shifters and the calibration teams during normal operation, and prepare new conditions for data reprocessing and MC production campaigns. Conditions data are stored in 3 databases: Online DB, Offline DB for data and a special DB for Monte Carlo. Database upd...

  19. ATLAS job monitoring in the Dashboard Framework

    CERN Document Server

    Sargsyan, L; The ATLAS collaboration; Campana, S; Karavakis, E; Kokoszkiewicz, L; Saiz, P; Schovancova, J; Tuckett, D

    2012-01-01

    Monitoring of the large-scale data processing of the ATLAS experiment includes monitoring of production and user analysis jobs. The Experiment Dashboard provides a common job monitoring solution, which is shared by the ATLAS and CMS experiments. This includes an accounting portal as well as real-time monitoring. Dashboard job monitoring for ATLAS combines information from the PanDA job processing database, the Production system database, and monitoring information from jobs submitted through GANGA to the Workload Management System (WMS) or to local batch systems. Usage of Dashboard-based job monitoring applications will decrease the load on the PanDA database and overcome the scale limitations in PanDA monitoring caused by the short job rotation cycle in the PanDA database. Aggregation of the task/job metrics from different sources provides a complete view of job processing activity in the ATLAS scope.

  20. ATLAS job monitoring in the Dashboard Framework

    International Nuclear Information System (INIS)

    Andreeva, J; Campana, S; Karavakis, E; Kokoszkiewicz, L; Saiz, P; Tuckett, D; Sargsyan, L; Schovancova, J

    2012-01-01

    Monitoring of the large-scale data processing of the ATLAS experiment includes monitoring of production and user analysis jobs. The Experiment Dashboard provides a common job monitoring solution, which is shared by the ATLAS and CMS experiments. This includes an accounting portal as well as real-time monitoring. Dashboard job monitoring for ATLAS combines information from the PanDA job processing database, the Production system database, and monitoring information from jobs submitted through GANGA to the Workload Management System (WMS) or to local batch systems. Usage of Dashboard-based job monitoring applications will decrease the load on the PanDA database and overcome the scale limitations in PanDA monitoring caused by the short job rotation cycle in the PanDA database. Aggregation of the task/job metrics from different sources provides a complete view of job processing activity in the ATLAS scope.
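
    A minimal sketch of the aggregation idea follows (plain Python with invented feed names and job records; the real Dashboard sources and schemas differ): partial job records from several monitoring sources are merged into one complete view per job.

    from collections import defaultdict

    # Hypothetical per-source feeds: each maps a global job id to partial metrics.
    panda_jobs = {"job-1": {"status": "running", "site": "CERN-PROD"}}
    prodsys    = {"job-1": {"task": 12345}, "job-2": {"task": 12346}}
    ganga_wms  = {"job-2": {"status": "finished", "backend": "WMS"}}

    def aggregate(*sources):
        """Merge partial job records from several monitoring sources into
        one complete view per job, as a Dashboard-style aggregator might."""
        view = defaultdict(dict)
        for source in sources:
            for job_id, metrics in source.items():
                view[job_id].update(metrics)
        return dict(view)

    print(aggregate(panda_jobs, prodsys, ganga_wms))
    # {'job-1': {'status': 'running', 'site': 'CERN-PROD', 'task': 12345},
    #  'job-2': {'task': 12346, 'status': 'finished', 'backend': 'WMS'}}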

  1. Encoding atlases by randomized classification forests for efficient multi-atlas label propagation.

    Science.gov (United States)

    Zikic, D; Glocker, B; Criminisi, A

    2014-12-01

    We propose a method for multi-atlas label propagation (MALP) based on encoding the individual atlases by randomized classification forests. Most current approaches perform a non-linear registration between all atlases and the target image, followed by a sophisticated fusion scheme. While these approaches can achieve high accuracy, in general they do so at high computational cost. This might negatively affect the scalability to large databases and experimentation. To tackle this issue, we propose to use a small and deep classification forest to encode each atlas individually in reference to an aligned probabilistic atlas, resulting in an Atlas Forest (AF). Our classifier-based encoding differs from current MALP approaches, which represent each point in the atlas either directly as a single image/label value pair, or by a set of corresponding patches. At test time, each AF produces one probabilistic label estimate, and their fusion is done by averaging. Our scheme performs only one registration per target image, achieves good results with a simple fusion scheme, and allows for efficient experimentation. In contrast to standard forest schemes, in which each tree would be trained on all atlases, our approach retains the advantages of the standard MALP framework. The target-specific selection of atlases remains possible, and incorporation of new scans is straightforward without retraining. The evaluation on four different databases shows accuracy within the range of the state of the art at a significantly lower running time. Copyright © 2014 Elsevier B.V. All rights reserved.
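
    The fusion step described above reduces to averaging the per-atlas probability maps and taking the most probable label per voxel; a toy sketch (NumPy, with made-up shapes and random data, not the authors' code) is:

    import numpy as np

    def fuse_atlas_forests(prob_maps):
        """Fuse the probabilistic label estimates of several atlas forests
        by simple averaging, then pick the most probable label per voxel.

        prob_maps: list of arrays of shape (n_voxels, n_labels), one per
        atlas forest, each row a probability distribution over labels.
        """
        mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
        return mean_prob.argmax(axis=1)  # hard label per voxel

    # Toy example: three forests, four voxels, two labels.
    rng = np.random.default_rng(0)
    maps = [rng.dirichlet([1, 1], size=4) for _ in range(3)]
    print(fuse_atlas_forests(maps))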

  2. The Next Generation ATLAS Production System

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; Golubkov, Dmitry; Klimentov, Alexei; Maeno, Tadashi; Mashinistov, Ruslan; Vaniachine, Alexandre

    2015-01-01

    The data processing and simulation of the ATLAS experiment at the LHC grow continuously as more data and more use cases emerge. For data processing, the ATLAS experiment adopted the data transformation approach, in which software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, dynamically submitted by the ATLAS workload management system (PanDA/JEDI) and executed on the Grid, clouds and supercomputers. Patterns in ATLAS data transformation workflows composed of many tasks provided a scalable production system framework for template definitions of many-task workflows. The user interface and system logic of these workflows are being implemented in the Database Engine for Tasks (DEFT). Such development required the use of modern computing technologies and approaches. We report technical details of this development: the database implementation, the server logic and the Web user interface technologies.

  3. Atlas – a data warehouse for integrative bioinformatics

    Directory of Open Access Journals (Sweden)

    Yuen Macaire MS

    2005-02-01

    Background: We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure, for bioinformatics research and development. Description: The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion: The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data, enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First

  4. Processing and Quality Monitoring for the ATLAS Tile Hadronic Calorimeter Data

    CERN Document Server

    Burghgrave, Blake; The ATLAS collaboration

    2017-01-01

    An overview is presented of Data Processing and Data Quality (DQ) Monitoring for the ATLAS Tile Hadronic Calorimeter. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. Data quality in physics runs is monitored extensively and continuously. Any problems are reported and immediately investigated. The DQ efficiency achieved was 99.6% in 2012 and 100% in 2015, after the detector maintenance in 2013-2014. Changes to detector status or calibrations are entered into the conditions database (DB) during a brief calibration loop between the end of a run and the beginning of bulk processing of data collected in it. Bulk processed data are reviewed and certified for the ATLAS Good Run List if no problem is detected. Experts maintain the tools used by DQ shifters and the calibration teams during normal operation, and prepare new conditions for data reprocessing and Monte Carlo (MC) production campaigns. Conditions data are stored in 3 databases: Online DB, Offline D...

  5. Mindboggle: Automated brain labeling with multiple atlases

    International Nuclear Information System (INIS)

    Klein, Arno; Mensh, Brett; Ghosh, Satrajit; Tourville, Jason; Hirsch, Joy

    2005-01-01

    To make inferences about brain structures or activity across multiple individuals, one first needs to determine the structural correspondences across their image data. We have recently developed Mindboggle as a fully automated, feature-matching approach to assign anatomical labels to cortical structures and activity in human brain MRI data. Label assignment is based on structural correspondences between labeled atlases and unlabeled image data, where an atlas consists of a set of labels manually assigned to a single brain image. In the present work, we study the influence of using variable numbers of individual atlases to nonlinearly label human brain image data. Each brain image voxel of each of 20 human subjects is assigned a label by each of the remaining 19 atlases using Mindboggle. The most common label is selected and is given a confidence rating based on the number of atlases that assigned that label. The automatically assigned labels for each subject brain are compared with the manual labels for that subject (its atlas). Unlike recent approaches that transform subject data to a labeled, probabilistic atlas space (constructed from a database of atlases), Mindboggle labels a subject by each atlas in a database independently. When Mindboggle labels a human subject's brain image with at least four atlases, the resulting label agreement with coregistered manual labels is significantly higher than when only a single atlas is used. Different numbers of atlases provide significantly higher label agreements for individual brain regions. Increasing the number of reference brains used to automatically label a human subject brain improves labeling accuracy with respect to manually assigned labels. Mindboggle software can provide confidence measures for labels based on probabilistic assignment of labels and could be applied to large databases of brain images
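
    The label selection and confidence rating described above amount to a per-voxel majority vote; a toy sketch (plain Python, with illustrative label names only) is:

    from collections import Counter

    def majority_label(votes):
        """Majority vote over the labels assigned to one voxel by N atlases,
        with a confidence equal to the fraction of atlases that agree."""
        (label, count), = Counter(votes).most_common(1)
        return label, count / len(votes)

    # 19 atlases label one voxel; most say "precentral".
    votes = ["precentral"] * 13 + ["postcentral"] * 6
    print(majority_label(votes))  # ('precentral', 0.684...)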

  6. NoSQL technologies for the CMS Conditions Database

    Science.gov (United States)

    Sipos, Roland

    2015-12-01

    With the restart of the LHC in 2015, the growth of the CMS Conditions dataset will continue, so the need for consistent and highly available access to the Conditions is a strong motivation to revisit different aspects of the current data storage solutions. We present a study of alternative data storage backends for the Conditions Databases, evaluating some of the most popular NoSQL databases to support a key-value representation of the CMS Conditions. The definition of the database infrastructure is based on the need to store the conditions as BLOBs. Because of this, each condition can reach a size that may require special treatment (splitting) in these NoSQL databases. As big binary objects may be problematic in several database systems, and also to give an accurate baseline, a testing framework extension was implemented to measure the characteristics of the handling of arbitrary binary data in these databases. Based on the evaluation, prototypes of a document store, a column-oriented store, and a plain key-value store were deployed. An adaptation layer to access the backends in the CMS Offline software was developed to provide transparent support for these NoSQL databases in the CMS context. Additional data modelling approaches and considerations in the software layer, deployment and automatization of the databases are also covered in the research. In this paper we present the results of the evaluation as well as a performance comparison of the prototypes studied.
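
    The splitting treatment mentioned above can be illustrated with a small sketch (plain Python over an in-memory dict as a stand-in for the key-value backend; key names and chunk size are invented): a large payload is cut into chunk-sized key-value pairs plus a manifest entry used to reassemble it.

    CHUNK = 1 << 20  # 1 MiB per stored value; tune to the backend's limits

    def split_blob(key: str, blob: bytes, chunk: int = CHUNK):
        """Split one large conditions payload into chunk-sized key-value
        pairs, plus a small manifest entry recording the chunk count."""
        parts = [blob[i:i + chunk] for i in range(0, len(blob), chunk)]
        store = {f"{key}#meta": str(len(parts)).encode()}
        store.update({f"{key}#{n}": part for n, part in enumerate(parts)})
        return store

    def join_blob(key: str, store) -> bytes:
        """Reassemble a payload from its manifest entry and chunks."""
        n = int(store[f"{key}#meta"])
        return b"".join(store[f"{key}#{i}"] for i in range(n))

    payload = b"x" * (3 * CHUNK + 17)  # a BLOB larger than one chunk
    kv = split_blob("TrackerAlignment_v3", payload)
    assert join_blob("TrackerAlignment_v3", kv) == payload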

  7. Processing and Quality Monitoring for the ATLAS Tile Hadronic Calorimeter Data

    Science.gov (United States)

    Burghgrave, Blake; ATLAS Collaboration

    2017-10-01

    An overview is presented of Data Processing and Data Quality (DQ) Monitoring for the ATLAS Tile Hadronic Calorimeter. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. Data quality in physics runs is monitored extensively and continuously. Any problems are reported and immediately investigated. The DQ efficiency achieved was 99.6% in 2012 and 100% in 2015, after the detector maintenance in 2013-2014. Changes to detector status or calibrations are entered into the conditions database (DB) during a brief calibration loop between the end of a run and the beginning of bulk processing of data collected in it. Bulk processed data are reviewed and certified for the ATLAS Good Run List if no problem is detected. Experts maintain the tools used by DQ shifters and the calibration teams during normal operation, and prepare new conditions for data reprocessing and Monte Carlo (MC) production campaigns. Conditions data are stored in 3 databases: Online DB, Offline DB for data and a special DB for Monte Carlo. Database updates can be performed through a custom-made web interface.

  8. Evolution of grid-wide access to database resident information in ATLAS using Frontier

    CERN Document Server

    Barberis, D; The ATLAS collaboration; de Stefano, J; Dewhurst, A L; Dykstra, D; Front, D

    2012-01-01

    The ATLAS experiment deployed Frontier technology world-wide during the initial year of LHC collision data taking to enable user analysis jobs running on the World-wide LHC Computing Grid to access database resident data. Since that time, the deployment model has evolved to optimize resources, improve performance, and streamline maintenance of Frontier and related infrastructure. In this presentation we focus on the specific changes in the deployment and improvements undertaken, such as the optimization of cache and launchpad location, the use of RPMs for more uniform deployment of underlying Frontier related components, improvements in monitoring, optimization of fail-over, and an increasing use of a centrally managed database containing site specific information (for configuration of services and monitoring). In addition, analysis of Frontier logs has given us a deeper understanding of problematic queries and of use cases. Use of the system has grown beyond just user analysis and subsyste...

  9. NoSQL technologies for the CMS Conditions Database

    CERN Document Server

    Sipos, Roland

    2015-01-01

    With the restart of the LHC in 2015, the growth of the CMS Conditions dataset will continue, so the need for consistent and highly available access to the Conditions is a strong motivation to revisit different aspects of the current data storage solutions. We present a study of alternative data storage backends for the Conditions Databases, by evaluating some of the most popular NoSQL databases to support a key-value representation of the CMS Conditions. An important detail about the Conditions is that the payloads are stored as BLOBs, and they can reach sizes that may require special treatment (splitting) in these NoSQL databases. As big binary objects may be a bottleneck in several database systems, and also to give an accurate baseline, a testing framework extension was implemented to measure the characteristics of the handling of arbitrary binary data in these databases. Based on the evaluation, prototypes of a document store, using a column-oriented and plain key-value store, are deployed. An adaption l...

  10. Metadata Aided Run Selection at ATLAS

    CERN Document Server

    Buckingham, RM; The ATLAS collaboration; Tseng, JC-L; Viegas, F; Vinek, E

    2010-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called “runBrowser” makes these Conditions Metadata available as a Run based selection service. runBrowser, based on php and javascript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions at...

  11. Metadata aided run selection at ATLAS

    CERN Document Server

    Buckingham, RM; The ATLAS collaboration; Tseng, JC-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called “runBrowser” makes these Conditions Metadata available as a Run based selection service. runBrowser, based on php and javascript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attrib...

  12. A system for managing information at ATLAS

    International Nuclear Information System (INIS)

    Tilbrook, I.R.

    1993-01-01

    In response to a need for better management of maintenance and document information at the Argonne Tandem-Linear Accelerating System (ATLAS), the ATLAS Information Management System (AIMS) has been created. The system is based on the relational database model. The system's applications use the Alpha-4 relational database management system, a commercially available software package. The system's function and design are described

  13. New Persistent Back-End for the ATLAS Online Information Service

    CERN Document Server

    Soloviev, I; The ATLAS collaboration

    2014-01-01

    The Trigger and Data Acquisition (TDAQ) and detector systems of the ATLAS experiment deploy more than 3000 computers, running more than 15000 concurrent processes, to perform the selection, recording and monitoring of the proton collision data in ATLAS. Most of these processes produce and share operational monitoring data used for inter-process communication and analysis of the systems. A few of these data are archived by dedicated applications into conditions and histogram databases. The rest of the data remained transient and were lost at the end of a data taking session. To save these data for later offline analysis of the quality of data taking, and to help experts investigate the behavior of the system, the first prototype of a new Persistent Back-End for the Atlas Information System of TDAQ (P-BEAST) was developed and deployed in the second half of 2012. The modern, distributed, Java-based Cassandra database has been used as the storage technology, with CERN EOS for long-term storage. This paper pr...

  14. Preparation of Northern Mid-Continent Petroleum Atlas

    Energy Technology Data Exchange (ETDEWEB)

    Lee C. Gerhard; Timothy R. Carr; W. Lynn Watney

    1998-05-01

    As proposed, the third year program will continue and expand upon the Kansas elements of the original program, and provide improved on-line access to the prototype atlas. The third year of the program will result in a digital atlas sufficient to provide a permanent improvement in data access to Kansas operators. The ultimate goal of providing an interactive history-matching interface with a regional database will be demonstrated as the program covers more geographic territory and the database expands. The atlas will expand to include significant reservoirs representing the major plays in Kansas, and North Dakota. Primary products of the third year prototype atlas will be on-line accessible digital databases and technical publications covering two additional petroleum plays in Kansas and one in North Dakota. Regional databases will be supplemented with geological field studies of selected fields in each play. Digital imagery, digital mapping, relational data queries, and geographical information systems will be integral to the field studies and regional data sets. Data sets will have relational links to provide opportunity for history-matching, feasibility, and risk analysis tests on contemplated exploration and development projects. The flexible "web-like" design of the atlas provides ready access to data, and technology at a variety of scales from regional, to field, to lease, and finally to the individual well bore. The digital structure of the atlas permits the operator to access comprehensive reservoir data and customize the interpretative products (e.g., maps and cross-sections) to their needs. The atlas will be accessible in digital form on-line using a World-Wide-Web browser as the graphical user interface. Regional data sets and field studies will be freestanding entities that will be made available on-line through the Internet to users as they are completed. Technology transfer activities will be ongoing from the earliest part of this project, providing

  15. Building a scalable event-level metadata service for ATLAS

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Goosens, L; Viegas, F T A; McGlone, H

    2008-01-01

    The ATLAS TAG Database is a multi-terabyte event-level metadata selection system, intended to allow discovery, selection of and navigation to events of interest to an analysis. The TAG Database encompasses file- and relational-database-resident event-level metadata, distributed across all ATLAS Tiers. An Oracle-hosted global TAG relational database, containing all ATLAS events, will exist at Tier-0. Implementing a system that is both performant and manageable at this scale is a challenge. A 1 TB relational TAG Database has been deployed at Tier-0 using simulated tag data. The database contains one billion events, each described by two hundred event metadata attributes, and is currently undergoing extensive testing in terms of queries, population and manageability. These 1 TB tests aim to demonstrate and optimise the performance and scalability of an Oracle TAG Database on a global scale. Partitioning and indexing strategies are crucial to well-performing queries and to the manageability of the database, and have implications for database population and distribution, so these are investigated. Physics query patterns are anticipated, but a crucial feature of the system must be to support a broad range of queries across all attributes. Concurrently, event tags from ATLAS Computing System Commissioning distributed simulations are accumulated in an Oracle-hosted database at CERN, providing an event-level selection service valuable for user experience and for gathering information about physics query patterns. In this paper we describe the status of the Global TAG relational database scalability work and highlight areas of future direction.
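
    As a toy illustration of event-level metadata selection over an indexed relational store (SQLite standing in for Oracle; table and attribute names are invented, and the real TAG schema carries two hundred attributes rather than a handful):

    import sqlite3

    # An in-memory stand-in for the relational TAG store: one row per event,
    # indexed on a couple of frequently queried attributes.
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE tag (
        run INTEGER, event INTEGER, n_muons INTEGER, met REAL,
        file_guid TEXT)""")
    db.execute("CREATE INDEX idx_tag_run ON tag (run)")
    db.execute("CREATE INDEX idx_tag_met ON tag (met)")

    db.executemany("INSERT INTO tag VALUES (?, ?, ?, ?, ?)", [
        (167776, 1001, 2, 54.3, "guid-A"),
        (167776, 1002, 0, 12.1, "guid-A"),
        (167777, 2001, 1, 88.0, "guid-B"),
    ])

    # Event selection: find events of interest and where to navigate to them.
    rows = db.execute(
        "SELECT run, event, file_guid FROM tag WHERE n_muons >= 1 AND met > 50"
    ).fetchall()
    print(rows)  # [(167776, 1001, 'guid-A'), (167777, 2001, 'guid-B')]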

  16. Migration of ATLAS PanDA to CERN

    International Nuclear Information System (INIS)

    Stewart, Graeme Andrew; Klimentov, Alexei; Maeno, Tadashi; Nevski, Pavel; Nowak, Marcin; De Castro Faria Salgado, Pedro Emanuel; Wenaus, Torre; Koblitz, Birger; Lamanna, Massimo

    2010-01-01

    The ATLAS Production and Distributed Analysis System (PanDA) is a key component of the ATLAS distributed computing infrastructure. All ATLAS production jobs, and a substantial amount of user and group analysis jobs, pass through the PanDA system, which manages their execution on the grid. PanDA also plays a key role in production task definition and the data set replication request system. PanDA has recently been migrated from Brookhaven National Laboratory (BNL) to the European Organization for Nuclear Research (CERN), a process we describe here. We discuss how the new infrastructure for PanDA, which relies heavily on services provided by CERN IT, was introduced in order to make the service as reliable as possible and to allow it to be scaled to ATLAS's increasing need for distributed computing. The migration involved changing the backend database for PanDA from MySQL to Oracle, which impacted upon the database schemas. The process by which the client code was optimised for the new database backend is discussed. We describe the procedure by which the new database infrastructure was tested and commissioned for production use. Operations during the migration had to be planned carefully to minimise disruption to ongoing ATLAS offline computing. All parts of the migration were fully tested before commissioning the new infrastructure and the gradual migration of computing resources to the new system allowed any problems of scaling to be addressed.

  17. LHCb Conditions Database Operation Assistance Systems

    CERN Multimedia

    Shapoval, Illya

    2012-01-01

    The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues: - an extension to the automatic content validation done by the “Oracle Streams” replication technology, to trap cases when the replication was unsuccessful; - an automated distribution process for the S...

  18. Commissioning and first operation of the pCVD diamond ATLAS Beam Conditions Monitor

    CERN Document Server

    Dobos, D

    2009-01-01

    The main aim of the ATLAS Beam Conditions Monitor is to protect the ATLAS Inner Detector silicon trackers from high radiation doses caused by LHC beam incidents, e.g. magnet failures. The BCM uses in total 16 1x1 cm2 500 μm thick polycrystalline chemical vapor deposition (pCVD) diamond sensors. They are arranged in 8 positions around the ATLAS LHC interaction point. Time difference measurements with sub nanosecond resolution are performed to distinguish between particles from a collision and spray particles from a beam incident. An abundance of the latter leads the BCM to provoke an abort of the LHC beam. A FPGA based readout system with a sampling rate of 2.56 GHz performs the online data analysis and interfaces the results to ATLAS and the beam abort system. The BCM diamond sensors, the detector modules and their readout system are described. Results of the operation with the first LHC beams are reported and results of commissioning and timing measurements (e.g. with cosmic muons) in preparation for first ...
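
    The timing-coincidence logic can be sketched as follows (plain Python; the 12.5 ns station-to-station separation is from the text, while the coincidence window and function names are invented for illustration):

    def classify(t_a: float, t_b: float, window: float = 2.0) -> str:
        """Classify an event from the arrival times (ns) at the two BCM
        stations.

        Particles from a collision at the interaction point reach both
        stations at the same time (dt near 0 ns); beam-background sprays
        traverse the full station-to-station distance, giving |dt| near
        12.5 ns.
        """
        dt = abs(t_a - t_b)
        if dt < window:
            return "collision"
        if abs(dt - 12.5) < window:
            return "background"
        return "unclassified"

    print(classify(3.1, 3.4))   # collision
    print(classify(0.2, 12.9))  # background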

  19. Cassini Tour Atlas Automated Generation

    Science.gov (United States)

    Grazier, Kevin R.; Roumeliotis, Chris; Lange, Robert D.

    2011-01-01

    During the Cassini spacecraft's cruise phase and nominal mission, the Cassini Science Planning Team developed and maintained an online database of geometric and timing information called the Cassini Tour Atlas. The Tour Atlas consisted of several hundred megabytes of EVENTS mission planning software outputs, tables, plots, and images used by mission scientists for observation planning. Each time the nominal mission trajectory was altered or tweaked, a new Tour Atlas had to be regenerated manually. In the early phases of Cassini's Equinox Mission planning, an a priori estimate suggested that mission tour designers would develop approximately 30 candidate tours within a short period of time. A separate Tour Atlas was required for each trajectory, so that Cassini scientists could analyze the science opportunities in each candidate tour quickly and thoroughly enough to select the optimal series of orbits for science return. The task of manually generating that number of trajectory analyses in the allotted time would have been impossible, so the entire task was automated using code written in five different programming languages. This software automates the generation of the Cassini Tour Atlas database. It performs with one UNIX command what previously took a day or two of human labor.

  20. Experience with the Open Source based implementation for ATLAS Conditions Data Management System

    CERN Document Server

    Amorim, A; Oliveira, C; Pedro, L; Barros, N

    2003-01-01

    Conditions Data in high energy physics experiments is frequently understood as all the data needed for reconstruction besides the event data itself. This includes all sorts of slowly evolving data, like detector alignment, calibration and robustness, and data from the detector control system. Every Conditions Data Object is also associated with a time interval of validity and a version. Besides that, it is quite often useful to tag collections of Conditions Data Objects altogether. These issues have already been investigated, and a data model has been proposed and used for different implementations based on commercial DBMSs, both at CERN and for the BaBar experiment. The special case of the ATLAS complex trigger, which requires online access to calibration and alignment data, poses new challenges that have to be met using a flexible and customizable solution more in line with Open Source components. Motivated by the ATLAS challenges we have developed an alternative implementation, based on an Open Source RDBMS. Several issues...

  1. Glance project: a database retrieval mechanism for the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Maidantchik, C [COPPE, UFRJ (Brazil)]; Grael, F F; Galvao, K K [Escola Politecnica, UFRJ (Brazil)]; Pommes, K [CERN (Switzerland)]

    2008-07-15

    During the construction and commissioning phases of the ATLAS detector, data related to the installation, placement, testing and performance of the equipment are stored in relational databases. Each group acquires and saves information in different servers, using diverse technologies, data modeling and terminologies. Installation and maintenance during the experiment's construction and operation depend on access to this information, and also imply its update. Developing separate retrieval and update systems for each data set would require too much effort and a high maintenance cost. The Glance system retrieves and inserts/updates data independently of the modeling and technology used for the storage; it recognizes the repositories' internal structure and guides the user through the creation of search and insertion interfaces. Distinct and spread-out data sets can be transparently integrated in one interface. Data can be exported/imported to/from various formats. The system handles many independent interfaces, which can be accessed by users or other applications at any time. This paper describes the Glance conception, its development and features. The system usage is illustrated with examples. Current status and future work are also discussed.

  2. A study of the application of Brain Atlas with and without +Gz acceleration conditions.

    Science.gov (United States)

    Li, Yifeng; Zhang, Lihui; Zhang, Tao; Li, Baohui

    2017-07-20

    The purposes of this study were to utilize the Brain Atlas to investigate the fluctuations in the characteristics of human EEG with and without +Gz acceleration produced by a human centrifuge, and to examine the G-load endurance of the human body. The Brain Atlas of the EEG signal under +Gz acceleration and in a static state were compared in order to reveal their correlations and differences. When compared with those in a static state, it was found that in the EEG readings of the subjects under +Gz acceleration conditions, the energy and gray-scale values of the low-frequency delta rhythm showed significant increases, while the energy and gray-scale values of the high-frequency beta rhythm showed significant decreases; among these, the beta2 rhythm was significantly inhibited. These fluctuations suggested that the ischemia condition of the brain had improved. Also, the recoveries in the energy and gray-scale values were faster, which suggested that the G-load endurance of the human body had been enhanced. The Brain Atlas was found to show observable changes in color. The experimental results indicated that the Brain Atlas was able to assist in exploring the fluctuations in the characteristics of the EEG, provided a criterion for observing the fluctuations of the functional state of the human brain under +Gz acceleration, and assisted in evaluating the G-load endurance of the human body.

  3. The ATLAS beam conditions monitor

    CERN Document Server

    Mikuz, M; Dolenc, I; Kagan, H; Kramberger, G; Frais-Kölbl, H; Gorisek, A; Griesmayer, E; Mandic, I; Pernegger, H; Trischuk, W; Weilhammer, P; Zavrtanik, M

    2006-01-01

    The ATLAS beam conditions monitor is being developed as a stand-alone device allowing LHC collisions to be separated from background events induced either by beam-gas interactions or by beam accidents, for example scraping at the collimators upstream of the spectrometer. This separation can be achieved by timing coincidences between two stations placed symmetrically around the interaction point. The 25 ns repetition of collisions poses very stringent requirements on the timing resolution. The optimum separation between collision and background events is just 12.5 ns, implying a distance of 3.8 m between the two stations. 3 ns wide pulses are required, with 1 ns rise time and baseline restoration in 10 ns. Combined with the radiation field of 10^15 cm^-2 over 10 years of LHC operation, only diamond detectors are considered suitable for this task. pCVD diamond pad detectors of 1 cm^2 and around 500 μm thickness were assembled with a two-stage RF current amplifier and tested in a proton beam at MGH, Boston and an SPS pion beam at...

  4. Design and use of numerical anatomical atlases for radiotherapy; Creation et utilisation d'atlas anatomiques numeriques pour la radiotherapie

    Energy Technology Data Exchange (ETDEWEB)

    Commowick, O

    2007-02-15

    The main objective of this thesis is to provide radio-oncology specialists with automatic tools for delineating the organs at risk of a patient undergoing radiotherapy treatment of cerebral or head and neck tumors. To achieve this goal, we use an anatomical atlas, i.e. a representative anatomy associated with a clinical image representing it. The registration of this atlas allows us to segment the patient's structures automatically and to accelerate this process. Contributions to this method are presented along three axes. First, we want to obtain a registration method that is as independent as possible of the setting of its parameters. This setting, done by the clinician, needs to be minimal while guaranteeing a robust result. We therefore propose registration methods allowing a better control of the obtained transformation, using rejection techniques for inadequate matchings or locally affine transformations. The second axis is dedicated to the consideration of structures associated with the presence of the tumor. These structures, not present in the atlas, lead to local errors in the atlas-based segmentation. We therefore propose methods to delineate these structures and take them into account in the registration. Finally, we present the construction of an anatomical atlas of the head and neck region and its evaluation on a database of patients. We show in this part the feasibility of using an atlas for this region, as well as a simple method to evaluate the registration methods used to build an atlas. All this research work has been implemented in commercial software (Imago from DOSIsoft), allowing us to validate our results in clinical conditions. (author)

  5. Neuroinformatics of the Allen Mouse Brain Connectivity Atlas.

    Science.gov (United States)

    Kuan, Leonard; Li, Yang; Lau, Chris; Feng, David; Bernard, Amy; Sunkin, Susan M; Zeng, Hongkui; Dang, Chinh; Hawrylycz, Michael; Ng, Lydia

    2015-02-01

    The Allen Mouse Brain Connectivity Atlas is a mesoscale whole brain axonal projection atlas of the C57Bl/6J mouse brain. Anatomical trajectories throughout the brain were mapped into a common 3D space using a standardized platform to generate a comprehensive and quantitative database of inter-areal and cell-type-specific projections. This connectivity atlas has several desirable features, including brain-wide coverage, validated and versatile experimental techniques, a single standardized data format, a quantifiable and integrated neuroinformatics resource, and an open-access public online database (http://connectivity.brain-map.org/). Meaningful informatics data quantification and comparison is key to effective use and interpretation of connectome data. This relies on successful definition of a high fidelity atlas template and framework, mapping precision of raw data sets into the 3D reference framework, accurate signal detection and quantitative connection strength algorithms, and effective presentation in an integrated online application. Here we describe key informatics pipeline steps in the creation of the Allen Mouse Brain Connectivity Atlas and include basic application use cases. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Collecting conditions usage metadata to optimize current and future ATLAS software and processing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00064378; The ATLAS collaboration; Formica, Andrea; Gallas, Elizabeth; Oda, Susumu; Rinaldi, Lorenzo; Rybkin, Grigori; Verducci, Monica

    2017-01-01

    Conditions data (for example: alignment, calibration, data quality) are used extensively in the processing of real and simulated data in ATLAS. The volume and variety of the conditions data needed by different types of processing are quite diverse, so optimizing its access requires a careful understanding of conditions usage patterns. These patterns can be quantified by mining representative log files from each type of processing and gathering detailed information about conditions usage for that type of processing into a central repository.
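
    The mining step can be sketched as a simple log scan (plain Python; the log-line format and folder names here are invented, as actual ATLAS job logs differ):

    import re
    from collections import Counter
    from typing import Iterable

    # Hypothetical log line format; the mining idea is the same either way:
    # extract which conditions folders each job actually read.
    FOLDER_RE = re.compile(r"Reading COOL folder (?P<folder>/\S+)")

    def conditions_usage(log_lines: Iterable[str]) -> Counter:
        """Count accesses per conditions folder in one job's log."""
        return Counter(m.group("folder")
                       for line in log_lines
                       if (m := FOLDER_RE.search(line)))

    log = [
        "INFO  Reading COOL folder /LAR/Align",
        "INFO  Reading COOL folder /TRIGGER/LUMI/LBLB",
        "INFO  Reading COOL folder /LAR/Align",
    ]
    print(conditions_usage(log))
    # Counter({'/LAR/Align': 2, '/TRIGGER/LUMI/LBLB': 1})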

  7. Atlas-based delineation of lymph node levels in head and neck computed tomography images

    International Nuclear Information System (INIS)

    Commowick, Olivier; Gregoire, Vincent; Malandain, Gregoire

    2008-01-01

    Purpose: Radiotherapy planning requires accurate delineations of the tumor and of the critical structures. Atlas-based segmentation has been shown to be very efficient to automatically delineate brain critical structures. We therefore propose to construct an anatomical atlas of the head and neck region. Methods and materials: Due to the high anatomical variability of this region, an atlas built from a single image as for the brain is not adequate. We address this issue by building a symmetric atlas from a database of manually segmented images. First, we develop an atlas construction method and apply it to a database of 45 Computed Tomography (CT) images from patients with node-negative pharyngo-laryngeal squamous cell carcinoma manually delineated for radiotherapy. Then, we qualitatively and quantitatively evaluate the results generated by the built atlas based on Leave-One-Out framework on the database. Results: We present qualitative and quantitative results using this atlas construction method. The evaluation was performed on a subset of 12 patients among the original CT database of 45 patients. Qualitative results depict visually well delineated structures. The quantitative results are also good, with an error with respect to the best achievable results ranging from 0.196 to 0.404 with a mean of 0.253. Conclusions: These results show the feasibility of using such an atlas for radiotherapy planning. Many perspectives are raised from this work ranging from extensive validation to the construction of several atlases representing sub-populations, to account for large inter-patient variabilities, and populations with node-positive tumors
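
    Quantitative evaluation of such automatic delineations against withheld manual contours is commonly done with an overlap measure such as the Dice coefficient; a small sketch (NumPy, toy 2D masks) is:

    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        """Dice overlap between two binary masks (1 = perfect agreement)."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    # Leave-one-out style check: compare an automatic delineation against
    # the withheld manual one for a toy "lymph node level" mask.
    manual = np.zeros((10, 10), dtype=int); manual[2:7, 2:7] = 1
    auto = np.zeros((10, 10), dtype=int); auto[3:8, 2:7] = 1
    print(f"Dice = {dice(manual, auto):.3f}")  # 0.800 for this toy overlap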

  8. Design and use of numerical anatomical atlases for radiotherapy; Creation et utilisation d'atlas anatomiques numeriques pour la radiotherapie

    Energy Technology Data Exchange (ETDEWEB)

    Commowick, O

    2007-02-15

    The main objective of this thesis is to provide radio-oncology specialists with automatic tools for delineating organs at risk of a patient undergoing a radiotherapy treatment of cerebral or head and neck tumors. To achieve this goal, we use an anatomical atlas, i.e. a representative anatomy associated to a clinical image representing it. The registration of this atlas allows us to segment automatically the patient structures and to accelerate this process. Contributions in this method are presented on three axes. First, we want to obtain a registration method which is as independent as possible from the setting of its parameters. This setting, done by the clinician, indeed needs to be minimal while guaranteeing a robust result. We therefore propose registration methods allowing a better control of the obtained transformation, using rejection techniques of inadequate matching or locally affine transformations. The second axis is dedicated to the consideration of structures associated with the presence of the tumor. These structures, not present in the atlas, indeed lead to local errors in the atlas-based segmentation. We therefore propose methods to delineate these structures and take them into account in the registration. Finally, we present the construction of an anatomical atlas of the head and neck region and its evaluation on a database of patients. We show in this part the feasibility of the use of an atlas for this region, as well as a simple method to evaluate the registration methods used to build an atlas. All this research work has been implemented in a commercial software (Imago from DOSIsoft), allowing us to validate our results in clinical conditions. (author)

  9. The ATLAS EventIndex: data flow and inclusion of other metadata

    Science.gov (United States)

    Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration

    2016-10-01

    The ATLAS EventIndex is the catalogue of the event-related metadata for the information collected from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing this event as well as trigger decision information. The main use case for the EventIndex is event picking, as well as data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the Grid, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalogue AMI and the Rucio data management system and information on production jobs from the ATLAS production system. The ATLAS production system is also used for the collection of event information from the Grid jobs. EventIndex developments started in 2012 and in the middle of 2015 the system was commissioned and started collecting event metadata, as a part of ATLAS Distributed Computing operations.
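
    A toy sketch of the event-picking use case follows (plain Python; the record fields and file names are invented, and the real EventIndex runs on Hadoop rather than an in-memory dict): the catalogue maps an event identifier to its metadata and to pointers into the files that contain it.

    # Illustrative event records keyed by (run, event).
    event_index = {
        (167776, 1001): {
            "guid": "F1E2D3C4", "lumiblock": 120,
            "trigger": ["HLT_mu24"],
            "files": ["AOD.01234._000017.pool.root"],
        },
    }

    def pick(run: int, event: int):
        """Return the file pointers needed to fetch one event, or None."""
        record = event_index.get((run, event))
        return record["files"] if record else None

    print(pick(167776, 1001))  # ['AOD.01234._000017.pool.root']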

  10. Second ATLAS Domestic Standard Problem (DSP-02) For A Code Assessment

    International Nuclear Information System (INIS)

    Kim, Yeonsik; Choi, Kiyong; Cho, Seok; Park, Hyunsik; Kang, Kyungho; Song, Chulhwa; Baek, Wonpil

    2013-01-01

    KAERI (Korea Atomic Energy Research Institute) has been operating an integral effect test facility, the Advanced Thermal-Hydraulic Test Loop for Accident Simulation (ATLAS), for transient and accident simulations of advanced pressurized water reactors (PWRs). Using ATLAS, a high-quality integral effect test database has been established for major design basis accidents of the APR1400 plant. A Domestic Standard Problem (DSP) exercise using the ATLAS database was promoted to transfer the database to domestic nuclear industries and contribute to improving a safety analysis methodology for PWRs. This 2 nd ATLAS DSP (DSP-02) exercise aims at an effective utilization of an integral effect database obtained from ATLAS, the establishment of a cooperation framework among the domestic nuclear industry, a better understanding of the thermal hydraulic phenomena, and an investigation into the possible limitation of the existing best-estimate safety analysis codes. A small break loss of coolant accident with a 6-inch break at the cold leg was determined as a target scenario by considering its technical importance and by incorporating interests from participants. This DSP exercise was performed in an open calculation environment where the integral effect test data was open to participants prior to the code calculations. This paper includes major information of the DSP-02 exercise as well as comparison results between the calculations and the experimental data

  11. SECOND ATLAS DOMESTIC STANDARD PROBLEM (DSP-02) FOR A CODE ASSESSMENT

    Directory of Open Access Journals (Sweden)

    YEON-SIK KIM

    2013-12-01

    KAERI (Korea Atomic Energy Research Institute) has been operating an integral effect test facility, the Advanced Thermal-Hydraulic Test Loop for Accident Simulation (ATLAS), for transient and accident simulations of advanced pressurized water reactors (PWRs). Using ATLAS, a high-quality integral effect test database has been established for major design basis accidents of the APR1400 plant. A Domestic Standard Problem (DSP) exercise using the ATLAS database was promoted to transfer the database to domestic nuclear industries and contribute to improving a safety analysis methodology for PWRs. This 2nd ATLAS DSP (DSP-02) exercise aims at an effective utilization of an integral effect database obtained from ATLAS, the establishment of a cooperation framework among the domestic nuclear industry, a better understanding of the thermal hydraulic phenomena, and an investigation into the possible limitation of the existing best-estimate safety analysis codes. A small break loss of coolant accident with a 6-inch break at the cold leg was determined as a target scenario by considering its technical importance and by incorporating interests from participants. This DSP exercise was performed in an open calculation environment where the integral effect test data was open to participants prior to the code calculations. This paper includes major information of the DSP-02 exercise as well as comparison results between the calculations and the experimental data.

  12. The Monitoring and Calibration Web Systems for the ATLAS Tile Calorimeter Data Quality Analysis

    International Nuclear Information System (INIS)

    Sivolella, A; Maidantchik, C; Ferreira, F

    2012-01-01

    The Tile Calorimeter (TileCal) is one of the ATLAS sub-detectors. The read-out is performed by about 10,000 PhotoMultiplier Tubes (PMTs), and the signal of each PMT is digitized by an electronic channel. The Monitoring and Calibration Web System (MCWS) supports the data quality analysis of these electronic channels. This application was developed to assess the detector status and verify its performance. It provides the user with the list of known problematic TileCal channels, which is stored in the ATLAS conditions database (COOL DB). The bad channels list guides the data quality validator in identifying new problematic channels and is used in data reconstruction; the system also allows the channels list to be updated directly in the COOL database. MCWS can generate summary results, such as eta-phi plots and comparative tables of the percentage of masked channels. Regularly, during LHC (Large Hadron Collider) shutdowns, maintenance of the detector equipment is performed. When a channel is repaired, its calibration constants stored in the COOL database have to be updated; the MCWS system additionally manages the update of these calibration constant values in the COOL database. MCWS has been used by the Tile community since 2008, during the commissioning phase, and was upgraded to comply with ATLAS operation specifications. Among its future developments, an integration of MCWS with the TileCal Detector Control System (DCS) Web system is foreseen, in order to identify high voltage problems automatically.
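
    The bookkeeping behind the masked-channels tables can be sketched as follows (plain Python; partition sizes, channel coordinates and function names are invented, and the real system writes to the COOL database rather than an in-memory structure):

    # Illustrative partition sizes and masked-channel sets.
    N_CHANNELS = {"LBA": 2880, "LBC": 2880, "EBA": 2048, "EBC": 2048}

    masked = {"LBA": {(12, 3), (45, 17)}, "LBC": set(),
              "EBA": {(7, 40)}, "EBC": set()}

    def mask_channel(partition: str, module: int, channel: int) -> None:
        """Add a newly found problematic channel to the bad-channels list
        (the real system would also push the update to the COOL database)."""
        masked[partition].add((module, channel))

    def masked_percentage(partition: str) -> float:
        """Fraction of masked channels, as shown in the comparative tables."""
        return 100.0 * len(masked[partition]) / N_CHANNELS[partition]

    mask_channel("LBC", 31, 5)
    for p in N_CHANNELS:
        print(f"{p}: {masked_percentage(p):.3f}% masked")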

  13. Atlas of Vega: 3850-6860 Å

    Science.gov (United States)

    Kim, Hyun-Sook; Han, Inwoo; Valyavin, G.; Lee, Byeong-Cheol; Shimansky, V.; Galazutdinov, G. A.

    2009-10-01

    We present a high-resolving-power (λ/Δλ = 90,000), high signal-to-noise ratio (∼700) spectral atlas of Vega covering the 3850-6860 Å wavelength range. The atlas is the result of averaging spectra recorded with the echelle spectrograph BOES, fed by the 1.8 m telescope at Bohyunsan Observatory (Korea). The atlas is provided only in machine-readable form (electronic data file) and will be available in the SIMBAD database upon publication. Based on data collected with the 1.8 m telescope operated at BOAO Observatory, Korea.

  14. Comparison report of open calculations for ATLAS Domestic Standard Problem (DSP 02)

    International Nuclear Information System (INIS)

    Choi, Ki Yong; Kim, Y. S.; Kang, K. H.; Cho, S.; Park, H. S.; Choi, N. H.; Kim, B. D.; Min, K. H.; Park, J. K.; Chun, H. G.; Yu, Xin Guo; Kim, H. T.; Song, C. H.; Sim, S. K.; Jeon, S. S.; Kim, S. Y.; Kang, D. G.; Choi, T. S.; Kim, Y. M.; Lim, S. G.; Kim, H. S.; Kang, D. H.; Lee, G. H.; Jang, M. J.

    2012-09-01

    KAERI (Korea Atomic Energy Research Institute) has been operating an integral effect test facility, the Advanced Thermal-Hydraulic Test Loop for Accident Simulation (ATLAS), for transient and accident simulations of advanced pressurized water reactors (PWRs). Using ATLAS, a high-quality integral effect test database has been established for major design basis accidents of the APR1400. A Domestic Standard Problem (DSP) exercise using the ATLAS database was promoted in order to transfer the database to domestic nuclear industries and to contribute to improving safety analysis methodology for PWRs. This 2nd ATLAS DSP exercise was led by KAERI in collaboration with KINS, following the successful completion of the 1st ATLAS DSP in 2009. The exercise aims at effective utilization of the integral effect database obtained from ATLAS, establishment of a cooperation framework among the domestic nuclear industry, better understanding of thermal-hydraulic phenomena, and investigation of the possible limitations of the existing best-estimate safety analysis codes. A small break loss of coolant accident with a 6-inch break at the cold leg was determined as the target scenario by considering its technical importance and by incorporating interests from participants. Twelve domestic organizations, including universities, government bodies, and nuclear industries, joined this DSP-02 exercise, and eleven of them submitted calculation results. The exercise was performed in an open calculation environment where the integral effect test data were open to participants prior to code calculations. This report includes all information of the 2nd ATLAS DSP (DSP-02) exercise as well as comparison results between the calculations and the experimental data.

  15. EnviroAtlas - Ecosystem Service Market and Project Enabling Conditions, U.S., 2016, Forest Trends' Ecosystem Marketplace

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset contains polygons depicting conditions enabling market-based programs, referred to herein as markets, and projects addressing ecosystem...

  16. Trigger Menu-aware Monitoring for the ATLAS experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00441925; The ATLAS collaboration

    2017-01-01

    Changes in the trigger menu, the online algorithmic event-selection of the ATLAS experiment at the LHC, are followed by adjustments to the ATLAS trigger monitoring systems. During Run 1, and so far in Run 2, ATLAS has deployed monitoring updates with the installation of new software releases at Tier-0, the first level of the ATLAS computing grid. Having to wait for a new software release to be installed at Tier-0 in order to update the ATLAS offline trigger monitoring configuration results in a lag with respect to the modification of the trigger menu. We present the design and implementation of a 'trigger menu-aware' monitoring system that aims to simplify the ATLAS operational workflows by allowing monitoring configuration changes to be made at the Tier-0 site by utilising an Oracle SQL database.
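
    As a rough illustration of the approach described above, the Python sketch below shows how monitoring configurations could be keyed to a trigger menu in a relational table. The schema, the table and column names, and the use of a menu key are assumptions made for illustration; sqlite3 stands in for the Oracle database mentioned in the record.

        # Hypothetical sketch of menu-keyed monitoring configuration storage.
        # All table/column names are illustrative assumptions, not the ATLAS schema,
        # and sqlite3 stands in for Oracle.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE mon_config (
            smk INTEGER,   -- key identifying a trigger menu (assumed)
            chain TEXT,    -- trigger chain name
            config TEXT)   -- monitoring configuration for that chain""")
        conn.execute("INSERT INTO mon_config VALUES (2107, 'HLT_e26_lhtight', 'default-eg-mon')")
        conn.commit()

        def monitoring_config(smk, chain):
            """Look up the monitoring configuration for a chain under a given menu key."""
            row = conn.execute(
                "SELECT config FROM mon_config WHERE smk=? AND chain=?", (smk, chain)
            ).fetchone()
            return row[0] if row else None

        print(monitoring_config(2107, "HLT_e26_lhtight"))  # -> default-eg-mon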

  17. Diamond pad detector telescope for beam conditions and luminosity monitoring in ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Mikuz, M. [Jozef Stefan Institute and Department of Physics, University of Ljubljana, Ljubljana (Slovenia)], E-mail: Marko.Mikuz@ijs.si; Cindro, V.; Dolenc, I. [Jozef Stefan Institute and Department of Physics, University of Ljubljana, Ljubljana (Slovenia); Frais-Koelbl, H. [University of Applied Sciences Wiener Neustadt and Fotec, Wiener Neustadt (Austria); Gorisek, A. [CERN, Geneva (Switzerland); Griesmayer, E. [University of Applied Sciences Wiener Neustadt and Fotec, Wiener Neustadt (Austria); Kagan, H. [Ohio State University, Columbus (United States); Kramberger, G.; Mandic, I. [Jozef Stefan Institute and Department of Physics, University of Ljubljana, Ljubljana (Slovenia); Niegl, M. [University of Applied Sciences Wiener Neustadt and Fotec, Wiener Neustadt (Austria); Pernegger, H. [CERN, Geneva (Switzerland); Trischuk, W. [University of Toronto, Toronto (Canada); Weilhammer, P. [CERN, Geneva (Switzerland); Zavrtanik, M. [Jozef Stefan Institute and Department of Physics, University of Ljubljana, Ljubljana (Slovenia)

    2007-09-01

    Beam conditions and the potential detector damage resulting from their anomalies have pushed the LHC experiments to plan their own monitoring devices in addition to those provided by the machine. ATLAS decided to build a telescope composed of two stations with four diamond pad detector modules each, placed symmetrically around the interaction point at z = ±183.8 cm and r ≈ 55 mm (η ≈ 4.2). Equipped with fast electronics, it allows time-of-flight separation of events resulting from beam anomalies from normally occurring p-p interactions. In addition it will provide a coarse measurement of the LHC luminosity in ATLAS. Ten detector modules have been assembled and subjected to tests, from characterization of bare diamonds to source and beam tests. Preliminary results of beam tests in the CERN PS indicate a signal-to-noise ratio of 14±2.

  18. Diamond pad detector telescope for beam conditions and luminosity monitoring in ATLAS

    International Nuclear Information System (INIS)

    Mikuz, M.; Cindro, V.; Dolenc, I.; Frais-Koelbl, H.; Gorisek, A.; Griesmayer, E.; Kagan, H.; Kramberger, G.; Mandic, I.; Niegl, M.; Pernegger, H.; Trischuk, W.; Weilhammer, P.; Zavrtanik, M.

    2007-01-01

    Beam conditions and the potential detector damage resulting from their anomalies have pushed the LHC experiments to plan their own monitoring devices in addition to those provided by the machine. ATLAS decided to build a telescope composed of two stations with four diamond pad detector modules each, placed symmetrically around the interaction point at z = ±183.8 cm and r ∼ 55 mm (η ∼ 4.2). Equipped with fast electronics, it allows time-of-flight separation of events resulting from beam anomalies from normally occurring p-p interactions. In addition it will provide a coarse measurement of the LHC luminosity in ATLAS. Ten detector modules have been assembled and subjected to tests, from characterization of bare diamonds to source and beam tests. Preliminary results of beam tests in the CERN PS indicate a signal-to-noise ratio of 14±2.
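
    The quoted geometry fixes the timing argument: particles from a p-p interaction reach the two stations in coincidence, while beam background travelling along the beam line crosses one station about 2z/c before the other. A minimal Python check of that number, using only the z = ±183.8 cm quoted above:

        # Back-of-envelope time-of-flight separation for the quoted geometry.
        c = 299_792_458.0          # speed of light, m/s
        z = 1.838                  # station distance from the interaction point, m

        dt_background = 2 * z / c  # upstream/downstream arrival-time difference
        print(f"background time offset: {dt_background * 1e9:.1f} ns")  # ~12.3 ns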

  19. The ATLAS EventIndex: data flow and inclusion of other metadata

    CERN Document Server

    Prokoshin, Fedor; The ATLAS collaboration; Cardenas Zarate, Simon Ernesto; Favareto, Andrea; Fernandez Casani, Alvaro; Gallas, Elizabeth; Garcia Montoro, Carlos; Gonzalez de la Hoz, Santiago; Hrivnac, Julius; Malon, David; Salt, Jose; Sanchez, Javier; Toebbicke, Rainer; Yuan, Ruijun

    2016-01-01

    The ATLAS EventIndex is the catalogue of the event-related metadata for the information obtained from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing the event, and trigger decision information. The main use cases for the EventIndex are event picking, providing information for the Event Service, and data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the GRID, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalog AMI and the Rucio data man...

  20. The ATLAS EventIndex: data flow and inclusion of other metadata

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00064378; Cardenas Zarate, Simon Ernesto; Favareto, Andrea; Fernandez Casani, Alvaro; Gallas, Elizabeth; Garcia Montoro, Carlos; Gonzalez de la Hoz, Santiago; Hrivnac, Julius; Malon, David; Prokoshin, Fedor; Salt, Jose; Sanchez, Javier; Toebbicke, Rainer; Yuan, Ruijun

    2016-01-01

    The ATLAS EventIndex is the catalogue of the event-related metadata for the information collected from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing the event, and trigger decision information. The main use cases for the EventIndex are event picking and data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the Grid, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalogue AMI and the Rucio data management system and information on p...

  1. Design and use of numerical anatomical atlases for radiotherapy

    International Nuclear Information System (INIS)

    Commowick, O.

    2007-02-01

    The main objective of this thesis is to provide radio-oncology specialists with automatic tools for delineating organs at risk in patients undergoing radiotherapy treatment of cerebral or head and neck tumors. To achieve this goal, we use an anatomical atlas, i.e. a representative anatomy associated with a clinical image representing it. Registering this atlas to a patient image allows the patient's structures to be segmented automatically, accelerating the delineation process. Our contributions are presented along three axes. First, we want a registration method that is as independent as possible of the setting of its parameters: this setting, done by the clinician, must be minimal while guaranteeing a robust result. We therefore propose registration methods allowing better control of the obtained transformation, using techniques for rejecting inadequate matches or locally affine transformations. The second axis is dedicated to taking into account the structures associated with the presence of the tumor. These structures, not present in the atlas, lead to local errors in the atlas-based segmentation. We therefore propose methods to delineate these structures and take them into account in the registration. Finally, we present the construction of an anatomical atlas of the head and neck region and its evaluation on a database of patients. We show the feasibility of using an atlas for this region, as well as a simple method to evaluate the registration methods used to build an atlas. All of this research work has been implemented in commercial software (Imago from DOSIsoft), allowing us to validate our results in clinical conditions. (author)

  2. Analytics Platform for ATLAS Computing Services

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and this analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning tools like Spark, Jupyter, R, S...

  3. The Persistification of the ATLAS Geometry

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00068562; The ATLAS collaboration; Bianchi, Riccardo-Maria

    2016-01-01

    The complex geometry of the whole detector of the ATLAS experiment at the LHC is currently stored only in custom online databases, from which it is built on-the-fly on request. Accessing the online geometry guarantees access to the latest version of the detector description, but requires the setup of the full ATLAS software framework “Athena”, which provides the online services and the tools to retrieve the data from the database. This operation is cumbersome and slows down the applications that need to access the geometry. Moreover, all applications that need to access the detector geometry need to be built and run on the same platform as the ATLAS framework, preventing the usage of the actual detector geometry in stand-alone applications. Here we propose a new mechanism to persistify and serve the geometry of HEP experiments. The new mechanism is composed of a new file format and a REST API. The new file format allows the whole detector description to be stored locally in a flat file, and it is especially optimized to descri...

  4. LHCb Conditions database operation assistance systems

    International Nuclear Information System (INIS)

    Clemencic, M; Shapoval, I; Cattaneo, M; Degaudenzi, H; Santinelli, R

    2012-01-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time-dependent geometry and conditions data for all LHCb data processing applications (simulation, high-level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases when the CondDB replication is corrupted. The second is an automated distribution system for the SQLite-based CondDB, which also provides smart backup and checkout mechanisms for the CondDB managers and LHCb users, respectively. Finally, a system verifies and monitors the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter has been fully designed and is currently moving to the implementation stage.

  5. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors

  6. Two-stage atlas subset selection in multi-atlas based image segmentation.

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas

  7. Two-stage atlas subset selection in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2015-01-01

    Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
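
    The two-stage selection scheme described in the three records above can be summarized in a short Python sketch: a cheap preliminary metric ranks the whole database and keeps an augmented subset, and an expensive refined metric re-ranks only that subset to form the fusion set. The relevance metrics here are toy stand-ins; the paper's registration-based metrics and its inference model for sizing the augmented subset are not reproduced.

        # Minimal sketch of two-stage atlas subset selection with placeholder metrics.
        def cheap_relevance(atlas, target):
            # Stage 1 stand-in for similarity after low-cost registration
            return -abs(len(atlas) - len(target))

        def refined_relevance(atlas, target):
            # Stage 2 stand-in for similarity after full-fledged registration
            return -sum(abs(a - b) for a, b in zip(atlas, target))

        def select_fusion_set(atlases, target, m_augmented=50, k_fusion=10):
            # Stage 1: rank the full database cheaply, keep the top M atlases
            augmented = sorted(atlases, key=lambda a: cheap_relevance(a, target),
                               reverse=True)[:m_augmented]
            # Stage 2: re-rank only the augmented subset with the expensive metric
            return sorted(augmented, key=lambda a: refined_relevance(a, target),
                          reverse=True)[:k_fusion]

        database = [[1, 2, 3], [1, 2, 4], [9, 9, 9], [1, 1, 3]]
        print(select_fusion_set(database, [1, 2, 3], m_augmented=3, k_fusion=2))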

  8. Atlas selection for hippocampus segmentation: Relevance evaluation of three meta-information parameters.

    Science.gov (United States)

    Dill, Vanderson; Klein, Pedro Costa; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia

    2018-04-01

    Current state-of-the-art methods for whole and subfield hippocampus segmentation use pre-segmented templates, also known as atlases, in the pre-processing stages. Typically, the input image is registered to the template, which provides prior information for the segmentation process. Using a single standard atlas increases the difficulty in dealing with individuals who have a brain anatomy that is morphologically different from the atlas, especially in older brains. To increase the segmentation precision in these cases, without any manual intervention, multiple atlases can be used. However, registration to many templates leads to a high computational cost. Researchers have proposed to use an atlas pre-selection technique based on meta-information followed by the selection of an atlas based on image similarity. Unfortunately, this method also presents a high computational cost due to the image-similarity process. Thus, it is desirable to pre-select a smaller number of atlases as long as this does not impact on the segmentation quality. To pick out an atlas that provides the best registration, we evaluate the use of three meta-information parameters (medical condition, age range, and gender) to choose the atlas. In this work, 24 atlases were defined and each is based on the combination of the three meta-information parameters. These atlases were used to segment 352 volumes from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Hippocampus segmentation with each of these atlases was evaluated and compared to reference segmentations of the hippocampus, which are available from ADNI. The use of atlas selection by meta-information led to a significant gain in the Dice similarity coefficient, which reached 0.68 ± 0.11, compared to 0.62 ± 0.12 when using only the standard MNI152 atlas. Statistical analysis showed that the three meta-information parameters provided a significant improvement in the segmentation accuracy. Copyright © 2018 Elsevier Ltd
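
    The Dice similarity coefficient quoted above is the standard overlap measure between two segmentations, 2|A∩B|/(|A|+|B|). A minimal Python sketch on toy binary masks (the arrays are illustrative, not ADNI data):

        # Dice similarity coefficient between two binary segmentation masks.
        import numpy as np

        def dice(a, b):
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        auto = np.zeros((10, 10)); auto[2:7, 2:7] = 1   # automatic segmentation (toy)
        ref  = np.zeros((10, 10)); ref[3:8, 3:8]  = 1   # reference segmentation (toy)
        print(f"Dice = {dice(auto, ref):.2f}")          # -> Dice = 0.64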

  9. The Monitoring and Calibration Web Systems for the ATLAS Tile Calorimeter Data Quality Analysis

    CERN Document Server

    Sivolella, A; The ATLAS collaboration; Ferreira, F

    2012-01-01

    The Tile Calorimeter (TileCal), one of the ATLAS sub-detectors, has four partitions, each containing 64 modules, and each module has up to 48 PhotoMultipliers (PMTs), totaling more than 10,000 electronic channels. The Monitoring and Calibration Web System (MCWS) supports data quality analyses at the channel level. This application was developed to assess the detector status and verify its performance, presenting the list of known problematic channels from the official database that stores the detector conditions data (COOL). The bad-channels list guides the data quality validator during analyses in order to identify new problematic channels. Through the system, it is also possible to update the channels list directly in the COOL database. MCWS generates results, such as eta-phi plots and comparative tables with the masked-channel percentage, concerning the TileCal status, and is accessible to the whole ATLAS collaboration. Annually, there is an intervention on the LHC (Large Hadron Collider) when the detector equipment (P...

  10. Modern SQL and NoSQL database technologies for the ATLAS experiment

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration

    2017-01-01

    Structured data storage technologies evolve very rapidly in the IT world. LHC experiments, and ATLAS in particular, try to select and use these technologies balancing the performance for a given set of use cases with the availability, ease of use, ease of getting support, and stability of the product. We definitely and definitively moved from the “one fits all” (or “all has to fit into one”) paradigm to choosing the best solution for each group of data and for the applications that use these data. This paper describes the solutions in use, or under study, for the ATLAS experiment, their selection process, and performance measurements.

  11. Modern SQL and NoSQL database technologies for the ATLAS experiment

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration

    2017-01-01

    Structured data storage technologies evolve very rapidly in the IT world. LHC experiments, and ATLAS in particular, try to select and use these technologies balancing the performance for a given set of use cases with the availability, ease of use, ease of getting support, and stability of the product. We definitely and definitively moved from the “one fits all” (or “all has to fit into one”) paradigm to choosing the best solution for each group of data and for the applications that use these data. This talk describes the solutions in use, or under study, for the ATLAS experiment, their selection process, and performance.

  12. ATLAS EventIndex Data Collection Supervisor and Web Interface

    CERN Document Server

    Garcia Montoro, Carlos; The ATLAS collaboration; Sanchez, Javier

    2016-01-01

    The EventIndex project consists of the development and deployment of a complete catalogue of events for the ATLAS experiment [1][2] at the LHC accelerator at CERN. In 2015 the ATLAS experiment produced 12 billion real events in 1 million files, and 5 billion simulated events in 8 million files. The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing it in a central Hadoop infrastructure. A subset of this information is copied to an Oracle relational database. This paper presents two components of the ATLAS EventIndex [3]: its data collection supervisor and its companion web interface.

  13. ATLAS EventIndex Data Collection Supervisor and Web Interface

    CERN Document Server

    Garcia Montoro, Carlos; The ATLAS collaboration

    2016-01-01

    The EventIndex project consists of the development and deployment of a complete catalogue of events for the ATLAS experiment at the LHC accelerator at CERN. In 2015 the ATLAS experiment produced 12 billion real events in 1 million files, and 5 billion simulated events in 8 million files. The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing it in a central Hadoop infrastructure. A subset of this information is copied to an Oracle relational database. These slides present two components of the ATLAS EventIndex: its data collection supervisor and its companion web interface.

  14. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.

  15. The JANA calibrations and conditions database API

    International Nuclear Information System (INIS)

    Lawrence, David

    2010-01-01

    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases to web services to flat files as the backend. A web service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues, including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, implied by the run currently being analyzed and the environment, relieving developers from implementing such details.

  16. The JANA calibrations and conditions database API

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, David, E-mail: davidl@jlab.or [12000 Jefferson Ave., Suite 8, Newport News, VA 23601 (United States)

    2010-04-01

    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases to web services to flat files as the backend. A web service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues, including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, implied by the run currently being analyzed and the environment, relieving developers from implementing such details.
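
    The real JCalibration API is C++, and its exact signatures are not given in the record; the Python sketch below only illustrates the access pattern described, in which user code asks for a named constants set in one call and the framework infers the run and the backend from context. All names here are assumptions.

        # Illustrative analog of the described access pattern (NOT the real JANA API).
        class JCalibration:
            def __init__(self, run, backend):
                self.run, self.backend = run, backend    # context implied by the framework
            def get(self, namepath):
                # A real backend would dispatch on transport (DB, web service, flat file)
                return self.backend[(self.run, namepath)]

        backend = {(1234, "ECAL/gains"): [0.98, 1.02, 1.00]}   # toy flat-file stand-in
        calib = JCalibration(run=1234, backend=backend)
        gains = calib.get("ECAL/gains")   # the "single line" seen by user code
        print(gains)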

  17. Time-Critical Database Conditions Data-Handling for the CMS Experiment

    CERN Document Server

    De Gruttola, M; Innocente, V; Pierro, A

    2011-01-01

    Automatic, synchronous and, of course, reliable population of the condition database is critical for the correct operation of the online selection as well as of the offline reconstruction and data analysis. We describe here the system put in place in the CMS experiment to automate the processes that centrally populate the database and make condition data promptly available, both online for the high-level trigger and offline for reconstruction. The data are "dropped" by the users in a dedicated service, which synchronizes them and takes care of writing them into the online database. They are then automatically streamed to the offline database, and are hence immediately accessible offline worldwide. This mechanism was used intensively during the 2008 and 2009 operation with cosmic ray challenges and first LHC collision data, and many improvements have been made since. The experience of these first years of operation will be discussed in detail.
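
    The "drop-box" flow described above can be pictured as a small watcher: users drop payload files into a dedicated area, and a service picks them up in order, exports them to the online database, and removes them; streaming to the offline database then happens downstream. The paths, file naming, and export function in this Python sketch are assumptions for illustration.

        # Hedged sketch of the drop-box pattern (all names are illustrative).
        import time
        from pathlib import Path

        DROPBOX = Path("/tmp/conddb_dropbox")        # hypothetical drop area

        def export_to_online_db(payload_file):
            print(f"exporting {payload_file.name} to the online DB")  # stand-in

        def watch_dropbox(poll_seconds=30, max_cycles=1):
            DROPBOX.mkdir(exist_ok=True)
            for _ in range(max_cycles):                  # bounded loop for the example
                for f in sorted(DROPBOX.glob("*.db")):   # process in a defined order
                    export_to_online_db(f)
                    f.unlink()                           # remove once exported
                time.sleep(poll_seconds)

        watch_dropbox(poll_seconds=0)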

  18. Engineering the ATLAS TAG Browser

    CERN Document Server

    Zhang, Q; The ATLAS collaboration

    2011-01-01

    ELSSI is a web-based event metadata (TAG) browser and event-level selection service for ATLAS. TAGs from all ATLAS physics and Monte Carlo data sets are routinely loaded into Oracle databases as an integral part of event processing. As data volumes increase, more and more sites are joining the distributed TAG data hosting topology. Meanwhile, TAG content and database schemata continue to evolve as new user requirements and additional sources of metadata emerge. All of this has posed many challenges to the development of ELSSI, which must support vast amounts of TAG data while source, content, geographic locations, and user query patterns may change over time. In this paper, we describe some of the challenges encountered in the process of developing ELSSI, and the software engineering strategies adopted to address those challenges. Approaches to management of access to data, browsing, data rendering, query building, query validation, execution, connection management, and communication with auxiliary services a...

  19. Engineering the ATLAS TAG Browser

    CERN Document Server

    Zhang, Q; The ATLAS collaboration

    2011-01-01

    ELSSI is a web-based event metadata (TAG) browser and event-level selection service for ATLAS. TAGs from all ATLAS physics and Monte Carlo data sets are routinely loaded into Oracle databases as an integral part of event processing. As data volumes increase, more and more sites are joining the distributed TAG data hosting topology[1]. Meanwhile, TAG content and database schemata continue to evolve as new user requirements and additional sources of metadata emerge. All of this has posed many challenges to the development of ELSSI, which must support vast amounts of TAG data while source, content, geographic locations, and user query patterns may change over time. In this paper, we describe some of the challenges encountered in the process of developing ELSSI, and the software engineering strategies adopted to address those challenges. Approaches to management of access to data, browsing, data rendering, query building, query validation, execution, connection management, and communication with auxiliary service...

  20. Construction of patient specific atlases from locally most similar anatomical pieces

    Science.gov (United States)

    Ramus, Liliane; Commowick, Olivier; Malandain, Grégoire

    2010-01-01

    Radiotherapy planning requires accurate delineations of the critical structures. To avoid manual contouring, atlas-based segmentation can be used to obtain automatic delineations. However, the results strongly depend on the chosen atlas, especially for the head and neck region where the anatomical variability is high. To address this problem, atlases adapted to the patient's anatomy may allow for a better registration, and have already shown an improvement in segmentation accuracy. However, building such atlases requires a criterion to select, from a database, the images most similar to the patient. Moreover, the inter-expert variability of manual contouring may be high and may therefore bias the segmentation if only one image is selected for each region. To tackle these issues, we present an original method to design a piecewise most similar atlas. Given a query image, we propose an efficient criterion to select, for each anatomical region, the K most similar images in a database, taking into account local volume variations possibly induced by the tumor. We then present a new approach to combine the K images selected for each region into a piecewise most similar template. Our results, obtained with 105 CT images of the head and neck, show that our method reduces the over-segmentation seen with an average atlas while being robust to inter-expert manual segmentation variability. PMID:20879395

  1. Class dependency of fuzzy relational database using relational calculus and conditional probability

    Science.gov (United States)

    Deni Akbar, Mohammad; Mizoguchi, Yoshihiro; Adiwijaya

    2018-03-01

    In this paper, we propose a design of a fuzzy relational database that deals with a conditional probability relation using fuzzy relational calculus. Previous research has investigated equivalence classes in fuzzy databases using similarity or approximate relations, and the study of fuzzy dependency using equivalence classes is an interesting topic. Our goal is to introduce a formulation of a fuzzy relational database model using the relational calculus on the category of fuzzy relations. We also introduce general formulas of the relational calculus for database operations such as 'projection', 'selection', 'injection' and 'natural join'. Using the fuzzy relational calculus and conditional probabilities, we introduce the notions of equivalence class, redundancy, and dependency in the theory of fuzzy relational databases.
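
    As a concrete illustration of the database operations the paper formulates, the Python sketch below represents a fuzzy relation as a list of (tuple, membership grade) pairs and implements selection, projection, and natural join with the common max/min conventions. This is a toy rendering under assumed conventions, not the paper's categorical formulation.

        # Toy fuzzy relations: (tuple-as-dict, membership grade in [0, 1]) pairs.
        def select(rel, pred):
            return [(t, mu) for t, mu in rel if pred(t)]

        def project(rel, attrs):
            out = {}
            for t, mu in rel:
                key = tuple((a, t[a]) for a in attrs)
                out[key] = max(out.get(key, 0.0), mu)    # max over merged tuples
            return [(dict(k), mu) for k, mu in out.items()]

        def natural_join(r, s):
            joined = []
            for t1, mu1 in r:
                for t2, mu2 in s:
                    common = set(t1) & set(t2)
                    if all(t1[a] == t2[a] for a in common):
                        joined.append(({**t1, **t2}, min(mu1, mu2)))  # min t-norm
            return joined

        employees = [({"name": "ana", "dept": "db"}, 0.9), ({"name": "bo", "dept": "ml"}, 0.6)]
        depts = [({"dept": "db", "floor": 2}, 0.8)]
        print(natural_join(employees, depts))  # [({'name': 'ana', 'dept': 'db', 'floor': 2}, 0.8)]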

  2. An anatomic transcriptional atlas of human glioblastoma.

    Science.gov (United States)

    Puchalski, Ralph B; Shah, Nameeta; Miller, Jeremy; Dalley, Rachel; Nomura, Steve R; Yoon, Jae-Guen; Smith, Kimberly A; Lankerovich, Michael; Bertagnolli, Darren; Bickley, Kris; Boe, Andrew F; Brouner, Krissy; Butler, Stephanie; Caldejon, Shiella; Chapin, Mike; Datta, Suvro; Dee, Nick; Desta, Tsega; Dolbeare, Tim; Dotson, Nadezhda; Ebbert, Amanda; Feng, David; Feng, Xu; Fisher, Michael; Gee, Garrett; Goldy, Jeff; Gourley, Lindsey; Gregor, Benjamin W; Gu, Guangyu; Hejazinia, Nika; Hohmann, John; Hothi, Parvinder; Howard, Robert; Joines, Kevin; Kriedberg, Ali; Kuan, Leonard; Lau, Chris; Lee, Felix; Lee, Hwahyung; Lemon, Tracy; Long, Fuhui; Mastan, Naveed; Mott, Erika; Murthy, Chantal; Ngo, Kiet; Olson, Eric; Reding, Melissa; Riley, Zack; Rosen, David; Sandman, David; Shapovalova, Nadiya; Slaughterbeck, Clifford R; Sodt, Andrew; Stockdale, Graham; Szafer, Aaron; Wakeman, Wayne; Wohnoutka, Paul E; White, Steven J; Marsh, Don; Rostomily, Robert C; Ng, Lydia; Dang, Chinh; Jones, Allan; Keogh, Bart; Gittleman, Haley R; Barnholtz-Sloan, Jill S; Cimino, Patrick J; Uppin, Megha S; Keene, C Dirk; Farrokhi, Farrokh R; Lathia, Justin D; Berens, Michael E; Iavarone, Antonio; Bernard, Amy; Lein, Ed; Phillips, John W; Rostad, Steven W; Cobbs, Charles; Hawrylycz, Michael J; Foltz, Greg D

    2018-05-11

    Glioblastoma is an aggressive brain tumor that carries a poor prognosis. The tumor's molecular and cellular landscapes are complex, and their relationships to histologic features routinely used for diagnosis are unclear. We present the Ivy Glioblastoma Atlas, an anatomically based transcriptional atlas of human glioblastoma that aligns individual histologic features with genomic alterations and gene expression patterns, thus assigning molecular information to the most important morphologic hallmarks of the tumor. The atlas and its clinical and genomic database are freely accessible online data resources that will serve as a valuable platform for future investigations of glioblastoma pathogenesis, diagnosis, and treatment. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  3. A Conditions Data Management System for HEP Experiments

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00037318; The ATLAS collaboration

    2017-01-01

    The conditions data infrastructures of both ATLAS and CMS have to deal with the management of several Terabytes of data. Distributed computing access to these data requires particular care and attention to manage request rates of up to several tens of kHz. Thanks to the large overlap in use cases and requirements, ATLAS and CMS have worked towards a common solution for conditions data management with the aim of using this design for data-taking in Run 3. In the meantime other experiments, including NA62, have expressed an interest in this cross-experiment initiative. For experiments with a smaller payload volume and complexity, there is particular interest in simplifying the payload storage. The conditions data management model is implemented in a small set of relational database tables. A prototype access toolkit consisting of an intermediate web server has been implemented, using standard technologies available in the Java community. Access is provided through a set of REST services for which the API has been de...
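
    The record above says only that access is provided through a set of REST services over a small set of relational tables. The Python sketch below shows what a client of such a service could look like; the endpoint layout, query parameters, and JSON fields are invented for illustration.

        # Hedged sketch of a conditions REST client (endpoints are hypothetical).
        import json
        from urllib.request import urlopen

        BASE = "http://conditions.example.org/api"   # hypothetical server

        def get_payload(tag, since):
            """Fetch the payload of `tag` whose interval of validity covers `since`."""
            with urlopen(f"{BASE}/iovs?tag={tag}&since={since}") as resp:
                iov = json.load(resp)                # assumed to return the matching IOV
            with urlopen(f"{BASE}/payloads/{iov['payload_hash']}") as resp:
                return resp.read()                   # opaque payload blob

        # payload = get_payload("PixelAlignment-RUN3-v1", since=431810)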

  4. Collecting conditions usage metadata to optimize current and future ATLAS software and processing

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration; Gallas, Elizabeth; Oda, Susumu

    2016-01-01

    Conditions data (for example: alignment, calibration, data quality) are used extensively in the processing of real and simulated data in ATLAS. The volume and variety of the conditions data needed by different types of processing are quite diverse, so optimizing its access requires a careful understanding of conditions usage patterns. These patterns can be quantified by mining representative log files from each type of processing and gathering detailed information about conditions usage for that type of processing into a central repository. In this presentation, we describe the systems developed to collect this conditions usage metadata per job type and describe a few specific (but very different) ways in which it has been used. For example, it can be used to cull specific conditions data into a much more compact package to be used by jobs doing similar types of processing: these customized collections can then be shipped with jobs to be executed on isolated worker nodes (such as HPC farms) that have no netwo...
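
    The mining step described above can be pictured as a small log-scanning job: look for conditions folder accesses in representative job logs and aggregate counts per folder (and, in the real system, per job type). The log line format and folder names below are invented for illustration; only the service name IOVDbSvc is taken from ATLAS usage.

        # Sketch of mining job logs for conditions usage (assumed log format).
        import re
        from collections import Counter

        ACCESS_RE = re.compile(r"IOVDbSvc\s+INFO\s+Folder\s+(\S+)")

        def conditions_usage(log_lines):
            usage = Counter()
            for line in log_lines:
                m = ACCESS_RE.search(line)
                if m:
                    usage[m.group(1)] += 1
            return usage

        sample_log = [
            "IOVDbSvc  INFO  Folder /LAR/Align read 1 objects",
            "IOVDbSvc  INFO  Folder /PIXEL/PixCalib read 3 objects",
            "IOVDbSvc  INFO  Folder /LAR/Align read 1 objects",
        ]
        print(conditions_usage(sample_log))  # Counter({'/LAR/Align': 2, '/PIXEL/PixCalib': 1})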

  5. The SysteMHC Atlas project.

    Science.gov (United States)

    Shao, Wenguang; Pedrioli, Patrick G A; Wolski, Witold; Scurtescu, Cristian; Schmid, Emanuel; Vizcaíno, Juan A; Courcelles, Mathieu; Schuster, Heiko; Kowalewski, Daniel; Marino, Fabio; Arlehamn, Cecilia S L; Vaughan, Kerrie; Peters, Bjoern; Sette, Alessandro; Ottenhoff, Tom H M; Meijgaarden, Krista E; Nieuwenhuizen, Natalie; Kaufmann, Stefan H E; Schlapbach, Ralph; Castle, John C; Nesvizhskii, Alexey I; Nielsen, Morten; Deutsch, Eric W; Campbell, David S; Moritz, Robert L; Zubarev, Roman A; Ytterberg, Anders Jimmy; Purcell, Anthony W; Marcilla, Miguel; Paradela, Alberto; Wang, Qi; Costello, Catherine E; Ternette, Nicola; van Veelen, Peter A; van Els, Cécile A C M; Heck, Albert J R; de Souza, Gustavo A; Sollid, Ludvig M; Admon, Arie; Stevanovic, Stefan; Rammensee, Hans-Georg; Thibault, Pierre; Perreault, Claude; Bassani-Sternberg, Michal; Aebersold, Ruedi; Caron, Etienne

    2018-01-04

    Mass spectrometry (MS)-based immunopeptidomics investigates the repertoire of peptides presented at the cell surface by major histocompatibility complex (MHC) molecules. The broad clinical relevance of MHC-associated peptides, e.g. in precision medicine, provides a strong rationale for the large-scale generation of immunopeptidomic datasets and recent developments in MS-based peptide analysis technologies now support the generation of the required data. Importantly, the availability of diverse immunopeptidomic datasets has resulted in an increasing need to standardize, store and exchange this type of data to enable better collaborations among researchers, to advance the field more efficiently and to establish quality measures required for the meaningful comparison of datasets. Here we present the SysteMHC Atlas (https://systemhcatlas.org), a public database that aims at collecting, organizing, sharing, visualizing and exploring immunopeptidomic data generated by MS. The Atlas includes raw mass spectrometer output files collected from several laboratories around the globe, a catalog of context-specific datasets of MHC class I and class II peptides, standardized MHC allele-specific peptide spectral libraries consisting of consensus spectra calculated from repeat measurements of the same peptide sequence, and links to other proteomics and immunology databases. The SysteMHC Atlas project was created and will be further expanded using a uniform and open computational pipeline that controls the quality of peptide identifications and peptide annotations. Thus, the SysteMHC Atlas disseminates quality controlled immunopeptidomic information to the public domain and serves as a community resource toward the generation of a high-quality comprehensive map of the human immunopeptidome and the support of consistent measurement of immunopeptidomic sample cohorts. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Image File - TP Atlas | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Data name: Image File. DOI: 10.18908/lsdba.nbdc01161-004. Description of data contents: Network diagrams (in PNG format) for each project. One project has one pathway file o...

  7. Big Data tools as applied to ATLAS event data

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225336; The ATLAS collaboration; Gardner, Robert; Bryant, Lincoln

    2017-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Logfiles, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and associated analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data. Such modes would simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning environments and to...

  8. Calibration for the ATLAS Level-1 Calorimeter-Trigger

    International Nuclear Information System (INIS)

    Foehlisch, F.

    2007-01-01

    This thesis describes developments and tests that are necessary to operate the Pre-Processor of the ATLAS Level-1 Calorimeter Trigger for data acquisition. The major tasks of the Pre-Processor comprise the digitizing, time-alignment and calibration of signals coming from the ATLAS calorimeter. Dedicated hardware has been developed that must be configured in order to fulfill these tasks. Software has been developed that implements the register model of the Pre-Processor Modules and allows the Pre-Processor to be set up. In order to configure the Pre-Processor in the context of an ATLAS run, user settings and the results of calibration measurements are used to derive adequate settings for the registers of the Pre-Processor. The procedures that allow the required measurements to be performed and the results to be stored in a database are demonstrated. Furthermore, tests that go along with the ATLAS installation are presented and results are shown. (orig.)

  9. Calibration for the ATLAS Level-1 Calorimeter-Trigger

    Energy Technology Data Exchange (ETDEWEB)

    Foehlisch, F.

    2007-12-19

    This thesis describes developments and tests that are necessary to operate the Pre-Processor of the ATLAS Level-1 Calorimeter Trigger for data acquisition. The major tasks of the Pre-Processor comprise the digitizing, time-alignment and calibration of signals coming from the ATLAS calorimeter. Dedicated hardware has been developed that must be configured in order to fulfill these tasks. Software has been developed that implements the register model of the Pre-Processor Modules and allows the Pre-Processor to be set up. In order to configure the Pre-Processor in the context of an ATLAS run, user settings and the results of calibration measurements are used to derive adequate settings for the registers of the Pre-Processor. The procedures that allow the required measurements to be performed and the results to be stored in a database are demonstrated. Furthermore, tests that go along with the ATLAS installation are presented and results are shown. (orig.)

  10. Development of a Whole Body Atlas for Radiation Therapy Planning and Treatment Optimization

    International Nuclear Information System (INIS)

    Qatarneh, Sharif

    2006-01-01

    The main objective of radiation therapy is to obtain the highest possible probability of tumor cure while minimizing adverse reactions in healthy tissues. A crucial step in the treatment process is to determine the location and extent of the primary tumor and its locoregional lymphatic spread in relation to adjacent radiosensitive anatomical structures and organs at risk. These volumes must also be accurately delineated with respect to external anatomic reference points, preferably on surrounding bony structures. At the same time, it is essential to have the best possible physical and radiobiological knowledge about the radiation responsiveness of the target tissues and organs at risk in order to achieve a more accurate optimization of the treatment outcome. A computerized whole body Atlas has therefore been developed to serve as a dynamic database, with systematically integrated knowledge, comprising all necessary physical and radiobiological information about common target volumes and normal tissues. The Atlas also contains a database of segmented organs and a lymph node topography, which was based on the Visible Human dataset, to form a standard reference geometry of organ systems. The reference knowledge base and the standard organ dataset can be utilized for Atlas-based image processing and analysis in radiation therapy planning and for biological optimization of the treatment outcome. Atlas-based segmentation procedures were utilized to transform the reference organ dataset of the Atlas into the geometry of individual patients. The anatomic organs and target volumes of the database can be converted by elastic transformation into those of the individual patient for final treatment planning. Furthermore, a database of reference treatment plans was started by implementing state-of-the-art biologically based radiation therapy planning techniques such as conformal, intensity modulated, and radiobiologically optimized treatment planning. The computerized Atlas can

  11. Run 2 ATLAS Trigger and Detector Performance

    CERN Document Server

    Solovyanov, Oleg; The ATLAS collaboration

    2018-01-01

    The 2nd LHC run started in June 2015 with a proton-proton centre-of-mass collision energy of 13 TeV. During 2016 and 2017, the LHC delivered an unprecedented amount of luminosity under increasingly challenging conditions in terms of peak luminosity, pile-up and trigger rates. In this talk, the LHC running conditions and the improvements made to the ATLAS experiment in the course of Run 2 will be discussed, and the latest ATLAS detector and trigger performance results from Run 2 will be presented.

  12. The Mitochondrial Protein Atlas: A Database of Experimentally Verified Information on the Human Mitochondrial Proteome.

    Science.gov (United States)

    Godin, Noa; Eichler, Jerry

    2017-09-01

    Given its central role in various biological systems, as well as its involvement in numerous pathologies, the mitochondrion is one of the best-studied organelles. However, although the mitochondrial genome has been extensively investigated, protein-level information remains partial, and in many cases, hypothetical. The Mitochondrial Protein Atlas (MPA; URL: lifeserv.bgu.ac.il/wb/jeichler/MPA) is a database that provides a complete, manually curated inventory of only experimentally validated human mitochondrial proteins. The MPA presently contains 911 unique protein entries, each of which is associated with at least one experimentally validated and referenced mitochondrial localization. The MPA also contains experimentally validated and referenced information defining function, structure, involvement in pathologies, interactions with other MPA proteins, as well as the method(s) of analysis used in each instance. Connections to relevant external data sources are offered for each entry, including links to NCBI Gene, PubMed, and Protein Data Bank. The MPA offers a prototype for other information sources that allow for a distinction between what has been confirmed and what remains to be verified experimentally.

  13. Remote control of ATLAS-MPX Network and Data Visualization

    International Nuclear Information System (INIS)

    Turecek, D.; Holy, T.; Pospisil, S.; Vykydal, Z.

    2011-01-01

    The ATLAS-MPX Network is a network of 15 Medipix2-based detector devices installed in various positions in the ATLAS detector at CERN, Geneva. The aim of the network is to perform a real-time measurement of the spectral characteristics and the composition of the radiation inside the ATLAS detector during its operation. The remote control system of ATLAS-MPX controls and configures all the devices from one place via a web interface accessible from different operating systems. The Data Visualization application, also with a web interface, has been developed in order to present measured data to the scientific community. It allows browsing through recorded frames from all devices and searching for specific frames by date and time. Charts containing the number of different types of tracks in each frame as a function of time may be rendered from the database.
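
    The frame search described above maps naturally onto a time-indexed table. The Python sketch below uses sqlite3 with an invented schema (device, frame start time, track count) purely to illustrate the date/time query and the input for the track-count-versus-time charts.

        # Sketch of searching recorded frames by time window (assumed schema).
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE frames (device TEXT, t_start REAL, n_tracks INTEGER)")
        db.executemany("INSERT INTO frames VALUES (?,?,?)",
                       [("MPX01", 1.3e9, 42), ("MPX01", 1.3e9 + 60, 55)])

        def frames_between(device, t0, t1):
            return db.execute(
                "SELECT t_start, n_tracks FROM frames"
                " WHERE device=? AND t_start BETWEEN ? AND ? ORDER BY t_start",
                (device, t0, t1)).fetchall()

        print(frames_between("MPX01", 1.3e9 - 1, 1.3e9 + 100))  # [(t, n_tracks), ...]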

  14. Interactive microbial distribution analysis using BioAtlas

    DEFF Research Database (Denmark)

    Lund, Jesper; List, Markus; Baumbach, Jan

    2017-01-01

    ...to analyze microbial distribution in a location-specific context. BioAtlas is an interactive web application that closes this gap between sequence databases, taxonomy profiling and geo/body-location information. It enables users to browse taxonomically annotated sequences across (i) the world map, (ii) human body maps and (iii) user-defined maps. It further allows for (iv) uploading of own sample data, which can be placed on existing maps to (v) browse the distribution of the associated taxonomies. Finally, BioAtlas enables users to (vi) contribute custom maps (e.g. for plants or animals) and to map...

  15. ATLAS operations in the GridKa T1/T2 Cloud

    International Nuclear Information System (INIS)

    Duckeck, G; Serfon, C; Walker, R; Harenberg, T; Kalinin, S; Schultes, J; Kawamura, G; Leffhalm, K; Meyer, J; Nderitu, S; Olszewski, A; Petzold, A; Sundermann, J E

    2011-01-01

    The ATLAS GridKa cloud consists of the GridKa Tier1 centre and 12 Tier2 sites from five countries associated to it. Over the last years a well-defined and tested operation model has evolved. Several core cloud services need to be operated and closely monitored: distributed data management, involving data replication, deletion and consistency checks; support for ATLAS production activities, which includes Monte Carlo simulation, reprocessing and pilot factory operation; continuous checks of data availability and performance for user analysis; software installation and database setup. Of crucial importance is good communication between sites, the operations team and ATLAS, as well as efficient cloud-level monitoring tools. The paper gives an overview of the operations model and ATLAS services within the cloud.

  16. Big Data Analytics Tools as Applied to ATLAS Event Data

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide-area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and this analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of big data, statistical and machine learning tools...

  17. NATIONAL TRANSPORTATION ATLAS DATABASE: RAILROADS 2011

    Data.gov (United States)

    Kansas Data Access and Support Center — The Rail Network is a comprehensive database of the nation's railway system at the 1:100,000 scale or better. The data set covers all 50 States plus the District of...

  18. The ATLAS Electron and Photon Trigger

    CERN Document Server

    Jones, Samuel David; The ATLAS collaboration

    2018-01-01

    ATLAS electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential to record signals for a wide variety of physics: from Standard Model processes to searches for new phenomena. To cope with ever-increasing luminosity and more challenging pile-up conditions at a centre-of-mass energy of 13 TeV, the trigger selections need to be optimized to control the rates and keep efficiencies high. The ATLAS electron and photon trigger performance in Run 2 will be presented, including both the role of the ATLAS calorimeter in electron and photon identification and details of new techniques developed to maintain high performance even in high pile-up conditions.

  19. Land Condition Trend Analysis Avian Database: Ecological Guild-based Summaries

    National Research Council Canada - National Science Library

    Schreiber, Eric

    1998-01-01

    Documentation capabilities of the Land Condition Trend Analysis (LCTA) bird database are often limited to the generation of installation-wide species checklists, estimates of relative abundance, and evidence of breeding activity...

  20. TU-AB-202-10: How Effective Are Current Atlas Selection Methods for Atlas-Based Auto-Contouring in Radiotherapy Planning?

    Energy Technology Data Exchange (ETDEWEB)

    Peressutti, D; Schipaanboord, B; Kadir, T; Gooding, M [Mirada Medical Limited, Science and Medical Technology, Oxford (United Kingdom); Soest, J van; Lustberg, T; Elmpt, W van; Dekker, A [Maastricht University Medical Centre, Department of Radiation Oncology MAASTRO - GROW School for Oncology Developmental Biology, Maastricht (Netherlands)

    2016-06-15

    Purpose: To investigate the effectiveness of atlas selection methods for improving atlas-based auto-contouring in radiotherapy planning. Methods: 275 clinically delineated H&N cases were employed as an atlas database from which atlases would be selected. A further 40 previously contoured cases were used as test patients against which atlas selection could be performed and evaluated. 26 variations of selection methods proposed in the literature and used in commercial systems were investigated. Atlas selection methods comprised either global or local image similarity measures, computed after rigid or deformable registration, combined with direct atlas search or with an intermediate template image. Workflow Box (Mirada-Medical, Oxford, UK) was used for all auto-contouring. Results on brain, brainstem, parotids and spinal cord were compared to random selection, a fixed set of 10 “good” atlases, and optimal selection by an “oracle” with knowledge of the ground truth. The Dice score and the average ranking with respect to the “oracle” were employed to assess the performance of the top 10 atlases selected by each method. Results: The fixed set of “good” atlases outperformed all of the atlas-patient image similarity-based selection methods (mean Dice 0.715 cf. 0.603 to 0.677). In general, methods based on exhaustive comparison of local similarity measures showed better average Dice scores (0.658 to 0.677) than the use of either a template image (0.655 to 0.672) or global similarity measures (0.603 to 0.666). The performance of image-based selection methods was found to be only slightly better than random selection (0.645). Dice scores given relate to the left parotid, but similar result patterns were observed for all organs. Conclusion: Intuitively, atlas selection based on the patient CT is expected to improve auto-contouring performance. However, it was found that published approaches performed marginally better than random and use of a fixed set of
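
    The Dice score used throughout this evaluation measures the overlap of two contours as 2|A∩B|/(|A|+|B|). A minimal NumPy sketch, assuming the contours are given as binary masks of equal shape:

        import numpy as np

        def dice_coefficient(mask_a, mask_b):
            # Dice score of two binary masks: 2|A n B| / (|A| + |B|).
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            denom = a.sum() + b.sum()
            if denom == 0:
                return 1.0  # both masks empty: treat as perfect agreement
            return 2.0 * np.logical_and(a, b).sum() / denom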

  1. Renewable energy atlas of the United States.

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J.A.; Hlava, K.; Greenwood, H.; Carr, A. (Environmental Science Division)

    2012-05-01

    The Renewable Energy Atlas (Atlas) of the United States is a compilation of geospatial data focused on renewable energy resources, federal land ownership, and base map reference information. It is designed for the U.S. Department of Agriculture Forest Service (USFS) and other federal land management agencies to evaluate existing and proposed renewable energy projects. Much of the content of the Atlas was compiled at Argonne National Laboratory (Argonne) to support recent and current energy-related Environmental Impact Statements and studies, including the following projects: (1) West-wide Energy Corridor Programmatic Environmental Impact Statement (PEIS) (BLM 2008); (2) Draft PEIS for Solar Energy Development in Six Southwestern States (DOE/BLM 2010); (3) Supplement to the Draft PEIS for Solar Energy Development in Six Southwestern States (DOE/BLM 2011); (4) Upper Great Plains Wind Energy PEIS (WAPA/USFWS 2012, in progress); and (5) Energy Transport Corridors: The Potential Role of Federal Lands in States Identified by the Energy Policy Act of 2005, Section 368(b) (in progress). This report explains how to add the Atlas to your computer and install the associated software; describes each of the components of the Atlas; lists the Geographic Information System (GIS) database content and sources; and provides a brief introduction to the major renewable energy technologies.

  2. ATLAS Detector Control System Data Viewer

    CERN Document Server

    Tsarouchas, Charilaos; Roe, S; Bitenc, U; Fehling-Kaschek, ML; Winkelmann, S; D’Auria, S; Hoffmann, D; Pisano, O

    2011-01-01

    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. DCS Data Viewer (DDV) is a web interface application that provides access to historical data of ATLAS Detector Control System [1] (DCS) parameters written to the database (DB). It has a modular and flexible design and is structured using a client-server architecture. The server can be operated stand-alone with a command-line interface to the data, while the client offers a user-friendly, browser-independent interface. The selection of the metadata of DCS parameters is done via a column-tree view or with a powerful search engine. The final visualisation of the data is done using various plugins such as “value over time” charts, data tables, raw ASCII or structured export to ROOT. Excessive access or malicious use of the database is prevented by dedicated protection mechanisms, allowing the exposure of the tool to hundreds of inexperienced users. The metadata selection and data output features can be used separately by XML con...

  3. Luminosity Measurements with the ATLAS Detector

    CERN Document Server

    Maettig, Stefan; Pauly, T

    For almost all measurements performed at the Large Hadron Collider (LHC), one crucial ingredient is precise knowledge of the integrated luminosity. The determination of, and precision on, the integrated luminosity has direct implications for any cross-section measurement, and its instantaneous measurement gives important feedback on the conditions at the experimental insertions and on the accelerator performance. ATLAS is one of the main experiments at the LHC. In order to provide an accurate and reliable luminosity determination, ATLAS uses a variety of different sub-detectors and algorithms that measure the luminosity simultaneously. One of these sub-detectors is the Beam Conditions Monitor (BCM), which was designed to protect the ATLAS detector from potentially dangerous beam losses. Owing to its fast readout and very clean signals, this diamond detector has in addition provided the official ATLAS luminosity since May 2011. This thesis describes the calibration and performance of the BCM as a luminosity detec...

  4. ATLAS Simulation using Real Data: Embedding and Overlay

    Science.gov (United States)

    Haas, Andrew; ATLAS Collaboration

    2017-10-01

    For some physics processes studied with the ATLAS detector, a more accurate simulation in some respects can be achieved by including real data into simulated events, with substantial potential improvements in the CPU, disk space, and memory usage of the standard simulation configuration, at the cost of significant database and networking challenges. Real proton-proton background events can be overlaid (at the detector digitization output stage) on a simulated hard-scatter process, to account for pileup background (from nearby bunch crossings), cavern background, and detector noise. A similar method is used to account for the large underlying event from heavy ion collisions, rather than directly simulating the full collision. Embedding replaces the muons found in Z→μμ decays in data with simulated taus at the same 4-momenta, thus preserving the underlying event and pileup from the original data event. In all these cases, care must be taken to exactly match detector conditions (beamspot, magnetic fields, alignments, dead sensors, etc.) between the real data event and the simulation. We will discuss the status of these overlay and embedding techniques within ATLAS software and computing.

  5. Automatic structural parcellation of mouse brain MRI using multi-atlas label fusion.

    Directory of Open Access Journals (Sweden)

    Da Ma

    Multi-atlas segmentation propagation has evolved quickly in recent years, becoming a state-of-the-art methodology for automatic parcellation of structural images. However, few studies have applied these methods to preclinical research. In this study, we present a fully automatic framework for mouse brain MRI structural parcellation using multi-atlas segmentation propagation. The framework adopts the similarity and truth estimation for propagated segmentations (STEPS) algorithm, which utilises a locally normalised cross-correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation (STAPLE) framework for multi-label fusion. The segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases with pre-segmented, manually labelled anatomical structures as the gold standard, and optimised parameters were obtained for the STEPS algorithm in the label fusion to achieve the best segmentation accuracy. We showed that our multi-atlas framework resulted in significantly higher segmentation accuracy compared to single-atlas-based segmentation, as well as to the original STAPLE framework.
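
    The label-fusion step can be illustrated with a simplified, globally weighted per-voxel vote. The actual STEPS algorithm weights atlases locally via normalised cross-correlation and refines the fusion with STAPLE's expectation-maximisation, so the sketch below (with hypothetical names) captures only the basic voting idea:

        import numpy as np

        def fuse_labels(label_maps, weights):
            # Weighted per-voxel vote over registered atlas label maps.
            # label_maps: list of integer arrays of identical shape (one per atlas);
            # weights: one scalar per atlas, e.g. from an image-similarity measure.
            labels = np.unique(np.concatenate([m.ravel() for m in label_maps]))
            votes = np.zeros(label_maps[0].shape + (labels.size,))
            for m, w in zip(label_maps, weights):
                for i, lab in enumerate(labels):
                    votes[..., i] += w * (m == lab)
            return labels[np.argmax(votes, axis=-1)]  # winning label at each voxel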

  6. A geochemical atlas of North Carolina, USA

    Science.gov (United States)

    Reid, J.C.

    1993-01-01

    A geochemical atlas of North Carolina, U.S.A., was prepared using National Uranium Resource Evaluation (NURE) stream-sediment data. Before termination of the NURE program, sampling of nearly the entire state (48,666 square miles of land area) was completed and geochemical analyses were obtained. The NURE data are applicable to mineral exploration, agriculture, waste disposal siting issues, health, and environmental studies. Applications in state government include resource surveys to assist mineral exploration by identifying geochemical anomalies and areas of mineralization. Agriculture seeks to identify areas with favorable (or unfavorable) conditions for plant growth, disease, and crop productivity. Trace elements such as cobalt, copper, chromium, iron, manganese, zinc, and molybdenum must be present within narrow ranges in soils for optimum growth and productivity. Trace elements as a contributing factor to disease are of concern to health professionals. Industry can use pH and conductivity data for water samples to site facilities which require specific water quality. The North Carolina NURE database consists of stream-sediment samples, groundwater samples, and stream-water analyses. The statewide database consists of 6,744 stream-sediment sites, 5,778 groundwater sample sites, and 295 stream-water sites. Neutron activation analyses were provided for U, Br, Cl, F, Mn, Na, Al, V, Dy in groundwater and stream water, and for U, Th, Hf, Ce, Fe, Mn, Na, Sc, Ti, V, Al, Dy, Eu, La, Sm, Yb, and Lu in stream sediments. Supplemental analyses by other techniques were reported on U (extractable), Ag, As, Ba, Be, Ca, Co, Cr, Cu, K, Li, Mg, Mo, Nb, Ni, P, Pb, Se, Sn, Sr, W, Y, and Zn for 4,619 stream-sediment samples. A small subset of 334 stream samples was analyzed for gold. The goal of the atlas was to make available the statewide NURE data with minimal interpretation to enable prospective users to modify and manipulate the data for their end use. The atlas provides only

  7. The Atlas of Health and Working Conditions by Occupation. 1. Occupational ranking lists and occupational profiles from periodical occupational health survey data

    NARCIS (Netherlands)

    Broersen, J. P.; van Dijk, F. J.; Weel, A. N.; Verbeek, J. H.

    1995-01-01

    In this article, we describe methods which have been applied in the compilation of the Atlas of Health and Working conditions by Occupation. First, we discuss the need for information systems to identify problems concerning working conditions and health. Such information systems have an exploratory

  8. ATLAS Grid Data Processing: system evolution and scalability

    CERN Document Server

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups, with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in the main ATLAS areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, the development of the next-generation production system architecture is in progress. We report on scaling up the production system for a growing number of users provi...

  9. BioAtlas: Interactive web service for microbial distribution analysis

    DEFF Research Database (Denmark)

    Lund, Jesper; List, Markus; Baumbach, Jan

    Massive amounts of 16S rRNA sequencing data have been stored in publicly accessible databases, such as GOLD, SILVA, GreenGenes (GG), and the Ribosomal Database Project (RDP). Many of these sequences are tagged with geo-locations. Nevertheless, researchers currently lack a user-friendly tool to analyze microbial distribution in a location-specific context. BioAtlas is an interactive web application that closes this gap between sequence databases, taxonomy profiling and geo/body-location information. It enables users to browse taxonomically annotated sequences across (i) the world map, (ii) human...

  10. Atlas of Nuclear Isomers

    International Nuclear Information System (INIS)

    Jain, Ashok Kumar; Maheshwari, Bhoomika; Garg, Swati; Patial, Monika; Singh, Balraj

    2015-01-01

    We present an atlas of nuclear isomers containing the experimental data for the isomers with a half-life ≥ 10 ns together with their various properties such as excitation-energy, half-life, decay mode(s), spin-parity, energies and multipolarities of emitted gamma transitions, etc. The ENSDF database complemented by the XUNDL database has been extensively used in extracting the relevant data. Recent literature from primary nuclear physics journals, and the NSR bibliographic database have been searched to ensure that the compiled data Table is as complete and current as possible. The data from NUBASE-12 have also been checked for completeness, but as far as possible original references have been cited. Many interesting systematic features of nuclear isomers emerge, some of them new; these are discussed and presented in various graphs and figures. The cutoff date for the extraction of data from the literature is August 15, 2015

  11. Atlas of Nuclear Isomers

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Ashok Kumar, E-mail: ajainfph@iitr.ac.in [Department of Physics, Indian Institute of Technology, Roorkee-247667 (India); Maheshwari, Bhoomika; Garg, Swati; Patial, Monika [Department of Physics, Indian Institute of Technology, Roorkee-247667 (India); Singh, Balraj [Department of Physics and Astronomy, McMaster University, Hamilton, Ontario-L8S 4M1 (Canada)

    2015-09-15

    We present an atlas of nuclear isomers containing the experimental data for the isomers with a half-life ≥ 10 ns together with their various properties such as excitation-energy, half-life, decay mode(s), spin-parity, energies and multipolarities of emitted gamma transitions, etc. The ENSDF database complemented by the XUNDL database has been extensively used in extracting the relevant data. Recent literature from primary nuclear physics journals, and the NSR bibliographic database have been searched to ensure that the compiled data Table is as complete and current as possible. The data from NUBASE-12 have also been checked for completeness, but as far as possible original references have been cited. Many interesting systematic features of nuclear isomers emerge, some of them new; these are discussed and presented in various graphs and figures. The cutoff date for the extraction of data from the literature is August 15, 2015.

  12. New developments in file-based infrastructure for ATLAS event selection

    Energy Technology Data Exchange (ETDEWEB)

    Gemmeren, P van; Malon, D M [Argonne National Laboratory, Argonne, Illinois 60439 (United States); Nowak, M, E-mail: gemmeren@anl.go [Brookhaven National Laboratory, Upton, NY 11973-5000 (United States)

    2010-04-01

    In ATLAS software, TAGs are event metadata records that can be stored in various technologies, including ROOT files and relational databases. TAGs are used to identify and extract events that satisfy certain selection predicates, which can be coded as SQL-style queries. TAG collection files support in-file metadata to store information describing all events in the collection. Event Selector functionality has been augmented to provide such collection-level metadata to subsequent algorithms. The ATLAS I/O framework has been extended to allow computational processing of TAG attributes to select or reject events without reading the event data. This capability enables physicists to use more detailed selection criteria than are feasible in an SQL query. For example, the TAGs contain enough information not only to check the number of electrons, but also to calculate their distance to the closest jet, a calculation that would be difficult to express in SQL. Another new development allows ATLAS to write TAGs directly into event data files. This feature can improve performance by supporting advanced event selection capabilities, including computational processing of TAG information, without the need for external TAG file or database access.
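
    A toy example of a predicate that is straightforward to compute over TAG attributes but awkward to express in SQL: accepting events whose leading electron is separated from every jet by an angular distance deltaR > 0.4. The TAG record layout assumed here is hypothetical:

        import math

        def delta_r(eta1, phi1, eta2, phi2):
            # Angular distance in the eta-phi plane, folding dphi into [-pi, pi).
            dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
            return math.hypot(eta1 - eta2, dphi)

        def select_event(tag):
            # Accept events with at least two electrons whose leading electron is
            # separated from every jet by deltaR > 0.4 -- awkward to express in SQL.
            electrons = tag["electrons"]  # list of (eta, phi) pairs; layout is hypothetical
            jets = tag["jets"]
            if len(electrons) < 2:
                return False
            eta_e, phi_e = electrons[0]
            return all(delta_r(eta_e, phi_e, eta_j, phi_j) > 0.4 for eta_j, phi_j in jets)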

  13. High-Performance Secure Database Access Technologies for HEP Grids

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  14. High-Performance Secure Database Access Technologies for HEP Grids

    International Nuclear Information System (INIS)

    Vranicar, Matthew; Weicher, John

    2006-01-01

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that 'Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications'. There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the secure

  15. Failure Atlas for Rolling Bearings in Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    Tallian, T. E.

    2006-01-01

    This Atlas is structured as a supplement to the book: T.E. Tallian, Failure Atlas for Hertz Contact Machine Elements, 2nd edition, ASME Press, New York (1999). The content of the Atlas comprises plate pages from that book, containing bearing failure images, application data, and descriptions of failure mode, image, and suspected failure causes. Rolling bearings are a critical component of the mainshaft system, gearbox and generator in the rapidly developing technology of power-generating wind turbines. The demands for long service life are stringent; the design load, speed and temperature regimes are demanding; and the environmental conditions, including weather, contamination, and impediments to monitoring and maintenance, are often unfavorable. As a result, experience has shown that rolling bearings are prone to a variety of failure modes that may prevent achievement of design lives. Morphological failure diagnosis is extensively used in the failure analysis and improvement of bearing operation, and accumulated experience shows that the failure appearance and mode of failure causation in wind turbine bearings have many distinguishing features. The present Atlas is a first effort to collect an interpreted database of specifically wind-turbine-related rolling bearing failures and make it widely available. The main body of the source book is a comprehensive collection of self-contained pages called Plates, containing failure images, bearing and application data, and three descriptions: failure mode, image and suspected failure causes. The Plates are sorted by main failure mode into chapters, each preceded by a general technical discussion of the failure mode, its appearance and causes. The Plates part is supplemented by an introductory part, describing the appearance classification and failure classification

  16. Development of Beam Conditions Monitor for the ATLAS experiment

    CERN Document Server

    Dolenc Kittelmann, Irena; Mikuž, M

    2008-01-01

    If there is a failure in an element of the accelerator, the resulting beam losses could damage the inner tracking devices of the experiments. This thesis presents the work performed during the development phase of a protection system for the ATLAS experiment at the LHC. The Beam Conditions Monitor (BCM) system is a stand-alone system designed to detect early signs of beam instabilities and trigger a beam abort in case of beam failures. It consists of two detector stations positioned at z = ±1.84 m from the interaction point. Each station comprises four BCM detector modules installed symmetrically around the beam pipe, with sensors located at r = 55 mm. This structure allows distinguishing between anomalous events (beam-gas and beam-halo interactions, beam instabilities) and normal events due to proton-proton interactions by measuring the time-of-flight as well as the signal pulse amplitude from the detector modules on the timescale of nanoseconds. Additionally, the BCM system aims to provide a coarse instan...

  17. Surface Ocean CO2 Atlas Database Version 5 (SOCATv5) (NCEI Accession 0163180)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Surface Ocean CO2 Atlas (SOCAT, www.socat.info) is a synthesis activity by the international marine carbon research community and has more than 100 contributors...

  18. Multilevel Workflow System in the ATLAS Experiment

    International Nuclear Information System (INIS)

    Borodin, M; De, K; Navarro, J Garcia; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprising many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly, a task (comprising many jobs) has become the unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize the electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates the actual workflow tasks, and their jobs are executed across more than a hundred distributed computing sites by PanDA - the ATLAS job-level workload management system. On the outer level, the Database Engine for Tasks (DEFT) empowers production managers with templated workflow definitions. On the next level, the Job Execution and Definition Interface (JEDI) is integrated with PanDA to provide dynamic job definition tailored to the sites' capabilities. We report on scaling up the production system to accommodate a growing number of requirements from the main ATLAS areas: Trigger, Physics and Data Preparation. (paper)
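
    The dataset/task/job hierarchy described above can be sketched with a toy data model; the class and field names below are illustrative only and do not reflect ProdSys2 internals:

        from dataclasses import dataclass, field

        @dataclass
        class Job:
            site: str        # computing site the job runs on
            step: str        # workflow step this job belongs to

        @dataclass
        class Task:
            step: str                          # e.g. "simulate", "digitize", "reconstruct"
            jobs: list = field(default_factory=list)

        @dataclass
        class MetaTask:
            name: str
            tasks: list = field(default_factory=list)

        def expand(meta, steps, jobs_per_task, sites):
            # DEFT-like role: turn a request into one task per step, each with jobs
            # spread across sites (JEDI/PanDA would do the actual brokering).
            for step in steps:
                task = Task(step=step)
                task.jobs = [Job(site=sites[i % len(sites)], step=step)
                             for i in range(jobs_per_task)]
                meta.tasks.append(task)
            return meta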

  19. Conditions Database for the Belle II Experiment

    Science.gov (United States)

    Wood, L.; Elsethagen, T.; Schram, M.; Stephan, E.

    2017-10-01

    The Belle II experiment at KEK is preparing for first collisions in 2017. Processing the large amounts of data that will be produced will require conditions data to be readily available to systems worldwide in a fast and efficient manner that is straightforward for both the user and maintainer. The Belle II conditions database was designed with a straightforward goal: make it as easily maintainable as possible. To this end, HEP-specific software tools were avoided as much as possible and industry standard tools used instead. HTTP REST services were selected as the application interface, which provide a high-level interface to users through the use of standard libraries such as curl. The application interface itself is written in Java and runs in an embedded Payara-Micro Java EE application server. Scalability at the application interface is provided by use of Hazelcast, an open source In-Memory Data Grid (IMDG) providing distributed in-memory computing and supporting the creation and clustering of new application interface instances as demand increases. The IMDG provides fast and efficient access to conditions data via in-memory caching.
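
    Because the interface is plain HTTP REST, a payload lookup reduces to a single request from any standard HTTP client, or from curl on the command line. A minimal Python sketch, with a hypothetical endpoint and URL layout rather than the actual Belle II service paths:

        import json
        import urllib.request

        BASE = "https://conditions.example.org/rest"  # hypothetical endpoint, not the real service

        def get_payload(global_tag, name, run):
            # Fetch one conditions payload valid for the given run via plain HTTP;
            # the URL layout below is illustrative only.
            url = f"{BASE}/globalTags/{global_tag}/payloads?name={name}&run={run}"
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)

        # Equivalent from the shell (same hypothetical URL):
        #   curl "https://conditions.example.org/rest/globalTags/GT/payloads?name=beamspot&run=42"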

  20. Direct estimation of patient attributes from anatomical MRI based on multi-atlas voting

    Directory of Open Access Journals (Sweden)

    Dan Wu

    2016-01-01

    MRI brain atlases are widely used for automated image segmentation, and in particular, recent developments in multi-atlas techniques have shown highly accurate segmentation results. In this study, we extended the role of the atlas library from mere anatomical reference to a comprehensive knowledge database with various patient attributes, such as demographic, functional, and diagnostic information. In addition to using the selected (heavily-weighted) atlases to achieve high segmentation accuracy, we tested whether the non-anatomical attributes of the selected atlases could be used to estimate patient attributes. This can be considered a context-based image retrieval (CBIR) approach, embedded in the multi-atlas framework. We first developed an image similarity measurement to weigh the atlases on a structure-by-structure basis, and then, the attributes of the multiple atlases were weighted to estimate the patient attributes. We tested this concept first by estimating age in a normal population; we then performed functional and diagnostic estimations in Alzheimer's disease patients. The accuracy of the estimated patient attributes was measured against the actual clinical data, and the performance was compared to conventional volumetric analysis. The proposed CBIR framework by multi-atlas voting would be the first step toward a knowledge-based support system for quantitative radiological image reading and diagnosis.

  1. Direct estimation of patient attributes from anatomical MRI based on multi-atlas voting.

    Science.gov (United States)

    Wu, Dan; Ceritoglu, Can; Miller, Michael I; Mori, Susumu

    MRI brain atlases are widely used for automated image segmentation, and in particular, recent developments in multi-atlas techniques have shown highly accurate segmentation results. In this study, we extended the role of the atlas library from mere anatomical reference to a comprehensive knowledge database with various patient attributes, such as demographic, functional, and diagnostic information. In addition to using the selected (heavily-weighted) atlases to achieve high segmentation accuracy, we tested whether the non-anatomical attributes of the selected atlases could be used to estimate patient attributes. This can be considered a context-based image retrieval (CBIR) approach, embedded in the multi-atlas framework. We first developed an image similarity measurement to weigh the atlases on a structure-by-structure basis, and then, the attributes of the multiple atlases were weighted to estimate the patient attributes. We tested this concept first by estimating age in a normal population; we then performed functional and diagnostic estimations in Alzheimer's disease patients. The accuracy of the estimated patient attributes was measured against the actual clinical data, and the performance was compared to conventional volumetric analysis. The proposed CBIR framework by multi-atlas voting would be the first step toward a knowledge-based support system for quantitative radiological image reading and diagnosis.
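
    The attribute-estimation step amounts to a similarity-weighted average over the selected atlases. A minimal sketch, assuming per-atlas similarity scores have already been computed:

        import numpy as np

        def estimate_attribute(similarities, atlas_values):
            # Similarity-weighted average of an attribute (e.g. age) over the
            # selected atlases; weights are normalised to sum to one.
            w = np.asarray(similarities, dtype=float)
            w = w / w.sum()
            return float(np.dot(w, np.asarray(atlas_values, dtype=float)))

        # e.g. estimate_attribute([0.9, 0.7, 0.4], [68.0, 72.0, 55.0]) -> 66.8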

  2. Poster — Thur Eve — 59: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy

    International Nuclear Information System (INIS)

    Mallawi, A; Farrell, T; Diamond, K; Wierzbicki, M

    2014-01-01

    Automated atlas-based segmentation has recently been evaluated for use in planning prostate cancer radiotherapy. In the typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on the final segmentation accuracy. Prostate length (PL), right femoral head diameter (RFHD), and left femoral head diameter (LFHD) were measured in CT images of 20 patients. Each subject was then taken as the target image to which all remaining 19 images were affinely registered. For each pair of registered images, the overlap between prostate and femoral head contours was quantified using the Dice Similarity Coefficient (DSC). Finally, we designed an atlas selection strategy that computed the ratio of PL (prostate segmentation), RFHD (right femur segmentation), and LFHD (left femur segmentation) between the target subject and each subject in the atlas database. The five atlas subjects yielding ratios nearest to one were then selected for further analysis. RFHD and LFHD were excellent parameters for atlas selection, achieving a mean femoral head DSC of 0.82 ± 0.06. PL had a moderate ability to select the most similar prostate, with a mean DSC of 0.63 ± 0.18. The DSC values obtained with the proposed selection method were slightly lower than the maximums established by brute force, but this does not include potential improvements expected with deformable registration. Atlas selection based on PL for the prostate and on femoral head diameter for the femoral heads provides reasonable segmentation accuracy.

  3. Poster — Thur Eve — 59: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Mallawi, A [McMaster University, Medical Physics and Applied Radiation Sciences Department, Hamilton, Ontario (Canada); Farrell, T; Diamond, K; Wierzbicki, M [McMaster University, Medical Physics and Applied Radiation Sciences Department, Hamilton, Ontario (Canada); Juravinski Cancer Centre, Medical Physics Department, Hamilton, Ontario (Canada)

    2014-08-15

    Automated atlas-based segmentation has recently been evaluated for use in planning prostate cancer radiotherapy. In the typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on the final segmentation accuracy. Prostate length (PL), right femoral head diameter (RFHD), and left femoral head diameter (LFHD) were measured in CT images of 20 patients. Each subject was then taken as the target image to which all remaining 19 images were affinely registered. For each pair of registered images, the overlap between prostate and femoral head contours was quantified using the Dice Similarity Coefficient (DSC). Finally, we designed an atlas selection strategy that computed the ratio of PL (prostate segmentation), RFHD (right femur segmentation), and LFHD (left femur segmentation) between the target subject and each subject in the atlas database. The five atlas subjects yielding ratios nearest to one were then selected for further analysis. RFHD and LFHD were excellent parameters for atlas selection, achieving a mean femoral head DSC of 0.82 ± 0.06. PL had a moderate ability to select the most similar prostate, with a mean DSC of 0.63 ± 0.18. The DSC values obtained with the proposed selection method were slightly lower than the maximums established by brute force, but this does not include potential improvements expected with deformable registration. Atlas selection based on PL for the prostate and on femoral head diameter for the femoral heads provides reasonable segmentation accuracy.
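
    The selection described above reduces to ranking atlases by how close the target-to-atlas measurement ratio is to one and keeping the best few. A minimal sketch with hypothetical inputs:

        def select_atlases(target_measure, atlas_measures, k=5):
            # Rank atlases by how close the target/atlas measurement ratio is to one
            # and keep the k best, mirroring the PL/RFHD/LFHD strategy above.
            # atlas_measures: dict mapping atlas id -> measurement (e.g. PL in mm).
            ranked = sorted(atlas_measures.items(),
                            key=lambda item: abs(target_measure / item[1] - 1.0))
            return [atlas_id for atlas_id, _ in ranked[:k]]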

  4. Time-critical Database Condition Data Handling in the CMS Experiment During the First Data Taking Period

    International Nuclear Information System (INIS)

    Cavallari, Francesca; Gruttola, Michele de; Di Guida, Salvatore; Innocente, Vincenzo; Pfeiffer, Andreas; Govi, Giacomo; Pierro, Antonio

    2011-01-01

    Automatic, synchronous and reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. In this complex infrastructure, monitoring and fast detection of errors is a very challenging task. In this paper, we describe the CMS experiment's system for processing and populating the condition databases and making condition data promptly available, both online for the high-level trigger and offline for reconstruction. The data are automatically collected using centralized jobs or are 'dropped' by the users into dedicated services (offline and online drop-boxes), which synchronize them and take care of writing them into the online database. The data are then automatically streamed to the offline database, and thus are immediately accessible offline worldwide. The condition data are managed by different users using a wide range of applications. In normal operation, the database monitor is used to provide simple timing information and the history of all transactions for all database accounts; in the case of faults, it is used to return simple error messages and more complete debugging information.

  5. Iran atlas of offshore renewable energies

    Energy Technology Data Exchange (ETDEWEB)

    Abbaspour, M.; Rahimi, R. [Sharif University of Technology, School of Mechanical engineering, Azadi Ave., Tehran (Iran)

    2011-01-15

    The aim of the present study is to provide an Atlas of IRAN Offshore Renewable Energy Resources (hereafter called 'the Atlas') to map out wave and tidal resources at a national scale, extending over the area of the Persian Gulf and Sea of Oman. Such an Atlas can provide the necessary tools to identify the areas with the greatest resource potential that are within reach of present technology development. To estimate the available tidal energy resources at the site, a two-dimensional tidally driven hydrodynamic numerical model of the Persian Gulf was developed using the hydrodynamic model in the MIKE 21 Flow Model (MIKE 21 HD), with validation using tidal elevation measurements and tidal stream diamonds from Admiralty charts. The results of the model were used to produce a time series of the tidal stream velocity over the simulation period. Moreover, to assess the wave energy potential at this site, a model was developed based on six-hourly data from a third-generation ocean wave model (ISWM - Iranian Sea Wave Model) covering the period 1992-2003. To ensure the information provided in the Atlas is managed and maintained most effectively, all the derived marine resource parameters have been captured in a structured database within a Geographical Information System (GIS), enabling effective data management, presentation and interrogation. (author)
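
    Once the hydrodynamic model yields a current-velocity time series, a first-order tidal resource indicator is the mean kinetic power density P = 0.5·ρ·v³ per unit of cross-sectional area. A minimal post-processing sketch (a standard formula applied after the fact, not part of the MIKE 21 model itself):

        RHO_SEAWATER = 1025.0  # kg/m^3, a typical value

        def mean_tidal_power_density(speeds):
            # Mean kinetic power density (W/m^2) of a tidal stream, P = 0.5*rho*v^3,
            # averaged over a time series of depth-averaged current speeds (m/s).
            return 0.5 * RHO_SEAWATER * sum(v ** 3 for v in speeds) / len(speeds)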

  6. Prime wires for ATLAS

    CERN Multimedia

    2003-01-01

    In an award ceremony on 3 September, ATLAS honoured the French company Axon Cable for its special coaxial cables, which were purpose-built for the Liquid Argon calorimeter modules. Working for CERN since the 1970s, Axon Cable received the ATLAS supplier award last week for its contribution, started in 1996, to the liquid argon calorimeter cables of ATLAS (LAL/Orsay, France and University of Victoria, Canada). Its two sets of mini-coaxial cables, called harnesses "A" and "B", are designed to function under the harsh conditions in the liquid argon (at 90 Kelvin or -183°C) and under extreme radiation (up to several Mrad). The cables are mainly used for the readout of the calorimeters, and are connected to the outside world by 114 signal feedthroughs with 1920 channels each. The signal from the detectors is transmitted directly without any amplification, which imposes tight restrictions on the impedance and on the signal propagation time of the cables. Peter Jenni, ATLAS spokesperson, gives the award for best s...

  7. The ATLAS PanDA Monitoring System and its Evolution

    CERN Document Server

    Klimentov, A; The ATLAS collaboration; Potekhin, M; Wenaus, T

    2011-01-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on PanDA design in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Important to meeting these and other requirements is a comprehensive monitoring system. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. We decided to migrat...

  8. The ATLAS PanDA Monitoring System and its Evolution

    CERN Document Server

    Klimentov, A; The ATLAS collaboration; Potekhin, M; Wenaus, T

    2010-01-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on PanDA design in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Important to meeting these and other requirements is a comprehensive monitoring system. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. We decided to migrat...

  9. Database Description - TP Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Platform for Drug Discovery, Informatics, and Structural Life Science; Research Organization of Information and Systems. [Truncated journal reference: ...3(3):145-54.] External links: original website information. Database maintenance site: National Institute of Genetics, Research Organization of Information and Systems (ROIS). URL of the original website: http://www.tanpa...

  10. Second NASA Technical Interchange Meeting (TIM): Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)

    Science.gov (United States)

    ONeil, D. A.; Mankins, J. C.; Christensen, C. B.; Gresham, E. C.

    2005-01-01

    The Advanced Technology Lifecycle Analysis System (ATLAS), a spreadsheet analysis tool suite, applies parametric equations for sizing and lifecycle cost estimation. Performance, operation, and programmatic data used by the equations come from a Technology Tool Box (TTB) database. In this second TTB Technical Interchange Meeting (TIM), technologists, system model developers, and architecture analysts discussed methods for modeling technology decisions in spreadsheet models, identified specific technology parameters, and defined detailed development requirements. This Conference Publication captures the consensus of the discussions and provides narrative explanations of the tool suite, the database, and applications of ATLAS within NASA s changing environment.
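
    The flavour of parametric lifecycle estimation such spreadsheet tools apply can be illustrated with a toy model combining development cost, a learning-curve production cost and annual operations; the equation and parameter names below are illustrative only, not the actual ATLAS/TTB equations:

        import math

        def lifecycle_cost(dev_cost, first_unit_cost, n_units, ops_per_year, years,
                           learning=0.9):
            # Toy parametric estimate: development + production under a Wright
            # learning curve (unit i costs first_unit_cost * i**b, b = log2(learning))
            # + operations. Purely illustrative.
            b = math.log2(learning)
            production = sum(first_unit_cost * i ** b for i in range(1, n_units + 1))
            return dev_cost + production + ops_per_year * years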

  11. Task Management in the New ATLAS Production System

    CERN Document Server

    De, K; The ATLAS collaboration; Klimentov, A; Potekhin, M; Vaniachine, A

    2013-01-01

    The ATLAS Production System is the top level workflow manager which translates physicists' needs for production level processing into actual workflows executed across about a hundred processing sites used globally by ATLAS. As the production workload increased in volume and complexity in recent years (the ATLAS production tasks count is above one million, with each task containing hundreds or thousands of jobs) there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. Providing a front-end and a management layer for petascale data processing and analysis, the new Production System contains generic subsystems that can be used in a wider range of applications. The main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, the DEFT subsystem manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. Th...

  12. Task Management in the New ATLAS Production System

    CERN Document Server

    De, K; The ATLAS collaboration; Klimentov, A; Potekhin, M; Vaniachine, A

    2014-01-01

    The ATLAS Production System is the top level workflow manager which translates physicists' needs for production level processing into actual workflows executed across about a hundred processing sites used globally by ATLAS. As the production workload increased in volume and complexity in recent years (the ATLAS production tasks count is above one million, with each task containing hundreds or thousands of jobs) there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. Providing a front-end and a management layer for petascale data processing and analysis, the new Production System contains generic subsystems that can be used in a wider range of applications. The main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, the DEFT subsystem manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. Th...

  13. The ATLAS Detector Control System

    International Nuclear Information System (INIS)

    Lantzsch, K; Braun, H; Hirschbuehl, D; Kersten, S; Arfaoui, S; Franz, S; Gutzwiller, O; Schlenker, S; Tsarouchas, C A; Mindur, B; Hartert, J; Zimmermann, S; Talyshev, A; Oliveira Damazio, D; Poblaguev, A; Martin, T; Thompson, P D; Caforio, D; Sbarra, C; Hoffmann, D

    2012-01-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are controlled and monitored by the Detector Control System (DCS) using a highly distributed system of 140 server machines running the industrial SCADA product PVSS. Higher-level control system layers allow for automatic control procedures, efficient error recognition and handling, manage the communication with external systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS data acquisition system. Different databases are used to store the online parameters of the experiment, replicate a subset used for physics reconstruction, and store the configuration parameters of the systems. This contribution describes the computing architecture and software tools to handle this complex and highly interconnected control system.

  14. The ATLAS Detector Control System

    Science.gov (United States)

    Lantzsch, K.; Arfaoui, S.; Franz, S.; Gutzwiller, O.; Schlenker, S.; Tsarouchas, C. A.; Mindur, B.; Hartert, J.; Zimmermann, S.; Talyshev, A.; Oliveira Damazio, D.; Poblaguev, A.; Braun, H.; Hirschbuehl, D.; Kersten, S.; Martin, T.; Thompson, P. D.; Caforio, D.; Sbarra, C.; Hoffmann, D.; Nemecek, S.; Robichaud-Veronneau, A.; Wynne, B.; Banas, E.; Hajduk, Z.; Olszowska, J.; Stanecka, E.; Bindi, M.; Polini, A.; Deliyergiyev, M.; Mandic, I.; Ertel, E.; Marques Vinagre, F.; Ribeiro, G.; Santos, H. F.; Barillari, T.; Habring, J.; Huber, J.; Arabidze, G.; Boterenbrood, H.; Hart, R.; Iakovidis, G.; Karakostas, K.; Leontsinis, S.; Mountricha, E.; Ntekas, K.; Filimonov, V.; Khomutnikov, V.; Kovalenko, S.; Grassi, V.; Mitrevski, J.; Phillips, P.; Chekulaev, S.; D'Auria, S.; Nagai, K.; Tartarelli, G. F.; Aielli, G.; Marchese, F.; Lafarguette, P.; Brenner, R.

    2012-12-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are controlled and monitored by the Detector Control System (DCS) using a highly distributed system of 140 server machines running the industrial SCADA product PVSS. Higher-level control system layers allow for automatic control procedures, efficient error recognition and handling, manage the communication with external systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS data acquisition system. Different databases are used to store the online parameters of the experiment, replicate a subset used for physics reconstruction, and store the configuration parameters of the systems. This contribution describes the computing architecture and software tools to handle this complex and highly interconnected control system.

  15. Final environmental statement related to the Atlas Minerals Division, Atlas Corporation, Atlas Uranium Mill (Grand County, Utah)

    International Nuclear Information System (INIS)

    1979-01-01

    The proposed action is the continuation of Source Material License SUA-917 issued to Atlas Corporation for the operation of the Atlas Uranium Mill in Grand County, Utah, near Moab (Docket No. 40-3453). The present mill was designed for an 1100 MT (1200 ton) per day processing rate with 0.25% uranium ore feed. The actual ore processing rate may vary up to 1450 MT (1600 ton) per day if lower grade ores are processed, but the annual production rate of 836 MT (921 tons) U3O8 will not be exceeded. Possible environmental impacts and adverse effects are identified. Conditions for the protection of the environment are set forth before the license can be renewed

  16. Development and Implementation of a Corriedale Ovine Brain Atlas for Use in Atlas-Based Segmentation.

    Directory of Open Access Journals (Sweden)

    Kishan Andre Liyanage

    Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated by the use of image-processing computer packages. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson's disease and focal epilepsy. 17 female Corriedale ovine brains were imaged in vivo in a 1.5T (low-resolution) MRI scanner. 13 of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5-0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues, resulting in Dice Coefficients of 0.0-0.2. We developed a low-resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error, providing an atlas that can be used to guide further research using ovine brains as a model; it is hosted online for public access.

  17. Non-collision backgrounds in ATLAS

    CERN Document Server

    Gibson, S M; The ATLAS collaboration

    2012-01-01

    The proton-proton collision events recorded by the ATLAS experiment sit on top of a background that is due to both collision debris and non-collision components. The latter comprises three types: beam-induced backgrounds, cosmic particles and detector noise. We present studies that focus on the first two of these. We give a detailed description of beam-related and cosmic backgrounds based on the full 2011 ATLAS data set, and present their rates throughout the whole data-taking period. Studies of correlations between the tertiary proton halo and muon backgrounds, as well as between residual pressure and the resulting beam-gas events seen in beam-condition monitors, will be presented. Results of simulations based on the LHC geometry and its parameters will be presented; they help to better understand the features of beam-induced backgrounds in each ATLAS sub-detector. The studies of beam-induced backgrounds in ATLAS reveal their characteristics and serve as a basis for designing rejection tools that can be applied in physic...

  18. MARS input data for steady-state calculation of ATLAS

    International Nuclear Information System (INIS)

    Park, Hyun Sik; Euh, D. J.; Choi, K. Y.; Kwon, T. S.; Jeong, J. J.; Baek, W. P.

    2004-12-01

    An integral effect test loop for Pressurized Water Reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), is under construction by the Thermal-Hydraulics Safety Research Division of the Korea Atomic Energy Research Institute (KAERI). This report includes calculation sheets of the input for the best-estimate system analysis code, the MARS code, based on the ongoing design features of ATLAS. The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, APR1400. The contents of this report are divided into three parts: (1) core and reactor vessel, (2) steam generator and steam line, and (3) primary piping, pressurizer and reactor coolant pump. The steady-state analysis for the ATLAS facility will be performed based on these calculation sheets, and its results will be applied to the detailed design of ATLAS. Additionally, the calculation results will contribute to determining optimum test conditions and preliminary operational test conditions for the steady-state and transient experiments.
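
    The quoted scale factors fix the remaining geometric ratios directly; a hedged sketch of the arithmetic (the power/flow rule below is a common integral-test-facility scaling assumption, not a figure taken from this report):

    ```python
    # Geometric scaling implied by a 1/2 length scale and 1/144 area scale.
    length_scale = 1.0 / 2.0
    area_scale = 1.0 / 144.0

    volume_scale = length_scale * area_scale           # 1/288
    # Under ideal full-pressure scaling, power and mass-flow ratios often
    # follow the (area * sqrt(length)) rule -- an assumption for illustration.
    power_scale = area_scale * length_scale ** 0.5     # ~1/204

    print(f"volume scale = 1/{1 / volume_scale:.0f}")
    print(f"power/flow scale ~ 1/{1 / power_scale:.0f}")
    ```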

  19. Evolution of Database Replication Technologies for WLCG

    CERN Document Server

    Baranowski, Zbigniew; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-01-01

    In this article we summarize several years of experience with the database replication technologies used at WLCG, and we provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier-1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.

  20. Data Integration for Spatio-Temporal Patterns of Gene Expression of Zebrafish development: the GEMS database

    Directory of Open Access Journals (Sweden)

    Belmamoune Mounia

    2008-06-01

    Full Text Available The Gene Expression Management System (GEMS) is a database system for patterns of gene expression. These patterns result from systematic whole-mount fluorescent in situ hybridization studies on zebrafish embryos. GEMS is an integrative platform that addresses one of the important challenges of developmental biology: how to integrate genetic data that underpin morphological changes during embryogenesis. Our motivation to build this system was the need to organize and compare multiple patterns of gene expression at the tissue level. Integration with other developmental and biomolecular databases will further support our understanding of development. GEMS operates in concert with a database containing a digital atlas of the zebrafish embryo; this digital atlas of zebrafish development was conceived prior to the expansion of GEMS. The atlas contains 3D volume models of canonical stages of zebrafish development in which each volume model element is annotated with an anatomical term. These terms are extracted from a formal anatomical ontology, i.e. the Developmental Anatomy Ontology of Zebrafish (DAOZ). In GEMS, anatomical terms from this ontology, together with terms from the Gene Ontology (GO), are also used to annotate patterns of gene expression, thereby providing mechanisms for integration and retrieval. The annotations are the glue for integration of patterns of gene expression in GEMS as well as in other biomolecular databases. On the one hand, zebrafish anatomy terminology allows gene expression data within GEMS to be integrated with phenotypical data in the 3D atlas of zebrafish development. On the other hand, GO terms extend GEMS expression pattern integration to a wide range of bioinformatics resources.
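
    The integration mechanism can be pictured as a join on shared ontology terms; a toy sketch (ours, with invented term IDs and record layouts):

    ```python
    # Atlas volume elements and expression patterns both carry anatomy terms,
    # so a common term is the join key between the two datasets.
    atlas_elements = {
        "vol_stage24_0412": {"anatomy_term": "DAOZ:notochord"},
        "vol_stage24_0917": {"anatomy_term": "DAOZ:optic_vesicle"},
    }
    expression_patterns = [
        {"gene": "shha", "anatomy_term": "DAOZ:notochord"},
        {"gene": "pax6a", "anatomy_term": "DAOZ:optic_vesicle"},
    ]

    # For each pattern, find the atlas elements with the same annotation.
    for pattern in expression_patterns:
        matches = [element_id
                   for element_id, element in atlas_elements.items()
                   if element["anatomy_term"] == pattern["anatomy_term"]]
        print(pattern["gene"], "->", matches)
    ```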

  1. The ATLAS Inner Detector

    CERN Document Server

    Gray, HM; The ATLAS collaboration

    2012-01-01

    The ATLAS experiment at the LHC is equipped with a charged-particle tracking system, the Inner Detector, built from three subdetectors, which provide high-precision measurements thanks to a fine detector granularity. The Pixel and microstrip (SCT) subdetectors, which use silicon technology, are complemented by the Transition Radiation Tracker. Since the LHC startup in 2009, the ATLAS inner tracker has played a central role in many ATLAS physics analyses. Rapid improvements in the calibration and alignment of the detector allowed it to reach nearly the nominal performance within a few months. The tracking performance proved to be stable as the LHC luminosity increased by five orders of magnitude during the 2010 proton run. New developments in the offline reconstruction for the 2011 run, which will improve the tracking performance in high pile-up conditions as well as in highly boosted jets, will be discussed.

  2. The ATLAS/TILECAL Detector Control System

    CERN Document Server

    Santos, H; The ATLAS collaboration

    2010-01-01

    Tilecal, the barrel hadronic calorimeter of ATLAS, is a sampling calorimeter in which scintillating tiles are embedded in an iron matrix. The tiles are optically coupled to wavelength-shifting fibers that carry the optical signal to photomultipliers. It has a cylindrical shape and is made of three cylinders: the Long Barrel, with the LBA and LBC partitions, and the two Extended Barrels, with the EBA and EBC partitions. The main task of the Tile calorimeter Detector Control System (DCS) is to enable the coherent and safe operation of the calorimeter. All actions initiated by the operator, as well as all errors, warnings and alarms concerning the hardware of the detector, are handled by DCS. The Tile calorimeter DCS controls and monitors mainly the low-voltage and high-voltage power supply systems, but it is also interfaced with the infrastructure (cooling system and racks), the laser and cesium calibration systems, the data acquisition system, configuration and conditions databases and the detector safety system. In...

  3. HTR fuel modelling with the ATLAS code. Thermal mechanical behaviour and fission product release assessment

    International Nuclear Information System (INIS)

    Guillermier, Pierre; Daniel, Lucile; Gauthier, Laurent

    2009-01-01

    generally in the fuel element (pebble or compact) aims at estimating the source term of the fission products released outside the fuel element in normal operation or accidental conditions. In ATLAS, the transport mechanisms are modelled in a single transport law using effective diffusion coefficients for the fission product species in the different constitutive materials. The verification and validation of the ATLAS code rest on two main steps: - A testing plan on fuel particle thermal-mechanical behaviour has been carried out regarding sensitivity to dimensional parameters and physical properties such as kernel diameter, density, layer thicknesses and pyrocarbon layer anisotropy. The obtained results allow the design for manufacture to be justified and specified. - Regarding fission product release under core heat-up accident conditions, the IAEA Coordinated Research Project 6 on 'Advances in HTGR Fuel Technology Development' benchmark is the basis of the ATLAS code verification step. The ATLAS results obtained on IAEA benchmark cases with analytical solutions demonstrate that the models used fit the physical, chemical and mathematical laws. Regarding past irradiation tests and heating tests, ATLAS results show good agreement with the experimental database measurements. Comparison of ATLAS code results with analytical and experimental data allows confidence zones, where the ATLAS code gives accurate results, and critical limits to be defined. These limits show where R and D efforts on models and material properties are needed to refine laws and models. (author)

  4. A new experiment-agnostic mechanism to persistify and serve the detector geometry of ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00211497; The ATLAS collaboration; Boudreau, Joseph; Vukotic, Ilija

    2017-01-01

    The complex geometry of the whole detector of the ATLAS experiment at the LHC is currently stored only in custom online databases, from which it is built on-the-fly on request. Accessing the online geometry guarantees access to the latest version of the detector description, but requires the setup of the full ATLAS software framework "Athena", which provides the online services and the tools to retrieve the data from the database. This operation is cumbersome and slows down the applications that need to access the geometry. Moreover, all applications that need to access the detector geometry must be built and run on the same platform as the ATLAS framework, preventing the usage of the actual detector geometry in stand-alone applications. Here we propose a new mechanism to persistify and serve the geometry of HEP experiments. The new mechanism is composed of a new file format and the modules to make use of it. The new file format allows the whole detector description to be stored locally in a flat file, and it is e...

  5. Scaling up ATLAS production system for the LHC Run 2 and beyond: project ProdSys2

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; García Navarro, José Enrique; Golubkov, Dmitry; Klimentov, Alexei; Maeno, Tadashi; Vaniachine, Alexandre

    2015-01-01

    The Big Data processing needs of the ATLAS experiment grow continuously, as more data and more use cases emerge. For Big Data processing the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, submitted by the ATLAS workload management system (PanDA) and executed on the Grid. Our experience shows that the rate of task submission grows exponentially over the years. To scale up the ATLAS production system for new challenges, we started the ProdSys2 project. PanDA has been upgraded with the Job Execution and Definition Interface (JEDI). Patterns in ATLAS data transformation workflows composed of many tasks provided a scalable production system framework for template definitions of many-task workflows. These workflows are being implemented in the Database Engine for Tasks (DEfT), which generates individual tasks for processing ...

  6. Distributed data collection for a database of radiological image interpretations

    Science.gov (United States)

    Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.

    1997-01-01

    The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window-based 2048 x 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high-resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.

  7. The Atlas load protection switch

    CERN Document Server

    Davis, H A; Dorr, G; Martínez, M; Gribble, R F; Nielsen, K E; Pierce, D; Parsons, W M

    1999-01-01

    Atlas is a high-energy pulsed-power facility under development to study materials properties and hydrodynamics experiments under extreme conditions. Atlas will implode heavy liner loads (m ~ 45 g) with a peak current of 27-32 MA delivered in 4 microseconds, and is energized by 96, 240 kV Marx generators storing a total of 23 MJ. A key design requirement for Atlas is obtaining useful data for 95% of all loads installed on the machine. Materials response calculations show current from a prefire can damage the load, requiring expensive and time-consuming replacement. Therefore, we have incorporated a set of fast-acting mechanical switches in the Atlas design to reduce the probability of a prefire damaging the load. These switches, referred to as the load protection switches, short the load through a very low inductance path during system charge. Once the capacitors have reached full charge, the switches open on a time scale short compared to the bank charge time, allowing current to flow to the load when the trigger pu...

  8. Leveraging the checkpoint-restart technique for optimizing CPU efficiency of ATLAS production applications on opportunistic platforms

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2017-01-01

    Data processing applications of the ATLAS experiment, such as event simulation and reconstruction, spend a considerable amount of time in the initialization phase. This phase includes loading a large number of shared libraries, reading detector geometry and conditions data from external databases, building a transient representation of the detector geometry and initializing various algorithms and services. In some cases the initialization step can take as long as 10-15 minutes. Such slow initialization, being inherently serial, has a significant negative impact on the overall CPU efficiency of the production job, especially when the job is executed on opportunistic, often short-lived, resources such as commercial clouds or volunteer computing. In order to improve this situation, we can take advantage of the fact that ATLAS runs large numbers of production jobs with similar configuration parameters (e.g. jobs within the same production task). This allows us to checkpoint one job at the end of its configuration step a...
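
    A toy illustration of the checkpoint idea (ours; real systems snapshot the whole process image with tools such as DMTCP, whereas here a pickled state object stands in for the concept): run the expensive serial initialization once, save the result, and let subsequent jobs restart from the snapshot.

    ```python
    # Checkpoint-after-initialization sketch: skip the serial phase on restart.
    import os
    import pickle
    import time

    CHECKPOINT = "init_state.pkl"  # hypothetical snapshot file

    def expensive_initialization() -> dict:
        time.sleep(2)  # stands in for loading libraries, geometry, conditions
        return {"geometry": "...", "conditions": "...", "algorithms": "..."}

    def get_initialized_state() -> dict:
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT, "rb") as f:
                return pickle.load(f)   # "restart": reuse the saved state
        state = expensive_initialization()
        with open(CHECKPOINT, "wb") as f:
            pickle.dump(state, f)       # "checkpoint" after configuration
        return state

    state = get_initialized_state()
    ```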

  9. Replacement of benthic communities in two Neoproterozoic-Cambrian subtropical-to-temperate rift basins, High Atlas and Anti-Atlas, Morocco

    Science.gov (United States)

    Clausen, Sébastien; Álvaro, J. Javier; Zamora, Samuel

    2014-10-01

    The 'Cambrian explosion' is often introduced as a major shift in benthic marine communities, with a coeval decline of microbial consortia related to the diversification of metazoans and the development of bioturbation (the 'Agronomic Revolution'). Successive community replacements have been reported along with ecosystem diversification and an increase in guild complexity from Neoproterozoic to Cambrian times. This process is recorded worldwide but with regional diachroneities, some of them directly controlled by the geodynamic conditions of sedimentary basins. The southern High Atlas and Anti-Atlas of Morocco record the development of two rifts, Tonian (?) - early Cryogenian and latest Ediacaran-Cambrian in age, separated by the onset of the Pan-African Orogeny. This tectonically controlled, regional geodynamic change exerted a primary control on the pattern and timing of benthic ecosystem replacements. Benthic communities include microbial consortia, archaeocyathan-thromboid reefal complexes, chancelloriid-echinoderm-sponge meadows, and deeper offshore echinoderm-dominated communities. Microbial consortia appeared in deeper parts of the Tonian (?) - early Cryogenian fluvio-deltaic progradational rift sequences, lacustrine environments of the Ediacaran Volcanic Atlasic Chain (Ouarzazate Supergroup) and the Ediacaran-Cambrian boundary interval, characterized by the peritidal-dominated Tifnout Member (Adoudou Formation). They persisted and were largely significant until Cambrian Age 3, as previous restricted marine conditions precluded the immigration of shelly metazoans into the relatively shallow epeiric parts of the Cambrian Atlas Rift. Successive Cambrian benthic communities were replaced as a result of distinct hydrodynamic and substrate conditions, which allow identification of biotic (e.g., antagonistic relationships between microbial consortia and echinoderms, and taphonomic feedback patterns in chancelloriid-echinoderm-sponge meadows) and abiotic (e.g., rifting

  10. ATLAS BigPanDA Monitoring and Its Evolution

    CERN Document Server

    Wenaus, Torre; The ATLAS collaboration; Korchuganova, Tatiana

    2016-01-01

    BigPanDA is the latest generation of the monitoring system for the Production and Distributed Analysis (PanDA) system. The BigPanDA monitor is a core component of PanDA and also serves the monitoring needs of the new ATLAS production system ProdSys2. BigPanDA has been developed to serve the growing computation needs of the ATLAS experiment and the wider applications of PanDA beyond ATLAS. Through a system-wide job database, the BigPanDA monitor provides a comprehensive and coherent view of the tasks and jobs executed by the system, from high-level summaries to detailed drill-down job diagnostics. The system has been in production and in continuous development since mid-2014, today effectively managing more than 2 million jobs per day distributed over 150 computing centers worldwide. BigPanDA also delivers web-based analytics and system state views to groups of users including distributed computing systems operators, shifters, physicist end-users, computing managers and accounting services. Provi...

  11. Glance Information System for ATLAS Management

    CERN Document Server

    De Oliveira Fernandes Moraes, L; The ATLAS collaboration; Ramos De Azevedo Evora, LH; Karam, K; Fink Grael, F; Pommes, K; Nessi, M; Cirilli, M

    2011-01-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers and computer scientists, plus 700 students participate. The management of this teamwork involves several aspects such as institute contributions, employment records, members' appointments, authors' lists, preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible only to a limited group of people, and the system used was not designed to handle new requirements easily. Moreover, developers had to face problems such as differing terminology, diverse data modeling, heterogeneous databases and unalike user needs. Besides that, maintenance has to be an easy task considering the experiment's long lifetime and professional turnover. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the dat...

  12. An Oracle-based event index for ATLAS

    Science.gov (United States)

    Gallas, E. J.; Dimitrov, G.; Vasileva, P.; Baranowski, Z.; Canali, L.; Dumitru, A.; Formica, A.; ATLAS Collaboration

    2017-10-01

    The ATLAS EventIndex system has amassed a set of key quantities for a large number of ATLAS events into a Hadoop-based infrastructure for the purpose of providing the experiment with a number of event-wise services. Collecting this data in one place provides the opportunity to investigate various storage formats and technologies, to assess which best serve the various use cases, and to consider what other benefits alternative storage systems provide. In this presentation we describe how the data are imported into an Oracle RDBMS (relational database management system), the services we have built based on this architecture, and our experience with it. We have indexed about 26 billion real data events thus far and have designed the system to accommodate future data, which is expected at rates of 5 to 20 billion events per year. We have found this system offers outstanding performance for some fundamental use cases. In addition, profiting from the co-location of this data with other complementary metadata in ATLAS, the system has been easily extended to perform essential assessments of data integrity and completeness and to identify event duplication, including at what step in processing the duplication occurred.
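
    The duplicate-identification step can be pictured as a simple group-by over the event-identifying key; a sketch with invented field names (not the actual EventIndex schema):

    ```python
    # Group indexed events by (run, event, processing step) and flag any key
    # that appears more than once.
    from collections import defaultdict

    records = [
        {"run": 200863, "event": 101, "step": "RAW"},
        {"run": 200863, "event": 101, "step": "AOD"},
        {"run": 200863, "event": 101, "step": "AOD"},   # duplicate entry
        {"run": 200863, "event": 102, "step": "AOD"},
    ]

    seen = defaultdict(list)
    for rec in records:
        seen[(rec["run"], rec["event"], rec["step"])].append(rec)

    for (run, event, step), recs in seen.items():
        if len(recs) > 1:
            print(f"duplication at step {step}: run={run} event={event}")
    ```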

  13. Time-critical database condition data handling in the CMS experiment during the first data taking period

    CERN Document Server

    Di Guida, Salvatore

    2011-01-01

    Automatic, synchronous and, of course, reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. In this complex infrastructure, monitoring and fast detection of errors is a very challenging task. Recovering the system and putting it in a safe state requires spotting a faulty situation within strict time constraints. We describe here the system put in place in the CMS experiment to automate the processes that centrally populate the Condition Databases and make condition data promptly available both online for the high-level trigger and offline for reconstruction. The data are automatically collected using centralized jobs or are "dropped" by the users into dedicated services (offline and online drop-boxes), which synchronize them and take care of writing them into the online database. They are then automatically streamed to the offline database, and thus are immediately acce...

  14. ATLAS Muon Drift Tube Electronics

    CERN Document Server

    Arai, Y; Beretta, M; Boterenbrood, H; Brandenburg, G W; Ceradini, F; Chapman, J W; Dai, T; Ferretti, C; Fries, T; Gregory, J; Guimarães da Costa, J; Harder, S; Hazen, E; Huth, J; Jansweijer, P P M; Kirsch, L E; König, A C; Lanza, A; Mikenberg, G; Oliver, J; Posch, C; Richter, R; Riegler, W; Spiriti, E; Taylor, F E; Vermeulen, J; Wadsworth, B; Wijnen, T A M

    2008-01-01

    This paper describes the electronics used for the ATLAS monitored drift tube (MDT) chambers. These chambers are the main component of the precision tracking system in the ATLAS muon spectrometer. The MDT detector system consists of 1,150 chambers containing a total of 354,000 drift tubes. It is capable of measuring the sagitta of muon tracks to an accuracy of 60 microns, which corresponds to a momentum accuracy of about 10% at pT = 1 TeV. The design and performance of the MDT readout electronics as well as the electronics for controlling, monitoring and powering the detector will be discussed. These electronics have been extensively tested under simulated running conditions and have undergone radiation testing certifying them for more than 10 years of LHC operation. They are now installed on the ATLAS detector and are operating during cosmic ray commissioning runs.
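
    The quoted relation between sagitta accuracy and momentum accuracy follows from s = 0.3 B L^2 / (8 pT), with dp/p roughly equal to the relative sagitta error. A back-of-envelope check (ours) with an assumed field and lever arm, not values taken from the paper:

    ```python
    # Sagitta of a charged track: s[m] = 0.3 * B[T] * L[m]**2 / (8 * pT[GeV]).
    B = 0.5      # average toroid field in tesla (assumed)
    L = 5.0      # lever arm in metres (assumed)
    pT = 1000.0  # transverse momentum in GeV

    sagitta = 0.3 * B * L**2 / (8.0 * pT)   # metres
    sigma_s = 60e-6                          # 60 micron sagitta accuracy

    print(f"sagitta at 1 TeV: {sagitta * 1e3:.2f} mm")
    print(f"dp/p ~ sigma_s / s = {sigma_s / sagitta:.0%}")  # ~10% order
    ```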

  15. WE-E-213CD-02: Gaussian Weighted Multi-Atlas Based Segmentation for Head and Neck Radiotherapy Planning.

    Science.gov (United States)

    Peroni, M; Sharp, G C; Golland, P; Baroni, G

    2012-06-01

    To develop a multi-atlas segmentation strategy for IMRT head and neck therapy planning. The method was tested on thirty-one head and neck simulation CTs, without demographic or pathology pre-clustering. We compare Fixed Number (FN) and Thresholding (TH) selection (based on normalized mutual information ranking) of the atlases to be included for current patient segmentation. The next step is a pairwise demons Deformable Registration (DR) onto the current patient CT. DR was extended to automatically compensate for differing patient fields of view. Propagated labels are combined according to a Gaussian Weighted (GW) fusion rule, adapted to poor soft-tissue contrast. Agreement with manual segmentation was quantified in terms of the Dice Similarity Coefficient (DSC). Selection methods, the number of atlases used, as well as GW, average and majority-voting fusion were discriminated by means of the Friedman test (alpha = 5%). Experimental tuning of the algorithm parameters was performed on five patients, deriving an optimal configuration for each structure. DSC reduction was not significant when ten or more atlases were selected, whereas the DSC for single most-similar atlas selection was 10% lower in median. DSCs for the FN selection rule were significantly higher for most structures. Tubular structures may benefit from computing an average contour rather than looking at the singular voxel contribution, whereas the best-performing strategy for all other structures was GW. When half the database is selected, final median DSCs were 0.86, 0.80, 0.51, 0.81, 0.69 and 0.79 for the mandible, spine, optical nerves, eyes, parotids and brainstem, respectively. We developed an efficient algorithm for multi-atlas-based segmentation of planning CT volumes, based on DR and GW. FN selection of database atlases is foreseen to increase computational efficiency. The absence of clinical pre-clustering and of a specific imaging protocol for database subjects makes the results closer to real clinical application. "Progetto Roberto Rocca" funded by
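
    A minimal sketch of what a Gaussian-weighted fusion rule can look like (our simplification, not the authors' implementation): each registered atlas votes for a voxel's label with a weight that decays as a Gaussian of the local intensity difference between atlas and target.

    ```python
    # Gaussian-weighted label fusion over a stack of registered atlases.
    import numpy as np

    def gaussian_weighted_fusion(target, atlas_images, atlas_labels, sigma=30.0):
        """target: (H,W) image; atlas_images/atlas_labels: (N,H,W) arrays of
        registered atlas intensities and binary structure labels."""
        diffs = atlas_images - target[None, ...]
        weights = np.exp(-(diffs ** 2) / (2.0 * sigma ** 2))   # (N,H,W)
        vote = (weights * atlas_labels).sum(axis=0) / weights.sum(axis=0)
        return vote > 0.5   # fused binary segmentation

    rng = np.random.default_rng(0)
    target = rng.normal(100.0, 10.0, (32, 32))
    atlases = target[None] + rng.normal(0.0, 5.0, (5, 32, 32))
    labels = np.zeros((5, 32, 32))
    labels[:, 8:24, 8:24] = 1.0
    fused = gaussian_weighted_fusion(target, atlases, labels)
    print(fused.sum(), "voxels labelled")
    ```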

  16. First experience in operating the population of the condition databases for the CMS experiment

    International Nuclear Information System (INIS)

    De Gruttola, Michele; Paolucci, Pierluigi; Di Guida, Salvatore; Glege, Frank; Innocente, Vincenzo; Schlatter, Dieter; Futyan, David; Govi, Giacomo; Pierro, Antonio

    2010-01-01

    Reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. We describe here the system put in place in the CMS experiment to populate the database and make condition data promptly available both online for the high-level trigger and offline for reconstruction. The system, designed for high flexibility to cope with very different data sources, uses POOL-ORA technology in order to store data in an object format that best matches the object-oriented paradigm of the C++ programming language used in the CMS offline software. In order to ensure consistency among the various subdetectors, a dedicated package, PopCon (Populator of Condition Objects), is used to store data online. The data are then automatically streamed to the offline database, and hence are immediately accessible offline worldwide. This mechanism was used intensively during 2008 in the test runs with cosmic rays. The experience of these first months of operation will be discussed in detail.

  17. A Universal Genome Array and Transcriptome Atlas for Brachypodium Distachyon

    Energy Technology Data Exchange (ETDEWEB)

    Mockler, Todd [Oregon State Univ., Corvallis, OR (United States)

    2017-04-17

    Brachypodium distachyon is the premier experimental model grass platform and is related to candidate feedstock crops for bioethanol production. Based on the DOE-JGI Brachypodium Bd21 genome sequence and annotation, we designed a whole-genome DNA microarray platform. The quality of this array platform is unprecedented due to the exceptional quality of the Brachypodium genome assembly and annotation and the stringent probe selection criteria employed in the design. We worked with members of the international community and the bioinformatics/design team at Affymetrix at all stages in the development of the array. We used the Brachypodium arrays to interrogate the transcriptomes of plants grown under a variety of environmental conditions, including diurnal and circadian light/temperature conditions. We examined the transcriptional responses of Brachypodium seedlings subjected to various abiotic stresses, including heat, cold, salt, and high-intensity light. We generated a gene expression atlas representing various organs and developmental stages. The results of these efforts, including all microarray datasets, are published and available in online public databases.

  18. The SysteMHC Atlas project

    DEFF Research Database (Denmark)

    Shao, Wenguang; Pedrioli, Patrick G. A.; Wolski, Witold

    2018-01-01

    consisting of consensus spectra calculated from repeat measurements of the same peptide sequence, and links to other proteomics and immunology databases. The SysteMHC Atlas project was created and will be further expanded using a uniform and open computational pipeline that controls the quality of peptide......-scale generation of immunopeptidomic datasets and recent developments in MS-based peptide analysis technologies now support the generation of the required data. Importantly, the availability of diverse immunopeptidomic datasets has resulted in an increasing need to standardize, store and exchange this type of data...

  19. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    International Nuclear Information System (INIS)

    Valassi, A; Kalkhof, A; Bartoldus, R; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN by the LHC experiments to access the data they store in relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle-tier 'CORAL server', deployed close to the database, and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the clients. A first implementation of the two new components, released in summer 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status and its usage in ATLAS.
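
    The proxy tier's caching and multiplexing can be pictured as a small read-through cache: identical queries from many clients hit the backend once and are then served from the cache. A conceptual sketch (ours, with an invented backend type, not the CORAL implementation):

    ```python
    # Read-through caching proxy: one backend fetch shared by many clients.
    from typing import Callable, Dict

    class CachingProxy:
        def __init__(self, backend: Callable[[str], str]):
            self._backend = backend          # e.g. forwards to the real server
            self._cache: Dict[str, str] = {}
            self.hits = self.misses = 0

        def query(self, q: str) -> str:
            if q in self._cache:
                self.hits += 1
                return self._cache[q]
            self.misses += 1
            result = self._backend(q)        # single fetch, then multiplexed
            self._cache[q] = result
            return result

    proxy = CachingProxy(backend=lambda q: f"payload-for({q})")
    for _ in range(1000):                    # 1000 clients, same conditions query
        proxy.query("SELECT ... FROM conditions WHERE iov=42")
    print(proxy.hits, proxy.misses)          # -> 999 1
    ```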

  20. Single Event Upset Studies Using the ATLAS SCT

    CERN Document Server

    Dafinca, A; The ATLAS collaboration; Weidberg, A R

    2014-01-01

    Single Event Upsets (SEU) are expected to occur during high-luminosity running of the ATLAS SemiConductor Tracker (SCT). The SEU cross sections were measured in pion beams with momenta in the range 200 to 465 MeV/c and in proton test beams at 24 GeV/c, but the extrapolation to LHC conditions is non-trivial because of the range of particle types and momenta. The SEUs studied occur in the p-i-n photodiode and the registers in the ABCD chip. Comparisons between predicted SEU rates and those measured from ATLAS data are presented. The implications for ATLAS operation are discussed.
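
    Extrapolating a measured cross section to an upset rate is, to first order, rate = cross section x flux x number of bits; an order-of-magnitude sketch with assumed numbers (none taken from the paper):

    ```python
    # Rough SEU rate estimate from a test-beam cross section.
    sigma_seu = 1e-14     # cm^2 per bit, assumed cross section
    flux = 1e7            # particles / cm^2 / s at the detector, assumed
    n_bits = 6 * 128      # assumed register bits per chip, for illustration

    rate_per_chip = sigma_seu * flux * n_bits   # upsets per second per chip
    print(f"~{rate_per_chip:.2e} SEU/s per chip, "
          f"~{rate_per_chip * 3600:.2f} per hour")
    ```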

  1. Bridging neuroanatomy, neuroradiology and neurology: three-dimensional interactive atlas of neurological disorders.

    Science.gov (United States)

    Nowinski, W L; Chua, B C

    2013-06-01

    Understanding brain pathology along with the underlying neuroanatomy and the resulting neurological deficits is of vital importance in medical education and clinical practice. To facilitate and expedite this understanding, we created a three-dimensional (3D) interactive atlas of neurological disorders providing the correspondence between a brain lesion and the resulting disorder(s). The atlas contains a 3D highly parcellated atlas of normal neuroanatomy along with a brain pathology database. Normal neuroanatomy is divided into about 2,300 components, including the cerebrum, cerebellum, brainstem, spinal cord, arteries, veins, dural sinuses, tracts, cranial nerves (CN), white matter, deep gray nuclei, ventricles, visual system, muscles, glands and cervical vertebrae (C1-C5). The brain pathology database contains 144 focal and distributed synthesized lesions (70 vascular, 36 CN-related, and 38 regional anatomy-related), each lesion labeled with the resulting disorder and associated signs, symptoms, and/or syndromes compiled from materials reported in the literature. The initial view of each lesion was preset in terms of its location and size, surrounding surface and sectional (magnetic resonance) neuroanatomy, and labeling of lesion and neuroanatomy. In addition, a glossary of neurological disorders was compiled and for each disorder materials from textbooks were included to provide neurological description. This atlas of neurological disorders is potentially useful to a wide variety of users ranging from medical students, residents and nurses to general practitioners, neuroanatomists, neuroradiologists and neurologists, as it contains both normal (surface and sectional) brain anatomy and pathology correlated with neurological disorders presented in a visual and interactive way.

  2. A Conditions Data Management System for HEP Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Laycock, P. J. [CERN]; Dykstra, D. [Fermilab]; Formica, A. [Saclay]; Govi, G. [Fermilab]; Pfeiffer, A. [CERN]; Roe, S. [CERN]; Sipos, R. [Eotvos U.]

    2017-01-01

    The conditions data infrastructures of both ATLAS and CMS have to deal with the management of several terabytes of data. Distributed computing access to this data requires particular care and attention to manage request rates of up to several tens of kHz. Thanks to the large overlap in use cases and requirements, ATLAS and CMS have worked towards a common solution for conditions data management with the aim of using this design for data-taking in Run 3. In the meantime other experiments, including NA62, have expressed an interest in this cross-experiment initiative. For experiments with a smaller payload volume and complexity, there is particular interest in simplifying the payload storage. The conditions data management model is implemented in a small set of relational database tables. A prototype access toolkit consisting of an intermediate web server has been implemented, using standard technologies available in the Java community. Access is provided through a set of REST services for which the API has been described in a generic way using standard OpenAPI specifications, implemented in Swagger. Such a solution allows the automatic generation of client code and server stubs, and further allows the backend technology to be changed transparently. An important advantage of using a REST API for conditions access is the possibility of caching identical URLs, addressing one of the biggest challenges that large distributed computing solutions impose on conditions data access, and avoiding direct DB access by means of standard web proxy solutions.
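
    The caching advantage comes from mapping each (tag, interval-of-validity) pair to a stable URL, so a standard web proxy between the jobs and the server can serve repeated requests from cache. A hedged sketch of such a client (endpoint and JSON layout invented for illustration, not the project's actual API):

    ```python
    # Conditions access through cacheable REST URLs.
    import requests

    BASE = "https://conditions.example.org/api"   # hypothetical server

    def get_payload(tag: str, iov: int) -> dict:
        # Identical (tag, iov) pairs map to identical URLs, so a proxy can
        # answer repeats without touching the database.
        url = f"{BASE}/payloads/{tag}/{iov}"
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return resp.json()

    # payload = get_payload("pixel-alignment-v3", 305380)  # example call
    ```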

  3. Methodology of high-resolution photography for mural condition database

    Science.gov (United States)

    Higuchi, R.; Suzuki, T.; Shibata, M.; Taniguchi, Y.

    2015-08-01

    Digital documentation is one of the most useful techniques for recording the condition of cultural heritage. Recently, high-resolution images have become increasingly useful because it is possible to show general views of mural paintings and also detailed mural conditions in a single image. As mural paintings are damaged by environmental stresses, it is necessary to record the details of painting condition on high-resolution base maps. Unfortunately, the cost of high-resolution photography and the difficulty of operating its instruments and software have commonly been an impediment for researchers and conservators. However, the recent development of graphic software makes its operation simpler and less expensive. In this paper, we suggest a new approach to making digital heritage inventories without special instruments, based on our recent research project in the Üzümlü church in Cappadocia, Turkey. This method enables us to achieve a high-resolution image database at low cost, in a short time, and with limited human resources.

  4. ATLAS Muon Drift Tube Electronics

    Energy Technology Data Exchange (ETDEWEB)

    Arai, Y [KEK, High Energy Accelerator Research Organisation, Tsukuba (Japan); Ball, B; Chapman, J W; Dai, T; Ferretti, C; Gregory, J [University of Michigan, Department of Physics, Ann Arbor, MI (United States); Beretta, M [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Boterenbrood, H; Jansweijer, P P M [Nikhef National Institute for Subatomic Physics, Amsterdam (Netherlands); Brandenburg, G W; Fries, T; Costa, J Guimaraes da; Harder, S; Huth, J [Harvard University, Laboratory for Particle Physics and Cosmology, Cambridge, MA (United States); Ceradini, F [INFN Roma Tre and Universita Roma Tre, Dipartimento di Fisica, Roma (Italy); Hazen, E [Boston University, Physics Department, Boston, MA (United States); Kirsch, L E [Brandeis University, Department of Physics, Waltham, MA (United States); Koenig, A C [Radboud University Nijmegen/Nikhef, Dept. of Exp. High Energy Physics, Nijmegen (Netherlands); Lanza, A [INFN Pavia, Pavia (Italy); Mikenberg, G [Weizmann Institute of Science, Department of Particle Physics, Rehovot (Israel)], E-mail: brandenburg@physics.harvard.edu (and others)

    2008-09-15

    This paper describes the electronics used for the ATLAS monitored drift tube (MDT) chambers. These chambers are the main component of the precision tracking system in the ATLAS muon spectrometer. The MDT detector system consists of 1,150 chambers containing a total of 354,000 drift tubes. It is capable of measuring the sagitta of muon tracks to an accuracy of 60 μm, which corresponds to a momentum accuracy of about 10% at pT = 1 TeV. The design and performance of the MDT readout electronics as well as the electronics for controlling, monitoring and powering the detector will be discussed. These electronics have been extensively tested under simulated running conditions and have undergone radiation testing certifying them for more than 10 years of LHC operation. They are now installed on the ATLAS detector and are operating during cosmic ray commissioning runs.

  5. Three-dimensional interactive atlas of cranial nerve-related disorders.

    Science.gov (United States)

    Nowinski, W L; Chua, B C

    2013-06-01

    Anatomical knowledge of the cranial nerves (CN) is fundamental in education, research and clinical practice. Moreover, understanding CN-related pathology, with the underlying neuroanatomy and the resulting neurological deficits, is of vital importance. To facilitate understanding of CN anatomy and pathology, we created an atlas of CN-related disorders, a three-dimensional (3D) interactive tool correlating CN pathology with the underlying surface and sectional neuroanatomy as well as the resulting neurological deficits. A computer platform was developed with: 1) an anatomy browser along with the normal brain atlas (built earlier); 2) a simulator of CN lesions; 3) tools to label CN-related pathology; and 4) a CN pathology database with lesions and disorders, and the resulting signs, symptoms and/or syndromes. The normal neuroanatomy comprises about 2,300 3D components subdivided into modules. The cranial nerves contain more than 600 components: all 12 pairs of cranial nerves (CN I - CN XII) and the brainstem CN nuclei. The CN pathology database was populated with 36 lesions compiled from clinical textbooks. The initial view of each disorder was preset in terms of lesion location and size, surrounding surface and sectional neuroanatomy, and disorder and neuroanatomy labeling. Moreover, path selection from a CN nucleus to a targeted organ further enhances pathology-anatomy relationships. This atlas of CN-related disorders is potentially useful to a wide variety of users ranging from medical students and residents to general practitioners, neuroradiologists and neurologists, as it contains both normal brain anatomy and CN-related pathology correlated with neurological disorders, presented in a visual and interactive way.

  6. Generating patient specific pseudo-CT of the head from MR using atlas-based regression

    International Nuclear Information System (INIS)

    Sjölund, J; Forsberg, D; Andersson, M; Knutsson, H

    2015-01-01

    Radiotherapy planning and attenuation correction of PET images require simulation of radiation transport. The necessary physical properties are typically derived from computed tomography (CT) images, but in some cases, including stereotactic neurosurgery and combined PET/MR imaging, only magnetic resonance (MR) images are available. With these applications in mind, we describe how a realistic, patient-specific pseudo-CT of the head can be derived from anatomical MR images. We refer to the method as atlas-based regression, because of its similarity to atlas-based segmentation. Given a target MR and an atlas database comprising MR and CT pairs, atlas-based regression works by registering each atlas MR to the target MR, applying the resulting displacement fields to the corresponding atlas CTs and, finally, fusing the deformed atlas CTs into a single pseudo-CT. We use a deformable registration algorithm known as the Morphon and augment it with a certainty mask that allows a tailoring of the influence certain regions are allowed to have on the registration. Moreover, we propose a novel method of fusion, wherein the collection of deformed CTs is iteratively registered to their joint mean, and find that the resulting mean CT becomes more similar to the target CT. However, the voxelwise median provided even better results, at least as good as earlier work that required special MR imaging techniques. This makes atlas-based regression a good candidate for clinical use. (paper)
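
    The fusion step the authors favour is a voxelwise median over the stack of deformed atlas CTs; a minimal numpy sketch (ours):

    ```python
    # Voxelwise-median fusion of deformed atlas CTs into a pseudo-CT.
    import numpy as np

    def fuse_pseudo_ct(deformed_cts: np.ndarray) -> np.ndarray:
        """deformed_cts: (N, D, H, W) stack of atlas CTs already warped to the
        target MR; returns the (D, H, W) voxelwise-median pseudo-CT."""
        return np.median(deformed_cts, axis=0)

    # Toy stack of 12 "deformed CTs" on an 8x16x16 grid.
    stack = np.random.default_rng(1).normal(40.0, 15.0, (12, 8, 16, 16))
    pseudo_ct = fuse_pseudo_ct(stack)
    print(pseudo_ct.shape)
    ```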

  7. Poster - 32: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Mallawi, Abrar; Farrell, Tom; Diamond, Kevin-Ross; Wierzbicki, Marcin [McMaster University / National Guard Health Affairs, Radiation Oncology Department, Riyadh (Saudi Arabia); McMaster University / Juravinski Cancer Centre; McMaster University / Juravinski Cancer Centre; McMaster University / Juravinski Cancer Centre]

    2016-08-15

    Atlas-based segmentation has recently been evaluated for use in prostate radiotherapy. In a typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on final segmentation accuracy. Several anatomical parameters were measured to indicate the overall prostate and body shape; all of these measurements were obtained on CT images. A brute-force procedure was first performed for a training dataset of 20 patients, using image registration to pair subjects with similar contours; each subject served as a target image to which all remaining 19 images were affinely registered. The overlap between the prostate and femoral heads was quantified for each pair using the Dice Similarity Coefficient (DSC). Finally, an atlas selection procedure was designed, relying on the computation of a similarity score defined as a weighted sum of differences between the target and atlas subject anatomical measurements. The algorithm's ability to predict the most similar atlas was excellent, achieving mean DSCs of 0.78 ± 0.07 and 0.90 ± 0.02 for the CTV and either femoral head. The proposed atlas selection yielded 0.72 ± 0.11 and 0.87 ± 0.03 for the CTV and either femoral head. The DSCs obtained with the proposed selection method were slightly lower than the maximum established using brute force, but this does not include potential improvements expected with deformable registration. The proposed atlas selection method provides reasonable segmentation accuracy.
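
    The selection rule is a weighted sum of anatomical-measurement differences, with the lowest score marking the most similar atlas; an illustrative sketch (measurement names, weights and values invented, not taken from the poster):

    ```python
    # Weighted-sum similarity score for atlas selection.
    import numpy as np

    weights = np.array([0.5, 0.3, 0.2])   # per-measurement weights (assumed)

    target = np.array([42.0, 350.0, 240.0])   # target patient measurements, mm
    atlas_db = {
        "atlas01": np.array([40.0, 340.0, 250.0]),
        "atlas02": np.array([55.0, 390.0, 300.0]),
        "atlas03": np.array([43.0, 355.0, 238.0]),
    }

    # Lower score = more anatomically similar to the target.
    scores = {name: float(np.dot(weights, np.abs(m - target)))
              for name, m in atlas_db.items()}
    best_first = sorted(scores, key=scores.get)
    print("selection order:", best_first)
    ```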

  8. Poster - 32: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy

    International Nuclear Information System (INIS)

    Mallawi, Abrar; Farrell, Tom; Diamond, Kevin-Ross; Wierzbicki, Marcin

    2016-01-01

    Atlas-based segmentation has recently been evaluated for use in prostate radiotherapy. In a typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on final segmentation accuracy. Several anatomical parameters were measured to indicate the overall prostate and body shape; all of these measurements were obtained on CT images. A brute-force procedure was first performed for a training dataset of 20 patients, using image registration to pair subjects with similar contours; each subject served as a target image to which all remaining 19 images were affinely registered. The overlap between the prostate and femoral heads was quantified for each pair using the Dice Similarity Coefficient (DSC). Finally, an atlas selection procedure was designed, relying on the computation of a similarity score defined as a weighted sum of differences between the target and atlas subject anatomical measurements. The algorithm's ability to predict the most similar atlas was excellent, achieving mean DSCs of 0.78 ± 0.07 and 0.90 ± 0.02 for the CTV and either femoral head. The proposed atlas selection yielded 0.72 ± 0.11 and 0.87 ± 0.03 for the CTV and either femoral head. The DSCs obtained with the proposed selection method were slightly lower than the maximum established using brute force, but this does not include potential improvements expected with deformable registration. The proposed atlas selection method provides reasonable segmentation accuracy.

  9. Vegetation relevés and soil measurements in the Netherlands: the Ecological Conditions Database (EC)

    NARCIS (Netherlands)

    Wamelink, G.W.W.; Adrichem, van M.H.C.; Dobben, van H.F.; Frissel, J.Y.; Held, den M.E.; Joosten, V.; Malinowska, A.H.; Slim, P.A.; Wegman, R.M.A.

    2012-01-01

    Since its establishment around 1990, the Ecological Conditions Database (EC; GIVD ID EU-00-006) has been accumulating vegetation relevés from the Netherlands, each accompanied by at least one abiotic soil measurement (e.g. pH or nutrient availability). On 1-1-2010, the database contained 8,229 relevés.

  10. Advances in ATLAS@Home towards a major ATLAS computing resource

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2018-01-01

    The volunteer computing project ATLAS@Home has been providing a stable computing resource for the ATLAS experiment since 2013. It has recently undergone some significant developments and as a result has become one of the largest resources contributing to ATLAS computing, expanding its scope beyond traditional volunteers and into the exploitation of idle computing power in ATLAS data centres. Removing the need for virtualization on Linux and instead using container technology has made the entry barrier for data centre participation significantly lower, and in this paper we describe the implementation and results of this change. We also present other recent changes and improvements in the project. In early 2017 the ATLAS@Home project was merged into a combined LHC@Home platform, providing a unified gateway to all CERN-related volunteer computing projects. The ATLAS Event Service shifts data processing from file-level to event-level, and we describe how ATLAS@Home was incorporated into this new paradigm. The finishing...

  11. Web System for Data Quality Assessment of Tile Calorimeter During the ATLAS Operation

    International Nuclear Information System (INIS)

    Maidantchik, C; Ferreira, F; Grael, F; Sivolella, A; Balabram, L

    2011-01-01

    TileCal, the barrel hadronic calorimeter of the ATLAS experiment, comprises about 10,000 electronic channels. Supervision of the detector behavior is very important in order to ensure proper operation. Collaborators perform analyses over reconstructed data of calibration runs to give detailed assessments of the equipment status. During the commissioning period, our group developed seven web systems to support the data quality (DQ) assessment task. Each system covers a part of the process by providing information on the latest runs, displaying the DQ status from the monitoring framework, giving details about power supply operation, presenting the generated plots and storing the validation outcomes, assisting in writing logbook entries, creating and submitting the bad channels list to the conditions database, and publishing the equipment performance history. ATLAS operation increases the amount of data that is retrieved, processed and stored by the web systems. In order to accomplish the new requirements, an optimized data model was designed to reduce the number of needed queries. The web systems were reassembled into a single system in order to provide an integrated view of the validation process. The server load was minimized by using asynchronous requests from the browser.

  12. Task management in the new ATLAS production system

    International Nuclear Information System (INIS)

    De, K; Golubkov, D; Klimentov, A; Potekhin, M; Vaniachine, A

    2014-01-01

    This document describes the design of the new Production System of the ATLAS experiment at the LHC [1]. The Production System is the top-level workflow manager which translates physicists' needs for production-level processing and analysis into actual workflows executed across over a hundred Grid sites used globally by ATLAS. As the production workload has increased in volume and complexity in recent years (the ATLAS production task count is above one million, with each task containing hundreds or thousands of jobs), there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. In the new design, the main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, DEFT manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. The JEDI component then dynamically translates the task definitions from DEFT into actual workload jobs executed in the PanDA Workload Management System [2]. We present the requirements, design parameters, basics of the object model and concrete solutions utilized in building the new Production System and its components.

  13. Persistent Data Layout and Infrastructure for Efficient Selective Retrieval of Event Data in ATLAS

    CERN Document Server

    INSPIRE-00084279; Malon, David

    2011-01-01

    The ATLAS detector at CERN has completed its first full year of recording collisions at 7 TeV, resulting in billions of events and petabytes of data. At these scales, physicists must have the capability to read only the data of interest to their analyses, with the importance of efficient selective access increasing as data taking continues. ATLAS has developed a sophisticated event-level metadata infrastructure and supporting I/O framework allowing event selections by explicit specification, by back navigation, and by selection queries to a TAG database via an integrated web interface. These systems and their performance have been reported on elsewhere. The ultimate success of such a system, however, depends significantly upon the efficiency of selective event retrieval. Supporting such retrieval can be challenging, as ATLAS stores its event data in column-wise orientation using ROOT trees for a number of reasons, including compression considerations, histogramming use cases, and more. For 2011 data, ATLAS wi...

  14. Single Event Upset Studies Using the ATLAS SCT

    CERN Document Server

    Weidberg, A R; The ATLAS collaboration

    2013-01-01

    Single Event Upsets (SEU) are expected to occur during high-luminosity running of the ATLAS SemiConductor Tracker (SCT). The SEU cross sections were measured in pion beams with momenta in the range 200 to 465 MeV/c and in proton test beams at 24 GeV/c, but the extrapolation to LHC conditions is non-trivial because of the range of particle types and momenta. The SEUs studied occur in the p-i-n photodiode and the registers in the ABCD chip. Comparisons between predicted SEU rates and those measured from ATLAS data are presented. The implications for ATLAS operation are discussed.

  15. Alignment data streams for the ATLAS inner detector

    International Nuclear Information System (INIS)

    Pinto, B; Amorim, A; Pereira, P; Elsing, M; Hawkings, R; Schieck, J; Garcia, S; Schaffer, A; Ma, H; Anjos, A

    2008-01-01

    The ATLAS experiment uses a complex trigger strategy to be able to reduce the Event Filter output rate down to a level that allows the storage and processing of these data. These concepts are described in the ATLAS Computing Model, which embraces the Grid paradigm. The output coming from the Event Filter consists of four main streams: a physics stream, an express stream, a calibration stream, and a diagnostic stream. The calibration stream will be transferred to the Tier-0 facilities, which will provide the prompt reconstruction of this stream with a minimum latency of 8 hours, producing calibration constants of sufficient quality to allow a first-pass processing. The Inner Detector community is developing and testing an independent common calibration stream selected at the Event Filter after track reconstruction. It is composed of raw data, in byte-stream format, contained in Readout Buffers (ROBs) with hit information of the selected tracks, and it will be used to derive and update a set of calibration and alignment constants. This option was selected because it makes use of the Byte Stream Converter infrastructure and possibly gives better bandwidth usage and storage optimization. Processing is done using specialized algorithms running in the Athena framework on dedicated Tier-0 resources, and the alignment constants will be stored and distributed using the COOL conditions database infrastructure. This work addresses in particular the alignment requirements, the needs for track and hit selection, and the performance issues.

  16. Alignment data stream for the ATLAS inner detector

    International Nuclear Information System (INIS)

    Pinto, B

    2010-01-01

    The ATLAS experiment uses a complex trigger strategy to achieve the necessary Event Filter output rate, making it possible to optimize the storage and processing needs of these data. These needs are described in the ATLAS Computing Model, which embraces Grid concepts. The output coming from the Event Filter will consist of three main streams: a primary stream, the express stream and the calibration stream. The calibration stream will be transferred to the Tier-0 facilities, which will allow the prompt reconstruction of this stream with an admissible latency of 8 hours, producing calibration constants of sufficient quality to permit a first-pass processing. An independent calibration stream is developed and tested, which selects tracks at the level-2 trigger (LVL2) after the reconstruction. The stream is composed of raw data, in byte-stream format, and contains only information from the relevant parts of the detector, in particular the hit information of the selected tracks. This leads to significantly improved bandwidth usage and storage capability. The stream will be used to derive and update the calibration and alignment constants, if necessary every 24 h. Processing is done using specialized algorithms running in the Athena framework on dedicated Tier-0 resources, and the alignment constants will be stored and distributed using the COOL conditions database infrastructure. The work addresses in particular the alignment requirements, the needs for track and hit selection, and timing and bandwidth issues.

  17. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

    Directory of Open Access Journals (Sweden)

    Wang Jelai

    2006-02-01

    Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed the Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from the Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.
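
    The kind of calculation such an atlas automates is classical power analysis; a hedged sketch using the normal approximation for a two-sample test on one gene (a textbook formula, not the PowerAtlas algorithm itself):

    ```python
    # Normal-approximation power of a two-sample test at significance alpha,
    # given a standardized effect size (e.g. estimated from pilot data).
    from scipy.stats import norm

    def two_sample_power(effect_size: float, n_per_group: int,
                         alpha: float = 0.05) -> float:
        z_alpha = norm.ppf(1.0 - alpha / 2.0)
        z_beta = effect_size * (n_per_group / 2.0) ** 0.5 - z_alpha
        return float(norm.cdf(z_beta))

    # Power as a function of replicate count, for an effect size of 1 SD.
    for n in (3, 5, 10, 20):
        print(n, f"{two_sample_power(effect_size=1.0, n_per_group=n):.2f}")
    ```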

  18. Analysis facility infrastructure (Tier-3) for ATLAS experiment

    International Nuclear Information System (INIS)

    Gonzalez de la Hoz, S.; March, L.; Ros, E.; Sanchez, J.; Amoros, G.; Fassi, F.; Fernandez, A.; Kaci, M.; Lamas, A.; Salt, J.

    2008-01-01

    In the ATLAS computing model the tiered hierarchy ranges from the Tier-0 (CERN) down to desktops or workstations (Tier-3). The definition of the roles of each tier has evolved, with the initial emphasis on defining the Tier-0 and Tier-1 roles. The various LHC (Large Hadron Collider) projects, including ATLAS, then extended the tiered hierarchy to include Tier-2s (regional centres) as part of their projects. Tier-3 centres, on the other hand, have been defined as whatever an institution can construct to support its physics goals using institutional and otherwise leveraged resources, and therefore have not been considered part of the official ATLAS computing resources. However, Tier-3 centres are going to exist and will have implications for how the computing model should support ATLAS physicists. Tier-3 users will want to access LHC data and simulations and will want to enable their resources to support their analysis and simulation work. This document defines how IFIC (Instituto de Fisica Corpuscular de Valencia), after discussion with the ATLAS Tier-3 task force, should interact with the ATLAS computing model, details the conditions under which Tier-3 centres can expect some level of support, and sets reasonable expectations for the scope and support of ATLAS Tier-3 sites. (orig.)

  19. The effect of morphometric atlas selection on multi-atlas-based automatic brachial plexus segmentation

    International Nuclear Information System (INIS)

    Van de Velde, Joris; Wouters, Johan; Vercauteren, Tom; De Gersem, Werner; Achten, Eric; De Neve, Wilfried; Van Hoof, Tom

    2015-01-01

    The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based brachial plexus (BP) autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in the shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases was taken. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent-sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, the average similarity indices of the morphometrically selected atlas group were significantly higher than those of the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy.
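
    The three overlap measures used in the study are simple functions of two binary masks; below is a minimal numpy sketch, with INI taken here as the fraction of the gold standard contained in the automatic segmentation. The mask shapes and values are illustrative only.

        import numpy as np

        def overlap_indices(auto, gold):
            """Similarity of an automatic and a gold-standard binary mask."""
            a, g = auto.astype(bool), gold.astype(bool)
            inter = np.logical_and(a, g).sum()
            dsc = 2.0 * inter / (a.sum() + g.sum())   # Dice similarity coefficient
            ji  = inter / np.logical_or(a, g).sum()   # Jaccard index
            ini = inter / g.sum()                     # inclusion index
            return dsc, ji, ini

        auto = np.zeros((4, 4), bool); auto[1:3, 1:4] = True
        gold = np.zeros((4, 4), bool); gold[1:3, 0:3] = True
        print(overlap_indices(auto, gold))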

  20. Advanced Technology Lifecycle Analysis System (ATLAS)

    Science.gov (United States)

    O'Neil, Daniel A.; Mankins, John C.

    2004-01-01

    Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline-oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts in system-oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst-specified parameters to the models and data among the models. An integrator workbook calls a history-based parametric cost model to determine the costs. Also, the integrator estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is
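
    A toy sketch of the integrator/accumulator roll-up described above: hypothetical element models carry mass and parametric cost parameters, and a campaign definition is reduced to launched-mass and cost totals. All numbers and names are invented for illustration.

        # Hypothetical element models: dry mass in tonnes, unit cost in $M.
        models = {"launcher": {"dry_mass": 85.0, "unit_cost": 1200.0},
                  "depot":    {"dry_mass": 20.0, "unit_cost":  450.0}}

        # A one-year campaign slice: (element, number of flights/units).
        campaign = [("launcher", 3), ("depot", 1)]

        mass = sum(models[m]["dry_mass"] * n for m, n in campaign)
        cost = sum(models[m]["unit_cost"] * n for m, n in campaign)
        print(f"launched mass {mass} t, campaign cost {cost} $M")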

  1. The ATLAS tracker strip detector for HL-LHC

    CERN Document Server

    Cormier, Kyle James Read; The ATLAS collaboration

    2016-01-01

    As part of the ATLAS upgrades for the High Luminosity LHC (HL-LHC), the current ATLAS Inner Detector (ID) will be replaced by a new Inner Tracker (ITk). The ITk will consist of two main components: semiconductor pixels at the innermost radii, and silicon strips covering larger radii out as far as the ATLAS solenoid magnet, including the volume currently occupied by the ATLAS Transition Radiation Tracker (TRT). The primary challenges faced by the ITk are the higher planned readout rate of ATLAS, the high density of charged particles in HL-LHC conditions for which tracks need to be resolved, and the corresponding high radiation doses that the detector and electronics will receive. The ITk strips community is currently working on designing and testing all aspects of the sensors, readout, mechanics, cooling and integration to meet these goals, and a Technical Design Report is being prepared. This talk is an overview of the strip detector component of the ITk, highlighting the current status and the road ahead.

  2. The ATLAS tracker strip detector for HL-LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00512833; The ATLAS collaboration

    2017-01-01

    As part of the ATLAS upgrades for the High Luminosity LHC (HL-LHC), the current ATLAS Inner Detector (ID) will be replaced by a new Inner Tracker (ITk). The ITk will consist of two main components: semiconductor pixels at the innermost radii, and silicon strips covering larger radii out as far as the ATLAS solenoid magnet, including the volume currently occupied by the ATLAS Transition Radiation Tracker (TRT). The primary challenges faced by the ITk are the higher planned readout rate of ATLAS, the high density of charged particles in HL-LHC conditions for which tracks need to be resolved, and the corresponding high radiation doses that the detector and electronics will receive. The ITk strips community is currently working on designing and testing all aspects of the sensors, readout, mechanics, cooling and integration to meet these goals, and a Technical Design Report is being prepared. This talk is an overview of the strip detector component of the ITk, highlighting the current status and the road ahead.

  3. ATLAS Level-1 Calorimeter Trigger: Status and Development

    CERN Document Server

    Bracinik, J; The ATLAS collaboration

    2013-01-01

    The ATLAS Level-1 Calorimeter Trigger seeds all the calorimeter-based triggers in the ATLAS experiment at the LHC. The inputs to the system are analogue signals of reduced granularity, formed by summing cells from both the ATLAS Liquid Argon and Tile calorimeters. Several stages of analogue and then digital processing, largely performed in FPGAs, refine these signals via configurable and flexible algorithms into identified physics objects, for example electron, tau or jet candidates. The complete processing chain is performed in a pipelined system at the LHC bunch-crossing frequency, with a fixed latency of about 1 µs. The first LHC run, from 2009-2013, provided a varied and challenging environment for first-level triggers. While the energy and luminosity were below the LHC design values, the pile-up conditions were similar to the nominal conditions. The physics ambitions of the experiment also tested the performance of the Level-1 system while keeping within the rate limits set by the detector readout. This presentation will ...

  4. Evolution of the ATLAS Metadata Interface (AMI)

    CERN Document Server

    Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian

    2015-01-01

    The ATLAS Metadata Interface (AMI) can be considered a mature application, having existed for more than 10 years. Over the years, the number of users and the number of functions provided for these users have increased. It has been necessary to adapt the hardware infrastructure in a seamless way so that the Quality of Service remains high. We will describe the evolution of the application from its initial state, using a single server with a MySQL backend database, to the current state, where we use a cluster of Virtual Machines on the French Tier-1 Cloud at Lyon, an Oracle database backend also at Lyon with replication to CERN using Oracle Streams, and a back-up server.

  5. Construction of a PIXE database for supporting PIXE studies. Database of experimental conditions and elemental concentration for various samples

    International Nuclear Information System (INIS)

    Itoh, J.; Saitoh, Y.; Futatsugawa, S.; Sera, K.; Ishii, K.

    2007-01-01

    A database of PIXE data accumulated at NMCC has been constructed. In order to fill out the database, new data are obtained wherever possible for the kinds of samples for which only few records exist. In addition, data taken under different measuring conditions are obtained for several samples. As the number of γ-ray spectra obtained with an HPGe detector, used for analyzing light elements such as fluorine, is overwhelmingly small in comparison with the number of usual PIXE spectra, γ-ray spectra and elemental concentrations of fluorine are obtained for as many food, environmental and hair samples as possible. In addition, data taken with an in-air PIXE system have been obtained for various samples. As a result, a database covering various research fields has been constructed, and it is expected to be useful for researchers who make use of analytical techniques. It is expected that this work will encourage many researchers to participate in the database and to cross-calibrate with each other in order to establish reliable analytical techniques. Moreover, the final goal of the database is to establish control concentration values for typical samples. As the first step towards establishing the control values, average elemental concentrations and their standard deviations in hair samples taken from 405 healthy Japanese subjects are obtained and tabulated according to sex and age. (author)
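
    The closing tabulation (mean and standard deviation of elemental concentrations by sex and age) is a straightforward group-by; a sketch with hypothetical hair-sample values follows, where the element, groups and numbers are all invented for illustration.

        import pandas as pd

        # Hypothetical hair-sample zinc concentrations (micrograms per gram).
        df = pd.DataFrame({"sex": ["F", "F", "M", "M", "M"],
                           "age": ["20-29", "20-29", "20-29", "30-39", "30-39"],
                           "Zn":  [185.0, 201.0, 172.0, 190.0, 178.0]})

        # Mean concentration and standard deviation tabulated by sex and age.
        print(df.groupby(["sex", "age"])["Zn"].agg(["mean", "std", "count"]))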

  6. High resolution heat atlases for demand and supply mapping

    DEFF Research Database (Denmark)

    Möller, Bernd; Nielsen, Steffen

    2014-01-01

    Significant reductions of heat demand, low-carbon and renewable energy sources, and district heating are key elements in 100% renewable energy systems. Appraisal of district heating along with energy-efficient buildings and individual heat supply requires a geographical representation of heat demand, energy efficiency and energy supply. The present paper describes a Heat Atlas built around a spatial database using geographical information systems (GIS). The atlas allows for per-building calculations of potentials and costs of energy savings, connectivity to existing district heat, and current heat supply and demand. For the entire building mass a conclusive link is established between the built environment and its heat supply. The expansion of district heating; the interconnection of distributed district heating systems; or the question whether to invest in ultra-efficient buildings...
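
    A sketch of the kind of per-building calculation such a heat atlas enables, using hypothetical building records and an assumed post-renovation target demand; the real atlas works on a GIS database of the full building stock, not an in-memory table.

        import pandas as pd

        # Hypothetical per-building records from the spatial database.
        buildings = pd.DataFrame({
            "heated_area_m2": [120, 950, 430],
            "demand_kwh_m2":  [165,  95, 210],   # current specific demand
        })

        target = 70  # assumed post-renovation demand, kWh/m2 per year
        buildings["demand_mwh"]  = (buildings.heated_area_m2
                                    * buildings.demand_kwh_m2 / 1000)
        buildings["savings_mwh"] = ((buildings.demand_kwh_m2 - target)
                                    .clip(lower=0)
                                    * buildings.heated_area_m2 / 1000)
        print(buildings[["demand_mwh", "savings_mwh"]].sum())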

  7. ATLAS Outreach Highlights

    CERN Document Server

    Cheatham, Susan; The ATLAS collaboration

    2016-01-01

    The ATLAS outreach team is very active, promoting particle physics to a broad range of audiences including physicists, the general public, policy makers, students and teachers, and the media. A selection of current outreach activities and new projects will be presented. Recent highlights include the new ATLAS public website and ATLAS Open Data, the very recent public release of 1 fb-1 of ATLAS data.

  8. ATLAS Simulation using Real Data: Embedding and Overlay

    CERN Document Server

    Haas, Andy; The ATLAS collaboration

    2016-01-01

    For some physics processes studied with the ATLAS detector, a simulation that is more accurate in some respects can be achieved by including real data in simulated events, with substantial potential improvements in the CPU, disk space, and memory usage of the standard simulation configuration, at the cost of significant database and networking challenges. Real proton-proton background events can be overlaid (at the detector digitization output stage) on a simulated hard-scatter process, to account for pile-up background (from nearby bunch crossings), cavern background, and detector noise. A similar method is used to account for the large underlying event from heavy-ion collisions, rather than directly simulating the full collision. Embedding replaces the muons found in Z->mumu decays in data with simulated taus at the same 4-momenta, thus preserving the underlying event and pile-up from the original data event. In all these cases, care must be taken to exactly match detector conditions (beamspot, magnetic fields, ali...
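
    The overlay idea at the digitization output stage amounts to a channel-by-channel combination of simulated-signal and real-background digits; a toy numpy sketch follows. Real overlay must also handle pedestals, noise and the matching of detector conditions, and the saturation value here is an assumption.

        import numpy as np

        def overlay(signal_digits, background_digits, saturation=1023):
            """Combine digits channel by channel, clipping at ADC saturation."""
            return np.minimum(signal_digits + background_digits, saturation)

        rng = np.random.default_rng(0)
        signal     = rng.integers(0, 200, size=8)  # simulated hard scatter
        background = rng.integers(0, 100, size=8)  # zero-bias real-data event
        print(overlay(signal, background))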

  9. ATLAS simulation using real data: Embedding and overlay

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00053405; The ATLAS collaboration

    2017-01-01

    For some physics processes studied with the ATLAS detector, a simulation that is more accurate in some respects can be achieved by including real data in simulated events, with substantial potential improvements in the CPU, disk space, and memory usage of the standard simulation configuration, at the cost of significant database and networking challenges. Real proton-proton background events can be overlaid (at the detector digitization output stage) on a simulated hard-scatter process, to account for pile-up background (from nearby bunch crossings), cavern background, and detector noise. A similar method is used to account for the large underlying event from heavy-ion collisions, rather than directly simulating the full collision. Embedding replaces the muons found in Z→μμ decays in data with simulated taus at the same 4-momenta, thus preserving the underlying event and pile-up from the original data event. In all these cases, care must be taken to exactly match detector conditions (beamspot, magnetic fields, al...

  10. Integrated System for Performance Monitoring of ATLAS TDAQ Network

    CERN Document Server

    Savu, D; The ATLAS collaboration; Martin, B; Sjoen, R; Batraneanu, S; Stancu, S

    2010-01-01

    The ATLAS TDAQ Network consists of three separate networks spanning four levels of the experimental building. Over 200 edge switches and 5 multi-blade chassis routers are used to interconnect 2000 processors, adding up to more than 7000 high-speed interfaces. In order to substantially speed up ad-hoc and post-mortem analysis, a scalable, yet flexible, integrated system was required for monitoring network statistics as well as environmental conditions, processor parameters and data-taking characteristics. For successful up-to-the-minute monitoring, information from many SNMP-compliant devices, independent databases and custom APIs is gathered, stored and displayed in an optimal way. Easy navigation and compact aggregation of multiple data sources were the main requirements; characteristics not found in any of the tested products, either open-source or commercial. This paper describes how performance, scalability and display issues were addressed and what challenges the project faced during development and deplo...
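
    A minimal sketch of the gather-and-store half of such a monitoring system: counters are polled from devices and appended to a time-series table, from which rates can later be derived. The snmp_get stub stands in for a real SNMP library call (e.g. via pysnmp); the host names, OID and schema are hypothetical.

        import sqlite3, time, random

        def snmp_get(host, oid):
            """Stand-in for a real SNMP GET; returns a fake, growing
            octet counter so the sketch runs anywhere."""
            return int(time.time()) * 1000 + random.randrange(1000)

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE ifcounters (ts INT, host TEXT, octets INT)")

        def poll(hosts, oid):
            now = int(time.time())
            for h in hosts:
                db.execute("INSERT INTO ifcounters VALUES (?,?,?)",
                           (now, h, snmp_get(h, oid)))
            db.commit()

        poll(["sw-edge-01", "sw-edge-02"], "1.3.6.1.2.1.2.2.1.10")
        # Rates are derived later as delta(octets)/delta(ts) between polls.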

  11. CERN Open Days 2013, Point 1 - ATLAS: ATLAS Experiment

    CERN Multimedia

    CERN Photolab

    2013-01-01

    Stand description: The ATLAS Experiment at CERN is one of the largest and most complex scientific endeavours ever assembled. The detector, located at collision point 1 of the LHC, is designed to explore the fundamental components of nature and to study the forces that shape our universe. The past year's discovery of a Higgs boson is one of the most important scientific achievements of our time, yet this is only one of many key goals of ATLAS. During a brief break in their journey, some of the 3000-member ATLAS collaboration will be taking time to share the excitement of this exploration with you. (On surface, no restricted access.) The exhibit at Point 1 will give visitors a chance to meet these modern-day explorers and to learn from them how answers to the most fundamental questions of mankind are being sought. Activities will include a visit to the ATLAS detector, located 80 m below ground; watching the prize-winning ATLAS movie in the ATLAS cinema; seeing real particle tracks in a cloud chamber and discussi...

  12. Performance of the ATLAS Precision Muon Chambers under LHC Operating Conditions

    CERN Document Server

    Deile, M.; Dubbert, J; Horvat, S; Kortner, O; Kroha, H; Manz, A; Mohrdieck, S; Rauscher, F; Richter, Robert; Staude, A

    2004-01-01

    For the muon spectrometer of the ATLAS detector at the Large Hadron Collider (LHC), large drift chambers consisting of 6 to 8 layers of pressurized drift tubes are used for precision tracking, covering an active area of 5000 m2 in the toroidal field of superconducting air-core magnets. The chambers have to provide a spatial resolution of 41 microns with an Ar:CO2 (93:7) gas mixture at an absolute pressure of 3 bar and a gas gain of 2×10^4. The environment in which the chambers will be operated is characterized by high neutron and photon background, with counting rates of up to 100 per cm2 and second. The resolution and efficiency of a chamber from the serial production for ATLAS have been investigated in a 100 GeV muon beam at photon irradiation rates as expected during LHC operation. A silicon strip detector telescope was used as external reference in the beam. The spatial resolution of a chamber is degraded by 4 µm at the highest background rate. The detection efficiency of the drift tubes is unchanged under irradiation...

  13. Evolution of the architecture of the ATLAS Metadata Interface (AMI)

    Science.gov (United States)

    Odier, J.; Aidel, O.; Albrand, S.; Fulachier, J.; Lambert, F.

    2015-12-01

    The ATLAS Metadata Interface (AMI) is now a mature application. Over the years, the number of users and the number of provided functions have dramatically increased. It is necessary to adapt the hardware infrastructure in a seamless way so that the quality of service remains high. We describe the evolution of AMI since its beginning, from being served by a single MySQL backend database server to the current state, with a cluster of virtual machines at the French Tier-1, an Oracle database at Lyon with complementary replication to the Oracle DB at CERN, and an AMI back-up server.

  14. Cyberinfrastructure for the digital brain: spatial standards for integrating rodent brain atlases.

    Science.gov (United States)

    Zaslavsky, Ilya; Baldock, Richard A; Boline, Jyl

    2014-01-01

    Biomedical research entails capture and analysis of massive data volumes and new discoveries arise from data integration and mining. This is only possible if data can be mapped onto a common framework such as the genome for genomic data. In neuroscience, the framework is intrinsically spatial and based on a number of paper atlases. This cannot meet today's data-intensive analysis and integration challenges. A scalable and extensible software infrastructure that is standards-based but open for novel data and resources is required for integrating information such as signal distributions, gene expression, neuronal connectivity, electrophysiology, anatomy, and developmental processes. Therefore, the International Neuroinformatics Coordinating Facility (INCF) initiated the development of a spatial framework for neuroscience data integration with an associated Digital Atlasing Infrastructure (DAI). A prototype implementation of this infrastructure for the rodent brain is reported here. The infrastructure is based on a collection of reference spaces to which data is mapped at the required resolution, such as the Waxholm Space (WHS), a 3D reconstruction of the brain generated using high-resolution, multi-channel microMRI. The core standards of the digital atlasing service-oriented infrastructure include Waxholm Markup Language (WaxML): an XML schema expressing a uniform information model for key elements such as coordinate systems, transformations, points of interest (POIs), labels, and annotations; and Atlas Web Services: interfaces for querying and updating atlas data. The services return WaxML-encoded documents with information about capabilities, spatial reference systems (SRSs) and structures, and execute coordinate transformations and POI-based requests. Key elements of the INCF-DAI cyberinfrastructure have been prototyped for both mouse and rat brain atlas sources, including the Allen Mouse Brain Atlas, UCSD Cell-Centered Database, and Edinburgh Mouse Atlas Project.
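
    Mapping a point of interest (POI) into a common reference space such as WHS is, in the simplest case, an affine coordinate transformation; a numpy sketch with an invented 4x4 matrix follows. Real INCF-DAI transformations are served by the Atlas Web Services and need not be affine.

        import numpy as np

        # Invented 4x4 homogeneous matrix: scale plus translation into WHS.
        T = np.array([[0.98, 0.00, 0.00,  1.2],
                      [0.00, 1.02, 0.00, -0.4],
                      [0.00, 0.00, 0.99,  2.1],
                      [0.00, 0.00, 0.00,  1.0]])

        def to_whs(poi_xyz):
            """Map a point of interest into the common reference space."""
            p = np.append(np.asarray(poi_xyz, float), 1.0)
            return (T @ p)[:3]

        print(to_whs([10.0, 5.0, -3.0]))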

  15. Cyberinfrastructure for the digital brain: spatial standards for integrating rodent brain atlases

    Directory of Open Access Journals (Sweden)

    Ilya eZaslavsky

    2014-09-01

    Biomedical research entails capture and analysis of massive data volumes and new discoveries arise from data integration and mining. This is only possible if data can be mapped onto a common framework such as the genome for genomic data. In neuroscience, the framework is intrinsically spatial and based on a number of paper atlases. This cannot meet today's data-intensive analysis and integration challenges. A scalable and extensible software infrastructure that is standards-based but open for novel data and resources is required for integrating information such as signal distributions, gene expression, neuronal connectivity, electrophysiology, anatomy, and developmental processes. Therefore, the International Neuroinformatics Coordinating Facility (INCF) initiated the development of a spatial framework for neuroscience data integration with an associated Digital Atlasing Infrastructure (DAI). A prototype implementation of this infrastructure for the rodent brain is reported here. The infrastructure is based on a collection of reference spaces to which data is mapped at the required resolution, such as the Waxholm Space (WHS), a 3D reconstruction of the brain generated using high-resolution, multi-channel microMRI. The core standards of the digital atlasing service-oriented infrastructure include Waxholm Markup Language (WaxML): an XML schema expressing a uniform information model for key elements such as coordinate systems, transformations, points of interest (POIs), labels, and annotations; and Atlas Web Services: interfaces for querying and updating atlas data. The services return WaxML-encoded documents with information about capabilities, spatial reference systems and structures, and execute coordinate transformations and POI-based requests. Key elements of the INCF-DAI cyberinfrastructure have been prototyped for both mouse and rat brain atlas sources, including the Allen Mouse Brain Atlas, UCSD Cell-Centered Database, and Edinburgh Mouse Atlas Project.

  16. Performance and scalability of the back-end sub-system in the ATLAS DAQ/EF prototype

    CERN Document Server

    Alexandrov, I N; Badescu, E; Burckhart, Doris; Caprini, M; Cohen, L; Duval, P Y; Hart, R; Jones, R; Kazarov, A; Kolos, S; Kotov, V; Laugier, D; Mapelli, Livio P; Moneta, L; Qian, Z; Radu, A A; Ribeiro, C A; Roumiantsev, V; Ryabov, Yu; Schweiger, D; Soloviev, I V

    2000-01-01

    The DAQ group of the future ATLAS experiment has developed a prototype system based on the trigger/DAQ architecture described in the ATLAS Technical Proposal to support studies of the full system functionality and architecture, as well as of available hardware and software technologies. One sub-system of this prototype is the back-end, which encompasses the software needed to configure, control and monitor the DAQ, but excludes the processing and transportation of physics data. The back-end consists of a number of components including the run control, the configuration databases and the message reporting system. The software has been developed using standard, external software technologies such as OO databases and CORBA. It has been ported to several C++ compilers and operating systems including Solaris, Linux, WNT and LynxOS. This paper gives an overview of the back-end software, its performance, scalability and current status. (17 refs).

  17. ATLAS (Automatic Tool for Local Assembly Structures) - A Comprehensive Infrastructure for Assembly, Annotation, and Genomic Binning of Metagenomic and Metatranscriptomic Data

    Energy Technology Data Exchange (ETDEWEB)

    White, Richard A.; Brown, Joseph M.; Colby, Sean M.; Overall, Christopher C.; Lee, Joon-Yong; Zucker, Jeremy D.; Glaesemann, Kurt R.; Jansson, Georg C.; Jansson, Janet K.

    2017-03-02

    ATLAS (Automatic Tool for Local Assembly Structures) is a comprehensive multi-omics data analysis pipeline that is massively parallel and scalable. ATLAS contains a modular analysis pipeline for assembly, annotation, quantification and genome binning of metagenomics and metatranscriptomics data, and a framework for reference metaproteomic database construction. ATLAS transforms raw sequence data into functional and taxonomic data at the microbial population level and provides genome-centric resolution through genome binning. ATLAS provides robust taxonomy based on majority voting of protein-coding open reading frames, rolled up at the contig level using a modified lowest common ancestor (LCA) analysis. ATLAS is user-friendly, easy to install through Bioconda, maintained as open source on GitHub, and implemented in Snakemake for modular, customizable workflows.
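
    A toy version of the contig-level majority-vote roll-up described above: per-ORF lineages are resolved rank by rank, keeping a rank only while one taxon holds a majority. This is a simplified LCA-style vote, not the actual ATLAS implementation; the threshold and lineages are illustrative.

        from collections import Counter

        def contig_taxonomy(orf_lineages, threshold=0.5):
            """Resolve taxonomy rank by rank: keep a rank only while one
            taxon holds more than `threshold` of the per-ORF votes."""
            consensus = []
            for rank in zip(*orf_lineages):        # domain, phylum, class, ...
                taxon, votes = Counter(rank).most_common(1)[0]
                if votes / len(rank) <= threshold:
                    break
                consensus.append(taxon)
            return consensus

        orfs = [("Bacteria", "Proteobacteria", "Gammaproteobacteria"),
                ("Bacteria", "Proteobacteria", "Alphaproteobacteria"),
                ("Bacteria", "Firmicutes",     "Bacilli")]
        print(contig_taxonomy(orfs))   # -> ['Bacteria', 'Proteobacteria']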

  18. An Oracle-based event index for ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00083337; The ATLAS collaboration; Dimitrov, Gancho

    2017-01-01

    The ATLAS EventIndex system has amassed a set of key quantities for a large number of ATLAS events into a Hadoop-based infrastructure for the purpose of providing the experiment with a number of event-wise services. Collecting these data in one place provides the opportunity to investigate various storage formats and technologies, assess which best serve the various use cases, and consider what other benefits alternative storage systems provide. In this presentation we describe how the data are imported into an Oracle RDBMS (relational database management system), the services we have built based on this architecture, and our experience with it. We have indexed about 26 billion real data events thus far and have designed the system to accommodate future data, with expected rates of 5 and 20 billion events per year. We have found this system offers outstanding performance for some fundamental use cases. In addition, profiting from the co-location of these data with other complementary metadata in AT...
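
    An event index is, at its core, a key-value lookup from (run, event) to a pointer into the data; a minimal sqlite sketch of the event-picking use case follows. The schema and GUID value are invented for illustration, not the Oracle design described above.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE eventindex (
                          run INTEGER, event INTEGER, lumiblock INTEGER,
                          guid TEXT,              -- pointer to the file
                          PRIMARY KEY (run, event))""")
        db.execute("INSERT INTO eventindex VALUES (?,?,?,?)",
                   (358031, 74205, 117, "HYPOTHETICAL-FILE-GUID"))

        # Event picking: find the file that holds one specific event.
        print(db.execute("SELECT guid, lumiblock FROM eventindex "
                         "WHERE run=? AND event=?", (358031, 74205)).fetchone())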

  19. Toolkit for data reduction to tuples for the ATLAS experiment

    International Nuclear Information System (INIS)

    Snyder, Scott; Krasznahorkay, Attila

    2012-01-01

    The final step in a HEP data-processing chain is usually to reduce the data to a ‘tuple’ form which can be efficiently read by interactive analysis tools such as ROOT. Often, this is implemented independently by each group analyzing the data, leading to duplicated effort and needless divergence in the format of the reduced data. ATLAS has implemented a common toolkit for performing this processing step. By using tools from this package, physics analysis groups can produce tuples customized for a particular analysis but which are still consistent in format and vocabulary with those produced by other physics groups. The package is designed so that almost all the code is independent of the specific form used to store the tuple. The code that does depend on this is grouped into a set of small backend packages. While the ROOT backend is the most used, backends also exist for HDF5 and for specialized databases. By now, the majority of ATLAS analyses rely on this package, and it is an important contributor to the ability of ATLAS to rapidly analyze physics data.
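
    The backend-independence described above can be sketched as a reduction function that is ignorant of the storage format, with small pluggable writer backends; CSV and JSON stand in here for the real ROOT and HDF5 backends, and all names are illustrative.

        import csv, json

        class CsvBackend:
            def write(self, path, rows):
                with open(path, "w", newline="") as f:
                    w = csv.DictWriter(f, fieldnames=sorted(rows[0]))
                    w.writeheader()
                    w.writerows(rows)

        class JsonBackend:
            def write(self, path, rows):
                with open(path, "w") as f:
                    json.dump(rows, f)

        def reduce_to_tuple(events, variables, backend, path):
            """Keep only the requested variables; the backend decides
            the on-disk format, so the tuple content stays consistent."""
            rows = [{v: ev[v] for v in variables} for ev in events]
            backend.write(path, rows)

        events = [{"pt": 41.2, "eta": 0.31}, {"pt": 77.9, "eta": -1.20}]
        reduce_to_tuple(events, ["pt", "eta"], CsvBackend(), "tuple.csv")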

  20. The version control service for ATLAS data acquisition configuration files

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

    To configure a data-taking session, the ATLAS systems and detectors store more than 160 MB of data-acquisition-related configuration information in OKS XML files [1]. The total number of files exceeds 1300, and they are updated by many system experts. In the past, we occasionally experienced problems after such updates, caused by XML syntax errors or by an inconsistent state of the files from the point of view of the overall ATLAS configuration. It was not always possible to know who had made the modification causing a problem, or how to go back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed on ATLAS Point-1. It excludes direct write access to the XML files stored in a central database repository. Instead, for an update the files are copied into a user repository, validated after modification and committed using a version control system. The system's callback updates the central repository. Also, it keeps track of all modifications pro...
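
    The validate-then-commit workflow can be sketched as a well-formedness check in front of a version-control commit; the real service performs fuller consistency validation against the overall ATLAS configuration and updates the central repository via a callback. The path and commit message below are hypothetical.

        import subprocess
        import xml.etree.ElementTree as ET

        def commit_config(path, message, repo="."):
            """Reject the update if the XML is not well-formed; otherwise
            commit it so every modification is attributable and revertible."""
            try:
                ET.parse(path)
            except ET.ParseError as err:
                raise SystemExit(f"rejected {path}: {err}")
            subprocess.run(["git", "-C", repo, "add", path], check=True)
            subprocess.run(["git", "-C", repo, "commit", "-m", message], check=True)

        # commit_config("segments.data.xml", "update MDT readout mapping")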

  1. ATLAS DBM Module Qualification

    Energy Technology Data Exchange (ETDEWEB)

    Soha, Aria [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gorisek, Andrej [J. Stefan Inst., Ljubljana (Slovenia); Zavrtanik, Marko [J. Stefan Inst., Ljubljana (Slovenia); Sokhranyi, Grygorii [J. Stefan Inst., Ljubljana (Slovenia); McGoldrick, Garrin [Univ. of Toronto, ON (Canada); Cerv, Matevz [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2014-06-18

    This is a technical scope of work (TSW) between the Fermi National Accelerator Laboratory (Fermilab) and the experimenters of the Jozef Stefan Institute, CERN, and the University of Toronto, who have committed to participate in beam tests to be carried out during the 2014 Fermilab Test Beam Facility program. Chemical Vapour Deposition (CVD) diamond has a number of properties that make it attractive for high energy physics detector applications. Its large band gap (5.5 eV) and large displacement energy (42 eV/atom) make it a material that is inherently radiation tolerant, with very low leakage currents and high thermal conductivity. CVD diamond is being investigated by the RD42 Collaboration for use very close to LHC interaction regions, where the most extreme radiation conditions are found. This document builds on that work and proposes a highly spatially segmented diamond-based luminosity monitor to complement the time-segmented ATLAS Beam Conditions Monitor (BCM), so that when the Minimum Bias Trigger Scintillators (MBTS) and LUCID (LUminosity measurement using a Cherenkov Integrating Detector) have difficulty functioning, the ATLAS luminosity measurement is not compromised.

  2. A Database for Climatic Conditions around Europe for Promoting GSHP Solutions

    Directory of Open Access Journals (Sweden)

    Michele De Carli

    2018-02-01

    Weather plays an important role in the energy use of buildings. For this reason, it is necessary to define the proper boundary conditions in terms of the different parameters affecting energy and comfort in buildings. They are also the basis for determining the ground temperature at different locations, as well as for determining the potential for using geothermal energy. This paper presents a database of climates in Europe that has been used in a freeware tool developed as part of the H2020 research project named “Cheap-GSHPs”. The standard Köppen-Geiger climate classification has been matched with the weather data provided by the ENERGYPLUS and METEONORM software databases. The Test Reference Years of more than 300 locations have been considered. These locations have been labelled according to the degree-days for heating and cooling, as well as by the Köppen-Geiger scale. A comprehensive data set of weather conditions in Europe has thus been created and used as input for a GSHP sizing software, helping the user to select the weather conditions closest to the location of interest. The proposed method is based on lapse rates and has been tested at two locations in Switzerland and Ireland. It has proved adequate for the project purposes, considering the spatial distribution and density of available data and the lower computing load, in particular for locations where altitude is the main factor controlling temperature variations.
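
    Two of the ingredients mentioned above, degree-day labelling and lapse-rate adjustment of station temperatures to a site's altitude, are easily sketched; the base temperature, lapse rate and sample temperatures below are illustrative assumptions, not values from the project database.

        def degree_days(daily_mean_temps, base=18.0):
            """Heating and cooling degree-days from daily mean temperatures (°C)."""
            hdd = sum(max(base - t, 0.0) for t in daily_mean_temps)
            cdd = sum(max(t - base, 0.0) for t in daily_mean_temps)
            return hdd, cdd

        def adjust_for_altitude(temp_c, dz_m, lapse_rate=0.0065):
            """Shift a station temperature to a site dz_m metres higher,
            using a standard lapse rate of 6.5 K per km."""
            return temp_c - lapse_rate * dz_m

        temps = [adjust_for_altitude(t, 400) for t in (2.0, 5.5, 11.0, 17.5, 21.0)]
        print(degree_days(temps))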

  3. ATLAS Thesis Award 2017

    CERN Multimedia

    Anthony, Katarina

    2018-01-01

    Winners of the ATLAS Thesis Award were presented with certificates and glass cubes during a ceremony on 22 February, 2018. They are pictured here with Karl Jakobs (ATLAS Spokesperson), Max Klein (ATLAS Collaboration Board Chair) and Katsuo Tokushuku (ATLAS Collaboration Board Deputy Chair).

  4. NASA Technical Interchange Meeting (TIM): Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box

    Science.gov (United States)

    ONeil, D. A.; Craig, D. A.; Christensen, C. B.; Gresham, E. C.

    2005-01-01

    The objective of this Technical Interchange Meeting was to increase the quantity and quality of technical, cost, and programmatic data used to model the impact of investing in different technologies. The focus of this meeting was the Technology Tool Box (TTB), a database of performance, operations, and programmatic parameters provided by technologists and used by systems engineers. The TTB is the data repository used by a system of models known as the Advanced Technology Lifecycle Analysis System (ATLAS). This report describes the results of the November meeting, and also provides background information on ATLAS and the TTB.

  5. ATLAS

    CERN Multimedia

    Akhnazarov, V; Canepa, A; Bremer, J; Burckhart, H; Cattai, A; Voss, R; Hervas, L; Kaplon, J; Nessi, M; Werner, P; Ten kate, H; Tyrvainen, H; Vandelli, W; Krasznahorkay, A; Gray, H; Alvarez gonzalez, B; Eifert, T F; Rolando, G; Oide, H; Barak, L; Glatzer, J; Backhaus, M; Schaefer, D M; Maciejewski, J P; Milic, A; Jin, S; Von torne, E; Limbach, C; Medinnis, M J; Gregor, I; Levonian, S; Schmitt, S; Waananen, A; Monnier, E; Muanza, S G; Pralavorio, P; Talby, M; Tiouchichine, E; Tocut, V M; Rybkin, G; Wang, S; Lacour, D; Laforge, B; Ocariz, J H; Bertoli, W; Malaescu, B; Sbarra, C; Yamamoto, A; Sasaki, O; Koriki, T; Hara, K; Da silva gomes, A; Carvalho maneira, J; Marcalo da palma, A; Chekulaev, S; Tikhomirov, V; Snesarev, A; Buzykaev, A; Maslennikov, A; Peleganchuk, S; Sukharev, A; Kaplan, B E; Swiatlowski, M J; Nef, P D; Schnoor, U; Oakham, G F; Ueno, R; Orr, R S; Abouzeid, O; Haug, S; Peng, H; Kus, V; Vitek, M; Temming, K K; Dang, N P; Meier, K; Schultz-coulon, H; Geisler, M P; Sander, H; Schaefer, U; Ellinghaus, F; Rieke, S; Nussbaumer, A; Liu, Y; Richter, R; Kortner, S; Fernandez-bosman, M; Ullan comes, M; Espinal curull, J; Chiriotti alvarez, S; Caubet serrabou, M; Valladolid gallego, E; Kaci, M; Carrasco vela, N; Lancon, E C; Besson, N E; Gautard, V; Bracinik, J; Bartsch, V C; Potter, C J; Lester, C G; Moeller, V A; Rosten, J; Crooks, D; Mathieson, K; Houston, S C; Wright, M; Jones, T W; Harris, O B; Byatt, T J; Dobson, E; Hodgson, P; Hodgkinson, M C; Dris, M; Karakostas, K; Ntekas, K; Oren, D; Duchovni, E; Etzion, E; Oren, Y; Ferrer, L M; Testa, M; Doria, A; Merola, L; Sekhniaidze, G; Giordano, R; Ricciardi, S; Milazzo, A; Falciano, S; De pedis, D; Dionisi, C; Veneziano, S; Cardarelli, R; Verzegnassi, C; Soualah, R; Ochi, A; Ohshima, T; Kishiki, S; Linde, F L; Vreeswijk, M; Werneke, P; Muijs, A; Vankov, P H; Jansweijer, P P M; Dale, O; Lund, E; Bruckman de renstrom, P; Dabrowski, W; Adamek, J D; Wolters, H; Micu, L; Pantea, D; Tudorache, V; Mjoernmark, J; Klimek, P J; Ferrari, A; Abdinov, O; Akhoundov, A; Hashimov, R; Shelkov, G; Khubua, J; Ladygin, E; Lazarev, A; Glagolev, V; Dedovich, D; Lykasov, G; Zhemchugov, A; Zolnikov, Y; Ryabenko, M; Sivoklokov, S; Vasilyev, I; Shalimov, A; Lobanov, M; Paramoshkina, E; Mosidze, M; Bingul, A; Nodulman, L J; Guarino, V J; Yoshida, R; Drake, G R; Calafiura, P; Haber, C; Quarrie, D R; Alonso, J R; Anderson, C; Evans, H; Lammers, S W; Baubock, M; Anderson, K; Petti, R; Suhr, C A; Linnemann, J T; Richards, R A; Tollefson, K A; Holzbauer, J L; Stoker, D P; Pier, S; Nelson, A J; Isakov, V; Martin, A J; Adelman, J A; Paganini, M; Gutierrez, P; Snow, J M; Pearson, B L; Cleland, W E; Savinov, V; Wong, W; Goodson, J J; Li, H; Lacey, R A; Gordeev, A; Gordon, H; Lanni, F; Nevski, P; Rescia, S; Kierstead, J A; Liu, Z; Yu, W W H; Bensinger, J; Hashemi, K S; Bogavac, D; Cindro, V; Hoeferkamp, M R; Coelli, S; Iodice, M; Piegaia, R N; Alonso, F; Wahlberg, H P; Barberio, E L; Limosani, A; Rodd, N L; Jennens, D T; Hill, E C; Pospisil, S; Smolek, K; Schaile, D A; Rauscher, F G; Adomeit, S; Mattig, P M; Wahlen, H; Volkmer, F; Calvente lopez, S; Sanchis peris, E J; Pallin, D; Podlyski, F; Says, L; Boumediene, D E; Scott, W; Phillips, P W; Greenall, A; Turner, P; Gwilliam, C B; Kluge, T; Wrona, B; Sellers, G J; Millward, G; Adragna, P; Hartin, A; Alpigiani, C; Piccaro, E; Bret cano, M; Hughes jones, R E; Mercer, D; Oh, A; Chavda, V S; Carminati, L; Cavasinni, V; Fedin, O; Patrichev, S; Ryabov, Y; Nesterov, S; Grebenyuk, O; Sasso, J; Mahmood, H; Polsdofer, E; Dai, T; 
Ferretti, C; Liu, H; Hegazy, K H; Benjamin, D P; Zobernig, G; Ban, J; Brooijmans, G H; Keener, P; Williams, H H; Le geyt, B C; Hines, E J; Fadeyev, V; Schumm, B A; Law, A T; Kuhl, A D; Neubauer, M S; Shang, R; Gagliardi, G; Calabro, D; Conta, C; Zinna, M; Jones, G; Li, J; Stradling, A R; Hadavand, H K; Mcguigan, P; Chiu, P; Baldelomar, E; Stroynowski, R A; Kehoe, R L; De groot, N; Timmermans, C; Lach-heb, F; Addy, T N; Nakano, I; Moreno lopez, D; Grosse-knetter, J; Tyson, B; Rude, G D; Tafirout, R; Benoit, P; Danielsson, H O; Elsing, M; Fassnacht, P; Froidevaux, D; Ganis, G; Gorini, B; Lasseur, C; Lehmann miotto, G; Kollar, D; Aleksa, M; Sfyrla, A; Duehrssen-debling, K; Fressard-batraneanu, S; Van der ster, D C; Bortolin, C; Schumacher, J; Mentink, M; Geich-gimbel, C; Yau wong, K H; Lafaye, R; Crepe-renaudin, S; Albrand, S; Hoffmann, D; Pangaud, P; Meessen, C; Hrivnac, J; Vernay, E; Perus, A; Henrot versille, S L; Le dortz, O; Derue, F; Piccinini, M; Polini, A; Terada, S; Arai, Y; Ikeno, M; Fujii, H; Nagano, K; Ukegawa, F; Aguilar saavedra, J A; Conde muino, P; Castro, N F; Eremin, V; Kopytine, M; Sulin, V; Tsukerman, I; Korol, A; Nemethy, P; Bartoldus, R; Glatte, A; Chelsky, S; Van nieuwkoop, J; Bellerive, A; Sinervo, J K; Battaglia, A; Barbier, G J; Pohl, M; Rosselet, L; Alexandre, G B; Prokoshin, F; Pezoa rivera, R A; Batkova, L; Kladiva, E; Stastny, J; Kubes, T; Vidlakova, Z; Esch, H; Homann, M; Herten, L G; Zimmermann, S U; Pfeifer, B; Stenzel, H; Andrei, G V; Wessels, M; Buescher, V; Kleinknecht, K; Fiedler, F M; Schroeder, C D; Fernandez, E; Mir martinez, L; Vorwerk, V; Bernabeu verdu, J; Salt, J; Civera navarrete, J V; Bernard, R; Berriaud, C P; Chevalier, L P; Hubbard, R; Schune, P; Nikolopoulos, K; Batley, J R; Brochu, F M; Phillips, A W; Teixeira-dias, P J; Rose, M B D; Buttar, C; Buckley, A G; Nurse, E L; Larner, A B; Boddy, C; Henderson, J; Costanzo, D; Tarem, S; Maccarrone, G; Laurelli, P F; Alviggi, M; Chiaramonte, R; Izzo, V; Palumbo, V; Fraternali, M; Crosetti, G; Marchese, F; Yamaguchi, Y; Hessey, N P; Mechnich, J M; Liebig, W; Kastanas, K A; Sjursen, T B; Zalieckas, J; Cameron, D G; Banka, P; Kowalewska, A B; Dwuznik, M; Mindur, B; Boldea, V; Hedberg, V; Smirnova, O; Sellden, B; Allahverdiyev, T; Gornushkin, Y; Koultchitski, I; Tokmenin, V; Chizhov, M; Gongadze, A; Khramov, E; Sadykov, R; Krasnoslobodtsev, I; Smirnova, L; Kramarenko, V; Minaenko, A; Zenin, O; Beddall, A J; Ozcan, E V; Hou, S; Wang, S; Moyse, E; Willocq, S; Chekanov, S; Le compte, T J; Love, J R; Ciocio, A; Hinchliffe, I; Tsulaia, V; Gomez, A; Luehring, F; Zieminska, D; Huth, J E; Gonski, J L; Oreglia, M; Tang, F; Shochet, M J; Costin, T; Mcleod, A; Uzunyan, S; Martin, S P; Pope, B G; Schwienhorst, R H; Brau, J E; Ptacek, E S; Milburn, R H; Sabancilar, E; Lauer, R; Saleem, M; Mohamed meera lebbai, M R; Lou, X; Reeves, K B; Rijssenbeek, M; Novakova, P N; Rahm, D; Steinberg, P A; Wenaus, T J; Paige, F; Ye, S; Kotcher, J R; Assamagan, K A; Oliveira damazio, D; Maeno, T; Henry, A; Dushkin, A; Costa, G; Meroni, C; Resconi, S; Lari, T; Biglietti, M; Lohse, T; Gonzalez silva, M L; Monticelli, F G; Saavedra, A F; Patel, N D; Ciodaro xavier, T; Asevedo nepomuceno, A; Lefebvre, M; Albert, J E; Kubik, P; Faltova, J; Turecek, D; Solc, J; Schaile, O; Ebke, J; Losel, P J; Zeitnitz, C; Sturm, P D; Barreiro alonso, F; Modesto alapont, P; Soret medel, J; Garzon alama, E J; Gee, C N; Mccubbin, N A; Sankey, D; Emeliyanov, D; Dewhurst, A L; Houlden, M A; Klein, M; Burdin, S; Lehan, A K; Eisenhandler, E; Lloyd, S; Traynor, D 
P; Ibbotson, M; Marshall, R; Pater, J; Freestone, J; Masik, J; Haughton, I; Manousakis katsikakis, A; Sampsonidis, D; Krepouri, A; Roda, C; Sarri, F; Fukunaga, C; Nadtochiy, A; Kara, S O; Timm, S; Alam, S M; Rashid, T; Goldfarb, S; Espahbodi, S; Marley, D E; Rau, A W; Dos anjos, A R; Haque, S; Grau, N C; Havener, L B; Thomson, E J; Newcomer, F M; Hansl-kozanecki, G; Deberg, H A; Takeshita, T; Goggi, V; Ennis, J S; Olness, F I; Kama, S; Ordonez sanz, G; Koetsveld, F; Elamri, M; Mansoor-ul-islam, S; Lemmer, B; Kawamura, G; Bindi, M; Schulte, S; Kugel, A; Kretz, M P; Kurchaninov, L; Blanchot, G; Chromek-burckhart, D; Di girolamo, B; Francis, D; Gianotti, F; Nordberg, M Y; Pernegger, H; Roe, S; Boyd, J; Wilkens, H G; Pauly, T; Fabre, C; Tricoli, A; Bertet, D; Ruiz martinez, M A; Arnaez, O L; Lenzi, B; Boveia, A J; Gillberg, D I; Davies, J M; Zimmermann, R; Uhlenbrock, M; Kraus, J K; Narayan, R T; John, A; Dam, M; Padilla aranda, C; Bellachia, F; Le flour chollet, F M; Jezequel, S; Dumont dayot, N; Fede, E; Mathieu, M; Gensolen, F D; Alio, L; Arnault, C; Bouchel, M; Ducorps, A; Kado, M M; Lounis, A; Zhang, Z P; De vivie de regie, J; Beau, T; Bruni, A; Bruni, G; Grafstrom, P; Romano, M; Lasagni manghi, F; Massa, L; Shaw, K; Ikegami, Y; Tsuno, S; Kawanishi, Y; Benincasa, G; Blagov, M; Fedorchuk, R; Shatalov, P; Romaniouk, A; Belotskiy, K; Timoshenko, S; Hooft van huysduynen, L; Lewis, G H; Wittgen, M M; Mader, W F; Rudolph, C J; Gumpert, C; Mamuzic, J; Rudolph, G; Schmid, P; Corriveau, F; Belanger-champagne, C; Yarkoni, S; Leroy, C; Koffas, T; Harack, B D; Weber, M S; Beck, H; Leger, A; Gonzalez sevilla, S; Zhu, Y; Gao, J; Zhang, X; Blazek, T; Rames, J; Sicho, P; Kouba, T; Sluka, T; Lysak, R; Ristic, B; Kompatscher, A E; Von radziewski, H; Groll, M; Meyer, C P; Oberlack, H; Stonjek, S M; Cortiana, G; Werthenbach, U; Ibragimov, I; Czirr, H S; Cavalli-sforza, M; Puigdengoles olive, C; Tallada crespi, P; Marti i garcia, S; Gonzalez de la hoz, S; Guyot, C; Meyer, J; Schoeffel, L O; Garvey, J; Hawkes, C; Hillier, S J; Staley, R J; Salvatore, P F; Santoyo castillo, I; Carter, J; Yusuff, I B; Barlow, N R; Berry, T S; Savage, G; Wraight, K G; Steele, G E; Hughes, G; Walder, J W; Love, P A; Crone, G J; Waugh, B M; Boeser, S; Sarkar, A M; Holmes, A; Massey, R; Pinder, A; Nicholson, R; Korolkova, E; Katsoufis, I; Maltezos, S; Tsipolitis, G; Leontsinis, S; Levinson, L J; Shoa, M; Abramowicz, H E; Bella, G; Gershon, A; Urkovsky, E; Taiblum, N; Gatti, C; Della pietra, M; Lanza, A; Negri, A; Flaminio, V; Lacava, F; Petrolo, E; Pontecorvo, L; Rosati, S; Zanello, L; Pasqualucci, E; Di ciaccio, A; Giordani, M; Yamazaki, Y; Jinno, T; Nomachi, M; De jong, P J; Ferrari, P; Homma, J; Van der graaf, H; Igonkina, O B; Stugu, B S; Buanes, T; Pedersen, M; Turala, M; Olszewski, A J; Koperny, S Z; Onofre, A; Castro nunes fiolhais, M; Alexa, C; Cuciuc, C M; Akesson, T P A; Hellman, S L; Milstead, D A; Bondyakov, A; Pushnova, V; Budagov, Y; Minashvili, I; Romanov, V; Sniatkov, V; Tskhadadze, E; Kalinovskaya, L; Shalyugin, A; Tavkhelidze, A; Rumyantsev, L; Karpov, S; Soloshenko, A; Vostrikov, A; Borissov, E; Solodkov, A; Vorob'ev, A; Sidorov, S; Malyaev, V; Lee, S; Grudzinski, J J; Virzi, J S; Vahsen, S E; Lys, J; Penwell, J W; Yan, Z; Bernard, C S; Barreiro guimaraes da costa, J P; Oliver, J N; Merritt, F S; Brubaker, E M; Kapliy, A; Kim, J; Zutshi, V V; Burghgrave, B O; Abolins, M A; Arabidze, G; Caughron, S A; Frey, R E; Radloff, P T; Schernau, M; Murillo garcia, R; Porter, R A; Mccormick, C A; Karn, P J; Sliwa, K J; Demers 
konezny, S M; Strauss, M G; Mueller, J A; Izen, J M; Klimentov, A; Lynn, D; Polychronakos, V; Radeka, V; Sondericker, J I I I; Bathe, S; Duffin, S; Chen, H; De castro faria salgado, P E; Kersevan, B P; Lacker, H M; Schulz, H; Kubota, T; Tan, K G; Yabsley, B D; Nunes de moura junior, N; Pinfold, J; Soluk, R A; Ouellette, E A; Leitner, R; Sykora, T; Solar, M; Sartisohn, G; Hirschbuehl, D; Huning, D; Fischer, J; Terron cuadrado, J; Glasman kuguel, C B; Lacasta llacer, C; Lopez-amengual, J; Calvet, D; Chevaleyre, J; Daudon, F; Montarou, G; Guicheney, C; Calvet, S P J; Tyndel, M; Dervan, P J; Maxfield, S J; Hayward, H S; Beck, G; Cox, B; Da via, C; Paschalias, P; Manolopoulou, M; Ragusa, F; Cimino, D; Ezzi, M; Fiuza de barros, N F; Yildiz, H; Ciftci, A K; Turkoz, S; Zain, S B; Tegenfeldt, F; Chapman, J W; Panikashvili, N; Bocci, A; Altheimer, A D; Martin, F F; Fratina, S; Jackson, B D; Grillo, A A; Seiden, A; Watts, G T; Mangiameli, S; Johns, K A; O'grady, F T; Errede, D R; Darbo, G; Ferretto parodi, A; Leahu, M C; Farbin, A; Ye, J; Liu, T; Wijnen, T A; Naito, D; Takashima, R; Sandoval usme, C E; Zinonos, Z; Moreno llacer, M; Agricola, J B; Mcgovern, S A; Sakurai, Y; Trigger, I M; Qing, D; De silva, A S; Butin, F; Dell'acqua, A; Hawkings, R J; Lamanna, M; Mapelli, L; Passardi, G; Rembser, C; Tremblet, L; Andreazza, W; Dobos, D A; Koblitz, B; Bianco, M; Dimitrov, G V; Schlenker, S; Armbruster, A J; Rammensee, M C; Romao rodrigues, L F; Peters, K; Pozo astigarraga, M E; Yi, Y; Desch, K K; Huegging, F G; Muller, K K; Stillings, J A; Schaetzel, S; Xella, S; Hansen, J D; Colas, J; Daguin, G; Wingerter, I; Ionescu, G D; Ledroit, F; Lucotte, A; Clement, B E; Stark, J; Clemens, J; Djama, F; Knoops, E; Coadou, Y; Vigeolas-choury, E; Feligioni, L; Iconomidou-fayard, L; Imbert, P; Schaffer, A C; Nikolic, I; Trincaz-duvoid, S; Warin, P; Camard, A F; Ridel, M; Pires, S; Giacobbe, B; Spighi, R; Villa, M; Negrini, M; Sato, K; Gavrilenko, I; Akimov, A; Khovanskiy, V; Talyshev, A; Voronkov, A; Hakobyan, H; Mallik, U; Shibata, A; Konoplich, R; Barklow, T L; Koi, T; Straessner, A; Stelzer, B; Robertson, S H; Vachon, B; Stoebe, M; Keyes, R A; Wang, K; Billoud, T R V; Strickland, V; Batygov, M; Krieger, P; Palacino caviedes, G D; Gay, C W; Jiang, Y; Han, L; Liu, M; Zenis, T; Lokajicek, M; Staroba, P; Tasevsky, M; Popule, J; Svatos, M; Seifert, F; Landgraf, U; Lai, S T; Schmitt, K H; Achenbach, R; Schuh, N; Kiesling, C; Macchiolo, A; Nisius, R; Schacht, P; Von der schmitt, J G; Kortner, O; Atlay, N B; Segura sole, E; Grinstein, S; Neissner, C; Bruckner, D M; Oliver garcia, E; Boonekamp, M; Perrin, P; Gaillot, F M; Wilson, J A; Thomas, J P; Thompson, P D; Palmer, J D; Falk, I E; Chavez barajas, C A; Sutton, M R; Robinson, D; Kaneti, S A; Wu, T; Robson, A; Shaw, C; Buzatu, A; Qin, G; Jones, R; Bouhova-thacker, E V; Viehhauser, G; Weidberg, A R; Gilbert, L; Johansson, P D C; Orphanides, M; Vlachos, S; Behar harpaz, S; Papish, O; Lellouch, D J H; Turgeman, D; Benary, O; La rotonda, L; Vena, R; Tarasio, A; Marzano, F; Gabrielli, A; Di stante, L; Liberti, B; Aielli, G; Oda, S; Nozaki, M; Takeda, H; Hayakawa, T; Miyazaki, K; Maeda, J; Sugimoto, T; Pettersson, N E; Bentvelsen, S; Groenstege, H L; Lipniacka, A; Vahabi, M; Ould-saada, F; Chwastowski, J J; Hajduk, Z; Kaczmarska, A; Olszowska, J B; Trzupek, A; Staszewski, R P; Palka, M; Constantinescu, S; Jarlskog, G; Lundberg, B L A; Pearce, M; Ellert, M F; Bannikov, A; Fechtchenko, A; Iambourenko, V; Kukhtin, V; Pozdniakov, V; Topilin, N; Vorozhtsov, S; Khassanov, A; 
Fliaguine, V; Kharchenko, D; Nikolaev, K; Kotenov, K; Kozhin, A; Zenin, A; Ivashin, A; Golubkov, D; Beddall, A; Su, D; Dallapiccola, C J; Cranshaw, J M; Price, L; Stanek, R W; Gieraltowski, G; Zhang, J; Gilchriese, M; Shapiro, M; Ahlen, S; Morii, M; Taylor, F E; Miller, R J; Phillips, F H; Torrence, E C; Wheeler, S J; Benedict, B H; Napier, A; Hamilton, S F; Petrescu, T A; Boyd, G R J; Jayasinghe, A L; Smith, J M; Mc carthy, R L; Adams, D L; Le vine, M J; Zhao, X; Patwa, A M; Baker, M; Kirsch, L; Krstic, J; Simic, L; Filipcic, A; Seidel, S C; Cantore-cavalli, D; Baroncelli, A; Kind, O M; Scarcella, M J; Maidantchik, C L L; Seixas, J; Balabram filho, L E; Vorobel, V; Spousta, M; Strachota, P; Vokac, P; Slavicek, T; Bergmann, B L; Biebel, O; Kersten, S; Srinivasan, M; Trefzger, T; Vazeille, F; Insa, C; Kirk, J; Middleton, R; Burke, S; Klein, U; Morris, J D; Ellis, K V; Millward, L R; Giokaris, N; Ioannou, P; Angelidakis, S; Bouzakis, K; Andreazza, A; Perini, L; Chtcheguelski, V; Spiridenkov, E; Yilmaz, M; Kaya, U; Ernst, J; Mahmood, A; Saland, J; Kutnink, T; Holler, J; Kagan, H P; Wang, C; Pan, Y; Xu, N; Ji, H; Willis, W J; Tuts, P M; Litke, A; Wilder, M; Rothberg, J; Twomey, M S; Rizatdinova, F; Loch, P; Rutherfoord, J P; Varnes, E W; Barberis, D; Osculati-becchi, B; Brandt, A G; Turvey, A J; Benchekroun, D; Nagasaka, Y; Thanakornworakij, T; Quadt, A; Nadal serrano, J; Magradze, E; Nackenhorst, O; Musheghyan, H; Kareem, M; Chytka, L; Perez codina, E; Stelzer-chilton, O; Brunel, B; Henriques correia, A M; Dittus, F; Hatch, M; Haug, F; Hauschild, M; Huhtinen, M; Lichard, P; Schuh-erhard, S; Spigo, G; Avolio, G; Tsarouchas, C; Ahmad, I; Backes, M P; Barisits, M; Gadatsch, S; Cerv, M; Sicoe, A D; Nattamai sekar, L P; Fazio, D; Shan, L; Sun, X; Gaycken, G F; Hemperek, T; Petersen, T C; Alonso diaz, A; Moynot, M; Werlen, M; Hryn'ova, T; Gallin-martel, M; Wu, M; Touchard, F; Menouni, M; Fougeron, D; Le guirriec, E; Chollet, J C; Veillet, J; Barrillon, P; Prat, S; Krasny, M W; Roos, L; Boudarham, G; Lefebvre, G; Boscherini, D; Valentinetti, S; Acharya, B S; Miglioranzi, S; Kanzaki, J; Unno, Y; Yasu, Y; Iwasaki, H; Tokushuku, K; Maio, A; Rodrigues fernandes, B J; Pinto figueiredo raimundo ribeiro, N M; Bot, A; Shmeleva, A; Zaidan, R; Djilkibaev, R; Mincer, A I; Salnikov, A; Aracena, I A; Schwartzman, A G; Silverstein, D J; Fulsom, B G; Anulli, F; Kuhn, D; White, M J; Vetterli, M J; Stockton, M C; Mantifel, R L; Azuelos, G; Shoaleh saadi, D; Savard, P; Clark, A; Ferrere, D; Gaumer, O P; Diaz gutierrez, M A; Liu, Y; Dubnickova, A; Sykora, I; Strizenec, P; Weichert, J; Zitek, K; Naumann, T; Goessling, C; Klingenberg, R; Jakobs, K; Rurikova, Z; Werner, M W; Arnold, H R; Buscher, D; Hanke, P; Stamen, R; Dietzsch, T A; Kiryunin, A; Salihagic, D; Buchholz, P; Pacheco pages, A; Sushkov, S; Porto fernandez, M D C; Cruz josa, R; Vos, M A; Schwindling, J; Ponsot, P; Charignon, C; Kivernyk, O; Goodrick, M J; Hill, J C; Green, B J; Quarman, C V; Bates, R L; Allwood-spiers, S E; Quilty, D; Chilingarov, A; Long, R E; Barton, A E; Konstantinidis, N; Simmons, B; Davison, A R; Christodoulou, V; Wastie, R L; Gallas, E J; Cox, J; Dehchar, M; Behr, J K; Pickering, M A; Filippas, A; Panagoulias, I; Tenenbaum katan, Y D; Roth, I; Pitt, M; Citron, Z H; Benhammou, Y; Amram, N Y N; Soffer, A; Gorodeisky, R; Antonelli, M; Chiarella, V; Curatolo, M; Esposito, B; Nicoletti, G; Martini, A; Sansoni, A; Carlino, G; Del prete, T; Bini, C; Vari, R; Kuna, M; Pinamonti, M; Itoh, Y; Colijn, A P; Klous, S; Garitaonandia elejabarrieta, 
H; Rosendahl, P L; Taga, A V; Malecki, P; Malecki, P; Wolter, M W; Kowalski, T; Korcyl, G M; Caprini, M; Caprini, I; Dita, P; Olariu, A; Tudorache, A; Lytken, E; Hidvegi, A; Aliyev, M; Alexeev, G; Bardin, D; Kakurin, S; Lebedev, A; Golubykh, S; Chepurnov, V; Gostkin, M; Kolesnikov, V; Karpova, Z; Davkov, K I; Yeletskikh, I; Grishkevich, Y; Rud, V; Myagkov, A; Nikolaenko, V; Starchenko, E; Zaytsev, A; Fakhrutdinov, R; Cheine, I; Istin, S; Sahin, S; Teng, P; Chu, M L; Trilling, G H; Heinemann, B; Richoz, N; Degeorge, C; Youssef, S; Pilcher, J; Cheng, Y; Purohit, M V; Kravchenko, A; Calkins, R E; Blazey, G; Hauser, R; Koll, J D; Reinsch, A; Brost, E C; Allen, B W; Lankford, A J; Ciobotaru, M D; Slagle, K J; Haffa, B; Mann, A; Loginov, A; Cummings, J T; Loyal, J D; Skubic, P L; Boudreau, J F; Lee, B E; Redlinger, G; Wlodek, T; Carcassi, G; Sexton, K A; Yu, D; Deng, W; Metcalfe, J E; Panitkin, S; Sijacki, D; Mikuz, M; Kramberger, G; Tartarelli, G F; Farilla, A; Stanescu, C; Herrberg, R; Alconada verzini, M J; Brennan, A J; Varvell, K; Marroquim, F; Gomes, A A; Do amaral coutinho, Y; Gingrich, D; Moore, R W; Dolejsi, J; Valkar, S; Broz, J; Jindra, T; Kohout, Z; Kral, V; Mann, A W; Calfayan, P P; Langer, T; Hamacher, K; Sanny, B; Wagner, W; Flick, T; Redelbach, A R; Ke, Y; Higon-rodriguez, E; Donini, J N; Lafarguette, P; Adye, T J; Baines, J; Barnett, B; Wickens, F J; Martin, V J; Jackson, J N; Prichard, P; Kretzschmar, J; Martin, A J; Walker, C J; Potter, K M; Kourkoumelis, C; Tzamarias, S; Houiris, A G; Iliadis, D; Fanti, M; Bertolucci, F; Maleev, V; Sultanov, S; Rosenberg, E I; Krumnack, N E; Bieganek, C; Diehl, E B; Mc kee, S P; Eppig, A P; Harper, D R; Liu, C; Schwarz, T A; Mazor, B; Looper, K A; Wiedenmann, W; Huang, P; Stahlman, J M; Battaglia, M; Nielsen, J A; Zhao, T; Khanov, A; Kaushik, V S; Vichou, E; Liss, A M; Gemme, C; Morettini, P; Parodi, F; Passaggio, S; Rossi, L; Kuzhir, P; Ignatenko, A; Ferrari, R; Spairani, M; Pianori, E; Sekula, S J; Firan, A I; Cao, T; Hetherly, J W; Gouighri, M; Vassilakopoulos, V; Long, M C; Shimojima, M; Sawyer, L H; Brummett, R E; Losada, M A; Schorlemmer, A L; Mantoani, M; Bawa, H S; Mornacchi, G; Nicquevert, B; Palestini, S; Stapnes, S; Veness, R; Kotamaki, M J; Sorde, C; Iengo, P; Campana, S; Goossens, L; Zajacova, Z; Pribyl, L; Poveda torres, J; Marzin, A; Conti, G; Carrillo montoya, G D; Kroseberg, J; Gonella, L; Velz, T; Schmitt, S; Lobodzinska, E M; Lovschall-jensen, A E; Galster, G; Perrot, G; Cailles, M; Berger, N; Barnovska, Z; Delsart, P; Lleres, A; Tisserant, S; Grivaz, J; Matricon, P; Bellagamba, L; Bertin, A; Bruschi, M; De castro, S; Semprini cesari, N; Fabbri, L; Rinaldi, L; Quayle, W B; Truong, T N L; Kondo, T; Haruyama, T; Ng, C; Do valle wemans, A; Almeida veloso, F M; Konovalov, S; Ziegler, J M; Su, D; Lukas, W; Prince, S; Ortega urrego, E J; Teuscher, R J; Knecht, N; Pretzl, K; Borer, C; Gadomski, S; Koch, B; Kuleshov, S; Brooks, W K; Antos, J; Kulkova, I; Chudoba, J; Chyla, J; Tomasek, L; Bazalova, M; Messmer, I; Tobias, J; Sundermann, J E; Kuehn, S S; Kluge, E; Scharf, V L; Barillari, T; Kluth, S; Menke, S; Weigell, P; Schwegler, P; Ziolkowski, M; Casado lechuga, P M; Garcia, C; Sanchez, J; Costa mezquita, M J; Valero biot, J A; Laporte, J; Nikolaidou, R; Virchaux, M; Nguyen, V T H; Charlton, D; Harrison, K; Slater, M W; Newman, P R; Parker, A M; Ward, P; Mcgarvie, S A; Kilvington, G J; D'auria, S; O'shea, V; Mcglone, H M; Fox, H; Henderson, R; Kartvelishvili, V; Davies, B; Sherwood, P; Fraser, J T; Lancaster, M A; Tseng, J C; 
Hays, C P; Apolle, R; Dixon, S D; Parker, K A; Gazis, E; Papadopoulou, T; Panagiotopoulou, E; Karastathis, N; Hershenhorn, A D; Milov, A; Groth-jensen, J; Bilokon, H; Miscetti, S; Canale, V; Rebuzzi, D M; Capua, M; Bagnaia, P; De salvo, A; Gentile, S; Safai tehrani, F; Solfaroli camillocci, E; Sasao, N; Tsunada, K; Massaro, G; Magrath, C A; Van kesteren, Z; Beker, M G; Van den wollenberg, W; Bugge, L; Buran, T; Read, A L; Gjelsten, B K; Banas, E A; Turnau, J; Derendarz, D K; Kisielewska, D; Chesneanu, D; Rotaru, M; Maurer, J B; Wong, M L; Lund-jensen, B; Asman, B; Jon-and, K B; Silverstein, S B; Johansen, M; Alexandrov, I; Iatsounenko, I; Krumshteyn, Z; Peshekhonov, V; Rybaltchenko, K; Samoylov, V; Cheplakov, A; Kekelidze, G; Lyablin, M; Teterine, V; Bednyakov, V; Kruchonak, U; Shiyakova, M M; Demichev, M; Denisov, S P; Fenyuk, A; Djobava, T; Salukvadze, G; Cetin, S A; Brau, B P; Pais, P R; Proudfoot, J; Van gemmeren, P; Zhang, Q; Beringer, J A; Ely, R; Leggett, C; Pengg, F X; Barnett, M R; Quick, R E; Williams, S; Gardner jr, R W; Huston, J; Brock, R; Wanotayaroj, C; Unel, G N; Taffard, A C; Frate, M; Baker, K O; Tipton, P L; Hutchison, A; Walsh, B J; Norberg, S R; Su, J; Tsybyshev, D; Caballero bejar, J; Ernst, M U; Wellenstein, H; Vudragovic, D; Vidic, I; Gorelov, I V; Toms, K; Alimonti, G; Petrucci, F; Kolanoski, H; Smith, J; Jeng, G; Watson, I J; Guimaraes ferreira, F; Miranda vieira xavier, F; Araujo pereira, R; Poffenberger, P; Sopko, V; Elmsheuser, J; Wittkowski, J; Glitza, K; Gorfine, G W; Ferrer soria, A; Fuster verdu, J A; Sanchis lozano, A; Reinmuth, G; Busato, E; Haywood, S J; Mcmahon, S J; Qian, W; Villani, E G; Laycock, P J; Poll, A J; Rizvi, E S; Foster, J M; Loebinger, F; Forti, A; Plano, W G; Brown, G J A; Kordas, K; Vegni, G; Ohsugi, T; Iwata, Y; Cherkaoui el moursli, R; Sahin, M; Akyazi, E; Carlsen, A; Kanwal, B; Cochran jr, J H; Aronnax, M V; Lockner, M J; Zhou, B; Levin, D S; Weaverdyck, C J; Grom, G F; Rudge, A; Ebenstein, W L; Jia, B; Yamaoka, J; Jared, R C; Wu, S L; Banerjee, S; Lu, Q; Hughes, E W; Alkire, S P; Degenhardt, J D; Lipeles, E D; Spencer, E N; Savine, A; Cheu, E C; Lampl, W; Veatch, J R; Roberts, K; Atkinson, M J; Odino, G A; Polesello, G; Martin, T; White, A P; Stephens, R; Grinbaum sarkisyan, E; Vartapetian, A; Yu, J; Sosebee, M; Thilagar, P A; Spurlock, B; Bonde, R; Filthaut, F; Klok, P; Hoummada, A; Ouchrif, M; Pellegrini, G; Rafi tatjer, J M; Navarro, G A; Blumenschein, U; Weingarten, J C; Mueller, D; Graber, L; Gao, Y; Bode, A; Capeans garrido, M D M; Carli, T; Wells, P; Beltramello, O; Vuillermet, R; Dudarev, A; Salzburger, A; Torchiani, C I; Serfon, C L G; Sloper, J E; Duperrier, G; Lilova, P T; Knecht, M O; Lassnig, M; Anders, G; Deviveiros, P; Young, C; Sforza, F; Shaochen, C; Lu, F; Wermes, N; Wienemann, P; Schwindt, T; Hansen, P H; Hansen, J B; Pingel, A M; Massol, N; Elles, S L; Hallewell, G D; Rozanov, A; Vacavant, L; Fournier, D A; Poggioli, L; Puzo, P M; Tanaka, R; Escalier, M A; Makovec, N; Rezynkina, K; De cecco, S; Cavalleri, P G; Massa, I; Zoccoli, A; Tanaka, S; Odaka, S; Mitsui, S; Tomasio pina, J A; Santos, H F; Satsounkevitch, I; Harkusha, S; Baranov, S; Nechaeva, P; Kayumov, F; Kazanin, V; Asai, M; Mount, R P; Nelson, T K; Smith, D; Kenney, C J; Malone, C M; Kobel, M; Friedrich, F; Grohs, J P; Jais, W J; O'neil, D C; Warburton, A T; Vincter, M; Mccarthy, T G; Groer, L S; Pham, Q T; Taylor, W J; La marra, D; Perrin, E; Wu, X; Bell, W H; Delitzsch, C M; Feng, C; Zhu, C; Tokar, S; Bruncko, D; Kupco, A; Marcisovsky, M; Jakoubek, T; 
Bruneliere, R; Aktas, A; Narrias villar, D I; Tapprogge, S; Mattmann, J; Kroha, H; Crespo, J; Korolkov, I; Cavallaro, E; Cabrera urban, S; Mitsou, V; Kozanecki, W; Mansoulie, B; Pabot, Y; Etienvre, A; Bauer, F; Chevallier, F; Bouty, A R; Watkins, P; Watson, A; Faulkner, P J W; Curtis, C J; Murillo quijada, J A; Grout, Z J; Chapman, J D; Cowan, G D; George, S; Boisvert, V; Mcmahon, T R; Doyle, A T; Thompson, S A; Britton, D; Smizanska, M; Campanelli, M; Butterworth, J M; Loken, J; Renton, P; Barr, A J; Issever, C; Short, D; Crispin ortuzar, M; Tovey, D R; French, R; Rozen, Y; Alexander, G; Kreisel, A; Conventi, F; Raulo, A; Schioppa, M; Susinno, G; Tassi, E; Giagu, S; Luci, C; Nisati, A; Cobal, M; Ishikawa, A; Jinnouchi, O; Bos, K; Verkerke, W; Vermeulen, J; Van vulpen, I B; Kieft, G; Mora, K D; Olsen, F; Rohne, O M; Pajchel, K; Nilsen, J K; Wosiek, B K; Wozniak, K W; Badescu, E; Jinaru, A; Bohm, C; Johansson, E K; Sjoelin, J B R; Clement, C; Buszello, C P; Huseynova, D; Boyko, I; Popov, B; Poukhov, O; Vinogradov, V; Tsiareshka, P; Skvorodnev, N; Soldatov, A; Chuguev, A; Gushchin, V; Yazici, E; Lutz, M S; Malon, D; Vanyashin, A; Lavrijsen, W; Spieler, H; Biesiada, J L; Bahr, M; Kong, J; Tatarkhanov, M; Ogren, H; Van kooten, R J; Cwetanski, P; Butler, J M; Shank, J T; Chakraborty, D; Ermoline, I; Sinev, N; Whiteson, D O; Corso radu, A; Huang, J; Werth, M P; Kastoryano, M; Meirose da silva costa, B; Namasivayam, H; Hobbs, J D; Schamberger jr, R D; Guo, F; Potekhin, M; Popovic, D; Gorisek, A; Sokhrannyi, G; Hofsajer, I W; Mandelli, L; Ceradini, F; Graziani, E; Giorgi, F; Zur nedden, M E G; Grancagnolo, S; Volpi, M; Nunes hanninger, G; Rados, P K; Milesi, M; Cuthbert, C J; Black, C W; Fink grael, F; Fincke-keeler, M; Keeler, R; Kowalewski, R V; Berghaus, F O; Qi, M; Davidek, T; Tas, P; Jakubek, J; Duckeck, G; Walker, R; Mitterer, C A; Harenberg, T; Sandvoss, S A; Del peso, J; Llorente merino, J; Gonzalez millan, V; Irles quiles, A; Crouau, M; Gris, P L Y; Liauzu, S; Romano saez, S M; Gallop, B J; Jones, T J; Austin, N C; Morris, J; Duerdoth, I; Thompson, R J; Kelly, M P; Leisos, A; Garas, A; Pizio, C; Venda pinto, B A; Kudin, L; Qian, J; Wilson, A W; Mietlicki, D; Long, J D; Sang, Z; Arms, K E; Rahimi, A M; Moss, J J; Oh, S H; Parker, S I; Parsons, J; Cunitz, H; Vanguri, R S; Sadrozinski, H; Lockman, W S; Martinez-mc kinney, G; Goussiou, A; Jones, A; Lie, K; Hasegawa, Y; Olcese, M; Gilewsky, V; Harrison, P F; Janus, M; Spangenberg, M; De, K; Ozturk, N; Pal, A K; Darmora, S; Bullock, D J; Oviawe, O; Derkaoui, J E; Rahal, G; Sircar, A; Frey, A S; Stolte, P; Rosien, N; Zoch, K; Li, L; Schouten, D W; Catinaccio, A; Ciapetti, M; Delruelle, N; Ellis, N; Farthouat, P; Hoecker, A; Klioutchnikova, T; Macina, D; Malyukov, S; Spiwoks, R D; Unal, G P; Vandoni, G; Petersen, B A; Pommes, K; Nairz, A M; Wengler, T; Mladenov, D; Solans sanchez, C A; Lantzsch, K; Schmieden, K; Jakobsen, S; Ritsch, E; Sciuccati, A; Alves dos santos, A M; Ouyang, Q; Zhou, M; Brock, I C; Janssen, J; Katzy, J; Anders, C F; Nilsson, B S; Bazan, A; Di ciaccio, L; Yildizkaya, T; Collot, J; Malek, F; Trocme, B S; Breugnon, P; Godiot, S; Adam bourdarios, C; Coulon, J; Duflot, L; Petroff, P G; Zerwas, D; Lieuvin, M; Calderini, G; Laporte, D; Ocariz, J; Gabrielli, A; Ohska, T K; Kurochkin, Y; Kantserov, V; Vasilyeva, L; Speransky, M; Smirnov, S; Antonov, A; Bulekov, O; Tikhonov, Y; Sargsyan, L; Vardanyan, G; Budick, B; Kocian, M L; Luitz, S; Young, C C; Grenier, P J; Kelsey, M; Black, J E; Kneringer, E; Jussel, P; Horton, A J; Beaudry, J; 
Chandra, A; Ereditato, A; Topfel, C M; Mathieu, R; Bucci, F; Muenstermann, D; White, R M; He, M; Urban, J; Straka, M; Vrba, V; Schumacher, M; Parzefall, U; Mahboubi, K; Sommer, P O; Koepke, L H; Bethke, S; Moser, H; Wiesmann, M; Walkowiak, W A; Fleck, I J; Martinez-perez, M; Sanchez sanchez, C A; Jorgensen roca, S; Accion garcia, E; Sainz ruiz, C A; Valls ferrer, J A; Amoros vicente, G; Vives torrescasana, R; Ouraou, A; Formica, A; Hassani, S; Watson, M F; Cottin buracchio, G F; Bussey, P J; Saxon, D; Ferrando, J E; Collins-tooth, C L; Hall, D C; Cuhadar donszelmann, T; Dawson, I; Duxfield, R; Argyropoulos, T; Brodet, E; Livneh, R; Shougaev, K; Reinherz, E I; Guttman, N; Beretta, M M; Vilucchi, E; Aloisio, A; Patricelli, S; Caprio, M; Cevenini, F; De vecchi, C; Livan, M; Rimoldi, A; Vercesi, V; Ayad, R; Mastroberardino, A; Ciapetti, G; Luminari, L; Rescigno, M; Santonico, R; Salamon, A; Del papa, C; Kurashige, H; Homma, Y; Tomoto, M; Horii, Y; Sugaya, Y; Hanagaki, K; Bobbink, G; Kluit, P M; Koffeman, E N; Van eijk, B; Lee, H; Eigen, G; Dorholt, O; Strandlie, A; Strzempek, P B; Dita, S; Stoicea, G; Chitan, A; Leven, S S; Moa, T; Brenner, R; Ekelof, T J C; Olshevskiy, A; Roumiantsev, V; Chlachidze, G; Zimine, N; Gusakov, Y; Grigalashvili, N; Mineev, M; Potrap, I; Barashkou, A; Shoukavy, D; Shaykhatdenov, B; Pikelner, A; Gladilin, L; Ammosov, V; Abramov, A; Arik, M; Sahinsoy, M; Uysal, Z; Azizi, K; Hotinli, S C; Zhou, S; Berger, E; Blair, R; Underwood, D G; Einsweiler, K; Garcia-sciveres, M A; Siegrist, J L; Kipnis, I; Dahl, O; Holland, S; Barbaro galtieri, A; Smith, P T; Parua, N; Franklin, M; Mercurio, K M; Tong, B; Pod, E; Cole, S G; Hopkins, W H; Guest, D H; Severini, H; Marsicano, J J; Abbott, B K; Wang, Q; Lissauer, D; Ma, H; Takai, H; Rajagopalan, S; Protopopescu, S D; Snyder, S S; Undrus, A; Popescu, R N; Begel, M A; Blocker, C A; Amelung, C; Mandic, I; Macek, B; Tucker, B H; Citterio, M; Troncon, C; Orestano, D; Taccini, C; Romeo, G L; Dova, M T; Taylor, G N; Gesualdi manhaes, A; Mcpherson, R A; Sobie, R; Taylor, R P; Dolezal, Z; Kodys, P; Slovak, R; Sopko, B; Vacek, V; Sanders, M P; Hertenberger, R; Meineck, C; Becks, K; Kind, P; Sandhoff, M; Cantero garcia, J; De la torre perez, H; Castillo gimenez, V; Ros, E; Hernandez jimenez, Y; Chadelas, R; Santoni, C; Washbrook, A J; O'brien, B J; Wynne, B M; Mehta, A; Vossebeld, J H; Landon, M; Teixeira dias castanheira, M; Cerrito, L; Keates, J R; Fassouliotis, D; Chardalas, M; Manousos, A; Grachev, V; Seliverstov, D; Sedykh, E; Cakir, O; Ciftci, R; Edson, W; Prell, S A; Rosati, M; Stroman, T; Jiang, H; Neal, H A; Li, X; Gan, K K; Smith, D S; Kruse, M C; Ko, B R; Leung fook cheong, A M; Cole, B; Angerami, A R; Greene, Z S; Kroll, J I; Van berg, R P; Forbush, D A; Lubatti, H; Raisher, J; Shupe, M A; Wolin, S; Oshita, H; Gaudio, G; Das, R; Konig, A C; Croft, V A; Harvey, A; Maaroufi, F; Melo, I; Greenwood jr, Z D; Shabalina, E; Mchedlidze, G; Drechsler, E; Rieger, J K; Blackston, M; Colombo, T

    2002-01-01

    ATLAS is a general-purpose experiment for recording proton-proton collisions at the LHC. The ATLAS collaboration consists of 144 participating institutions (June 1998) with more than 1750 physicists and engineers (700 from non-Member States). The detector design has been optimized to cover the largest possible range of LHC physics: searches for Higgs bosons and alternative schemes for the spontaneous symmetry-breaking mechanism; searches for supersymmetric particles, new gauge bosons, leptoquarks, and quark and lepton compositeness indicating extensions to the Standard Model and new physics beyond it; studies of the origin of CP violation via high-precision measurements of CP-violating B-decays; and high-precision measurements of the third quark family, such as the top-quark mass and decay properties, rare decays of B-hadrons, spectroscopy of rare B-hadrons, and $B^0_s$-mixing. The ATLAS detector, shown in the Figure, includes an inner tracking detector inside a 2 T solenoid providing an axial...

  6. ATLAS DDM/DQ2 & NoSQL databases: Use cases and experiences

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    NoSQL databases. This includes distributed file systems like HDFS that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value/document stores, like HBase, Cassandra or MongoDB. These databases provide solutions to particular types...
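    As a minimal sketch of the document-store pattern described above (not ATLAS code), the following uses pymongo; the database, collection and dataset names are hypothetical:

        # Schema-less storage and aggregation of trace-like records in MongoDB.
        from pymongo import MongoClient

        client = MongoClient("localhost", 27017)
        traces = client["ddm"]["traces"]          # hypothetical names

        # Each trace is a plain JSON-like document; no schema migration needed.
        traces.insert_one({"dataset": "data12_8TeV.Egamma", "site": "CERN-PROD",
                           "operation": "get", "bytes": 123456789})

        # Summary query of the kind that is expensive on a normalized RDBMS.
        for row in traces.aggregate([{"$group": {"_id": "$dataset",
                                                 "accesses": {"$sum": 1}}}]):
            print(row["_id"], row["accesses"])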

  7. Supporting ATLAS

    CERN Multimedia

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator. The installation of the feet is scheduled to finish during January 2004 with an installation precision at the 1 mm level despite their height of 5.3 metres. The manufacture was carried out in Russia (Company Izhorskiye Zavody in St. Petersburg), as part of a Russian and JINR Dubna in-kind contribution to ATLAS. Involved in the installation is a team from IHEP-Protvino (Russia), the ATLAS technical co-ordination team at CERN, and the CERN survey team. In all, about 15 people are involved. After the feet are in place, the barrel toroid magnet and the barrel calorimeters will be installed. This will keep the ATLAS team busy for the entire year 2004.

  8. First steps towards a European atlas of natural radiation: status of the European indoor radon map

    International Nuclear Information System (INIS)

    Dubois, G.; Bossew, P.; Tollefsen, T.; De Cort, M.

    2010-01-01

    Within the context of its institutional scientific support to the European Commission, in 2005 the Radioactivity Environmental Monitoring (REM) group at the Joint Research Centre of the European Commission started to explore the possibility of mapping indoor radon in European houses as a first step towards preparing a European Atlas of Natural Radiation. The main objective of such an atlas is to contribute to familiarizing the public with its naturally radioactive environment. The process of preparing the atlas should also provide the scientific community with a database of information that can be used for further studies and for highlighting regions with elevated levels of natural radiation. This document presents the status of the European indoor radon (Rn) map, first statistical results, and outlines of forthcoming challenges.

  9. 17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

    CERN Multimedia

    Mona Schweizer

    2008-01-01

    17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

  10. First-year experience with the ATLAS online monitoring framework

    International Nuclear Information System (INIS)

    Corso-Radu, A

    2010-01-01

    ATLAS is one of the four experiments at the Large Hadron Collider (LHC) at CERN, which was put into operation this year. The challenging experimental environment and the extreme detector complexity required the development of a highly scalable distributed monitoring framework, which is currently being used to monitor the quality of the data being taken as well as the operational conditions of the hardware and software elements of the detector, trigger and data acquisition systems. At the moment the ATLAS Trigger/DAQ system is distributed over more than 1000 computers, which is about one third of the final ATLAS size. Every minute of an ATLAS data taking session the monitoring framework serves several thousand physics events to monitoring data analysis applications, handles more than 4 million histogram updates coming from more than 4 thousand applications, executes 10 thousand advanced data quality checks for a subset of those histograms, and displays histograms and results of these checks on several dozen monitors installed in the main and satellite ATLAS control rooms. This note presents an overview of the online monitoring software framework and describes the experience gained during an extensive commissioning period as well as during the first phase of LHC beam in September 2008. Performance results obtained on the current ATLAS DAQ system will also be presented, showing that the performance of the framework is adequate for the final ATLAS system.
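    A toy illustration of the kind of automated data quality check run at this scale follows; the bin-wise Poisson tolerance is an assumption for the sketch, not the actual ATLAS algorithm:

        import numpy as np

        def dq_check(hist, reference, n_sigma=5.0):
            """Flag a histogram whose bins deviate from a reference
            by more than n_sigma crude Poisson errors."""
            hist = np.asarray(hist, dtype=float)
            reference = np.asarray(reference, dtype=float)
            sigma = np.sqrt(np.maximum(reference, 1.0))
            pulls = np.abs(hist - reference) / sigma
            return bool(np.all(pulls < n_sigma)), float(pulls.max())

        ok, worst = dq_check([98, 203, 51], [100, 200, 50])
        print("PASS" if ok else "FAIL", "worst pull:", worst)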

  11. The ATLAS PanDA Monitoring System and its Evolution

    Science.gov (United States)

    Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.

    2011-12-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.
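    A minimal sketch of the Django + JSON pattern the migration adopts, separating data preparation from client-side presentation; the view, URL and field names are hypothetical, not the PanDA monitor's actual code:

        # monitor/views.py
        from django.http import JsonResponse

        def job_summary(request):
            site = request.GET.get("site", "ALL")
            # The real monitor would query the job tables via the ORM here;
            # canned numbers keep the sketch self-contained.
            counts = {"defined": 10, "running": 42, "finished": 1280, "failed": 17}
            return JsonResponse({"site": site, "states": counts})

        # monitor/urls.py
        # from django.urls import path
        # urlpatterns = [path("jobs/summary/", job_summary)]

    The browser front end then fetches this JSON via AJAX and renders it client-side, which is what removes the HTML generation from the Python application.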

  12. The ATLAS PanDA Monitoring System and its Evolution

    International Nuclear Information System (INIS)

    Klimentov, A; Nevski, P; Wenaus, T; Potekhin, M

    2011-01-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.

  13. The ATLAS Silicon Microstrip Tracker

    CERN Document Server

    Haefner, Petra

    2010-01-01

    In December 2009 the ATLAS experiment at the CERN Large Hadron Collider (LHC) recorded the first proton-proton collisions at a centre-of-mass energy of 900 GeV. This was followed by collisions at the unprecedented energy of 7 TeV in March 2010. The SemiConductor Tracker (SCT) is a precision tracking device in ATLAS made up of silicon micro-strip detectors processed in the planar p-in-n technology. The signal from the strips is processed in the front-end ASICs working in binary readout mode. Data is transferred to the off-detector readout electronics via optical fibers. The completed SCT has been installed inside the ATLAS experiment. Since then the detector has been operated for two years under realistic conditions. Calibration data has been taken and analysed to determine the performance of the system. In addition, extensive commissioning with cosmic ray events has been performed both with and without magnetic field. The sensor behaviour in magnetic field was studied by measurements of the Lorentz angle. After ...

  14. Advanced Alignment of the ATLAS Tracking System

    CERN Document Server

    Pedraza Lopez, S; The ATLAS collaboration

    2012-01-01

    In order to reconstruct trajectories of charged particles, ATLAS is equipped with a tracking system built using different technologies embedded in a 2 T solenoidal magnetic field. ATLAS physics goals require high-resolution, unbiased measurement of all charged particle kinematic parameters in order to assure accurate invariant mass reconstruction and interaction and decay vertex finding. These critically depend on the systematic effects related to the alignment of the tracking system. In order to eliminate malicious systematic deformations, various advanced tools and techniques have been put in place. These include information from known mass resonances, the energy of electrons and positrons measured by the electromagnetic calorimeters, etc. Despite being stable under normal running conditions, the ATLAS tracking system responds to sudden environmental changes (temperature, magnetic field) with small collective deformations. These have to be identified and corrected in order to assure uniform, highest-quality tracking...

  15. Probabilistic liver atlas construction.

    Science.gov (United States)

    Dura, Esther; Domingo, Juan; Ayala, Guillermo; Marti-Bonmati, Luis; Goceri, E

    2017-01-13

    Anatomical atlases are 3D volumes or shapes representing an organ or structure of the human body. They contain either the prototypical shape of the object of interest together with other shapes representing its statistical variations (statistical atlas) or a probability map of belonging to the object (probabilistic atlas). Probabilistic atlases are mostly built with simple estimations only involving the data at each spatial location. A new method for probabilistic atlas construction that uses a generalized linear model is proposed. This method aims to improve the estimation of the probability to be covered by the liver. Furthermore, all methods to build an atlas involve previous coregistration of the sample of shapes available. The influence of the geometrical transformation adopted for registration in the quality of the final atlas has not been sufficiently investigated. The ability of an atlas to adapt to a new case is one of the most important quality criteria that should be taken into account. The presented experiments show that some methods for atlas construction are severely affected by the previous coregistration step. We show the good performance of the new approach. Furthermore, results suggest that extremely flexible registration methods are not always beneficial, since they can reduce the variability of the atlas and hence its ability to give sensible values of probability when used as an aid in segmentation of new cases.
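    The "simple estimation" baseline that the paper improves upon can be written in a few lines: the voxel-wise relative frequency of the organ label over the co-registered sample. This sketch shows only that baseline, not the paper's generalized-linear-model estimator:

        import numpy as np

        def probabilistic_atlas(masks):
            """Voxel-wise frequency of the organ label across co-registered
            binary masks; each output value estimates P(voxel in organ)."""
            stack = np.stack([np.asarray(m, dtype=float) for m in masks])
            return stack.mean(axis=0)

        # Three toy 2x2 "masks" standing in for registered segmentations.
        masks = [np.array([[1, 0], [1, 1]]),
                 np.array([[1, 0], [0, 1]]),
                 np.array([[1, 1], [0, 1]])]
        print(probabilistic_atlas(masks))   # [[1. 0.33...] [0.33... 1.]]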

  16. Nerve Surgeons' Assessment of the Role of Eduard Pernkopf's Atlas of Topographic and Applied Human Anatomy in Surgical Practice.

    Science.gov (United States)

    Yee, Andrew; Coombs, Demetrius M; Hildebrandt, Sabine; Seidelman, William E; Coert, J Henk; Mackinnon, Susan E

    2018-05-08

    Pernkopf's atlas of Anatomy contains anatomical plates with detailed images of the peripheral nerves. Its use is controversial due to the author's association with the "Third Reich" and the potential depiction of victims of the Holocaust. The ethical implications of using this atlas for informing surgical planning have not been assessed. The objectives were to (1) assess the role of Pernkopf's atlas in nerve surgeons' current practice and (2) determine whether a proposal for its ethical handling may provide possible guidance for use in surgery and surgical education. Members of the American Society for Peripheral Nerve and PASSIO Education (a video-based learning platform) were surveyed and 182 responses were collected. The survey introduced the historical origin of Pernkopf's atlas, and respondents were asked whether they would use the atlas under specific conditions intended to serve as a recommendation for its ethical handling. An anatomical plate comparison between Netter's and Pernkopf's atlases was performed to compare anatomical accuracy and surgical utility. Fifty-nine percent of respondents were aware of Pernkopf's atlas, with 13% currently using it. Aware of the historical facts, 69% were comfortable using the atlas, 15% uncomfortable, and 17% undecided. Additional information on conditions for an ethical approach to the use of the atlas led 76% of those "uncomfortable" and "undecided" to become "comfortable" with use. While the use of Pernkopf's atlas remains controversial, a proposal detailing conditions for an ethical approach to its use provides new guidance in surgical planning and education.

  17. Historical land use databases: a new layer of information for geographical research

    NARCIS (Netherlands)

    Kramer, H.; Mücher, C.A.; Hazeu, G.W.

    2011-01-01

    In this paper we describe how historical land use information has been derived for the whole of Europe, using the World Atlas of Agriculture, scale 1:2,500,000. This paper describes the process of converting the analog land-use maps to a digital European historical land-use database, the Historical

  18. Report to users of ATLAS

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1995-05-01

    This report contains discussions of the following areas: status of the ATLAS accelerator; highlights of recent research at ATLAS; a concept for an advanced exotic beam facility based on ATLAS; the program advisory committee; the ATLAS executive committee; and ATLAS and the ANL Physics Division on the World Wide Web.

  19. Evolution of the Architecture of the ATLAS Metadata Interface (AMI)

    CERN Document Server

    Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian

    2015-01-01

    The ATLAS Metadata Interface (AMI) is now a mature application. Over the years, the number of users and the number of provided functions have dramatically increased. It is necessary to adapt the hardware infrastructure in a seamless way so that the quality of service remains high. We describe the evolution from the beginning of the application's life, using a single server with a MySQL backend database, to the current state, in which a cluster of virtual machines on the French Tier 1 cloud at Lyon, an Oracle database also at Lyon, with replication to Oracle at CERN, and a back-up server are used.

  20. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by HammerCloud, to automatic exclusion from production or analysis activities.

  1. Parcellation of the Healthy Neonatal Brain into 107 Regions Using Atlas Propagation through Intermediate Time Points in Childhood.

    Science.gov (United States)

    Blesa, Manuel; Serag, Ahmed; Wilkinson, Alastair G; Anblagan, Devasuda; Telford, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Macnaught, Gillian; Semple, Scott I; Bastin, Mark E; Boardman, James P

    2016-01-01

    Neuroimage analysis pipelines rely on parcellated atlases generated from healthy individuals to provide anatomic context to structural and diffusion MRI data. Atlases constructed using adult data introduce bias into studies of early brain development. We aimed to create a neonatal brain atlas of healthy subjects that can be applied to multi-modal MRI data. Structural and diffusion 3T MRI scans were acquired soon after birth from 33 typically developing neonates born at term (mean postmenstrual age at birth 39(+5) weeks, range 37(+2)-41(+6)). An adult brain atlas (SRI24/TZO) was propagated to the neonatal data using temporal registration via childhood templates with dense temporal samples (NIH Pediatric Database), with the final atlas (Edinburgh Neonatal Atlas, ENA33) constructed using the Symmetric Group Normalization (SyGN) method. After this step, the computed final transformations were applied to T2-weighted data, and fractional anisotropy, mean diffusivity, and tissue segmentations to provide a multi-modal atlas with 107 anatomical regions; a symmetric version was also created to facilitate studies of laterality. Volumes of each region of interest were measured to provide reference data from normal subjects. Because this atlas is generated from step-wise propagation of adult labels through intermediate time points in childhood, it may serve as a useful starting point for modeling brain growth during development.

  2. Parcellation of the healthy neonatal brain into 107 regions using atlas propagation through intermediate time points in childhood

    Directory of Open Access Journals (Sweden)

    Manuel eBlesa Cabez

    2016-05-01

    Neuroimage analysis pipelines rely on parcellated atlases generated from healthy individuals to provide anatomic context to structural and diffusion MRI data. Atlases constructed using adult data introduce bias into studies of early brain development. We aimed to create a neonatal brain atlas of healthy subjects that can be applied to multi-modal MRI data. Structural and diffusion 3T MRI scans were acquired soon after birth from 33 typically developing neonates born at term (mean postmenstrual age at birth 39+5 weeks, range 37+2-41+6). An adult brain atlas (SRI24/TZO) was propagated to the neonatal data using temporal registration via childhood templates with dense temporal samples (NIH Pediatric Database), with the final atlas (Edinburgh Neonatal Atlas, ENA33) constructed using the Symmetric Group Normalization method. After this step, the computed final transformations were applied to T2-weighted data, and fractional anisotropy, mean diffusivity, and tissue segmentations to provide a multi-modal atlas with 107 anatomical regions; a symmetric version was also created to facilitate studies of laterality. Volumes of each region of interest were measured to provide reference data from normal subjects. Because this atlas is generated from step-wise propagation of adult labels through intermediate time points in childhood, it may serve as a useful starting point for modelling brain growth during development.

  3. LHCb: LHCb Software and Conditions Database Cross-Compatibility Tracking: a Graph-Theory Approach

    CERN Multimedia

    Cattaneo, M; Shapoval, I

    2012-01-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time-dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis). The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process. It means that compatibility between a CondDB state and an LHCb application state may not be preserved across different database and application generations. Moreover, a CondDB state by itself belongs to a complex three-dimensional phase space which evolves according to certain CondDB self-compatibility criteria, so it is sometimes difficult even to determine a self-consistent CondDB state. These compatibility issues may lead to various kinds of problems in the LHCb production, varying from unexpected application crashes to incorrect data processing results. Thus, there is a need for defining a well-established set of compatibility criteria between the entities mentioned above, together with developing a compatibil...
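    A minimal sketch of the graph view of this problem, assuming (purely for illustration) that validated pairs of application releases and CondDB tags are recorded as edges; the version strings are invented:

        import networkx as nx

        G = nx.Graph()
        # Each edge records a validated (application release, CondDB tag) pair.
        G.add_edge(("app", "v35r1"), ("conddb", "cond-2012a"))
        G.add_edge(("app", "v35r1"), ("conddb", "cond-2012b"))
        G.add_edge(("app", "v36r0"), ("conddb", "cond-2012b"))

        def compatible(release, tag):
            return G.has_edge(("app", release), ("conddb", tag))

        print(compatible("v36r0", "cond-2012a"))   # False: never validated together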

  4. LHCb Software and Conditions Database Cross-Compatibility Tracking System: a Graph-Theory Approach

    CERN Document Server

    Cattaneo, M; Shapoval, I

    2012-01-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time-dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis). The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process. It means that compatibility between a CondDB state and an LHCb application state may not be preserved across different database and application generations. Moreover, a CondDB state by itself belongs to a complex three-dimensional phase space which evolves according to certain CondDB self-compatibility criteria, so it is sometimes difficult even to determine a self-consistent CondDB state. These compatibility issues may lead to various kinds of problems in the LHCb production, varying from unexpected application crashes to incorrect data processing results. Thus, there is a need for defining a well-established set of compatibility criteria between the entities mentioned above, together with developing a compatibil...

  5. Evolution of Database Replication Technologies for WLCG

    OpenAIRE

    Baranowski, Zbigniew; Pardavila, Lorena Lobato; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-01-01

    In this article we summarize several years of experience with database replication technologies used at WLCG and we provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report in this article on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 databas...

  6. ATLAS-AWS

    International Nuclear Information System (INIS)

    Gehrcke, Jan-Philip; Stonjek, Stefan; Kluth, Stefan

    2010-01-01

    We show how the ATLAS offline software is ported onto the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform, Scientific Linux 4 (SL4). Then an instance of the SL4 AMI is started on EC2 and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring and stopping jobs, and retrieving job output from S3 is controlled from a client machine using Python scripts implementing the Amazon EC2/S3 API via the boto library, working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
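    A present-day boto3 analogue of this workflow (the paper used the older boto API); the AMI id, instance type and bucket name are placeholders:

        import boto3

        ec2 = boto3.resource("ec2")
        instances = ec2.create_instances(
            ImageId="ami-xxxxxxxx",        # image with the ATLAS kit pre-installed
            InstanceType="m5.large",
            MinCount=1, MaxCount=1)
        print("launched", instances[0].id)

        # After the job finishes, export its output to S3 before termination.
        s3 = boto3.client("s3")
        s3.upload_file("job_output.root", "my-atlas-output-bucket",
                       "outputs/job_output.root")
        instances[0].terminate()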

  7. EnviroAtlas

    Data.gov (United States)

    City and County of Durham, North Carolina — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  8. The Error Reporting in the ATLAS TDAQ system

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2014-01-01

    The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about errors happening at run-time to a place where they can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting Service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, or in distributed middle-ware, which can transport it to an expert system or dis...
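    This is not the real ERS API, but the core idea, that the destination of a report is decided by run-time configuration rather than by the reporting application, can be sketched with the Python standard library:

        import logging, os

        def make_reporter(name="tdaq.app"):
            log = logging.getLogger(name)
            log.setLevel(logging.INFO)
            # Destination chosen by the environment, not by the caller.
            if os.environ.get("ERS_DESTINATION") == "file":
                log.addHandler(logging.FileHandler("errors.log"))
            else:
                log.addHandler(logging.StreamHandler())  # e.g. read by a monitor
            return log

        reporter = make_reporter()
        reporter.error("readout buffer overflow in module 42")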

  9. The Error Reporting in the ATLAS TDAQ System

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2015-01-01

    The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about errors happening at run-time to a place where they can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting Service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, or in distributed middle-ware, which can transport it to an expert system or dis...

  10. ATLAS EventIndex General Dataflow and Monitoring Infrastructure

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration

    2016-01-01

    The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing it in a central Hadoop infrastructure at CERN. A subset of this information is copied to an Oracle relational database for fast dataset discovery, event picking, cross-checks with other ATLAS systems and checks for event duplication. The system design and its optimization serve event picking for requests ranging from a few events up to scales of tens of thousands of events, and in addition data consistency checks are performed for large production campaigns. Detecting duplicate events within the scope of physics collections has recently arisen as an important use case. This paper describes the general architecture of the project and the data flow and operation issues, which are addressed by recent developments to improve the throughput of the overall system. In this direction, the data collection system is reducing the usage of the messaging infrastructure to overcome t...
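    A toy version of the duplicate-event check, assuming events are keyed by (run number, event number); real EventIndex records carry more fields than this:

        from collections import Counter

        def find_duplicates(events):
            counts = Counter((e["run"], e["event"]) for e in events)
            return sorted(k for k, n in counts.items() if n > 1)

        events = [{"run": 266904, "event": 101},
                  {"run": 266904, "event": 102},
                  {"run": 266904, "event": 101}]   # duplicates the first entry
        print(find_duplicates(events))             # [(266904, 101)]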

  11. ATLAS EventIndex general dataflow and monitoring infrastructure

    CERN Document Server

    AUTHOR|(SzGeCERN)638886; The ATLAS collaboration; Barberis, Dario; Favareto, Andrea; Garcia Montoro, Carlos; Gonzalez de la Hoz, Santiago; Hrivnac, Julius; Prokoshin, Fedor; Salt, Jose; Sanchez, Javier; Toebbicke, Rainer; Yuan, Ruijun

    2017-01-01

    The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing it in a central Hadoop infrastructure at CERN. A subset of this information is copied to an Oracle relational database for fast dataset discovery, event picking, cross-checks with other ATLAS systems and checks for event duplication. The system design and its optimization serve event picking for requests ranging from a few events up to scales of tens of thousands of events, and in addition data consistency checks are performed for large production campaigns. Detecting duplicate events within the scope of physics collections has recently arisen as an important use case. This paper describes the general architecture of the project and the data flow and operation issues, which are addressed by recent developments to improve the throughput of the overall system. In this direction, the data collection system is reducing the usage of the messaging infrastructure to overcome th...

  12. Dear ATLAS colleagues,

    CERN Multimedia

    PH Department

    2008-01-01

    We are collecting old pairs of glasses to take out to Mali, where they can be re-used by people there. The price of a pair of glasses can often exceed 3 months' salary, so they are prohibitively expensive for many people. If you have any old spectacles you can donate, please put them in the special box in the ATLAS secretariat, bldg. 40-4-D01, before the Christmas closure on 19 December so we can take them with us when we leave for Africa at the end of the month. (More details in the ATLAS e-news edition of 29 September 2008: http://atlas-service-enews.web.cern.ch/atlas-service-enews/news/news_mali.php) Many thanks! Katharine Leney, co-driver of the ATLAS car on the Charity Run to Mali

  13. Transcriptome database resource and gene expression atlas for the rose

    Science.gov (United States)

    2012-01-01

    Background: For centuries roses have been selected based on a number of traits. Little information exists on the genetic and molecular basis that contributes to these traits, mainly because information on expressed genes for this economically important ornamental plant is scarce. Results: Here, we used a combination of Illumina and 454 sequencing technologies to generate information on Rosa sp. transcripts using RNA from various tissues and in response to biotic and abiotic stresses. A total of 80714 transcript clusters were identified and 76611 peptides have been predicted, among which 20997 have been clustered into 13900 protein families. BLASTp hits in closely related Rosaceae species revealed that about half of the predicted peptides in the strawberry and peach genomes have orthologs in the Rosa dataset. Digital expression was obtained using RNA samples from organs at different development stages and under different stress conditions. qPCR validated the digital expression data for a selection of 23 genes with high or low expression levels. Comparative gene expression analyses between the different tissues and organs allowed the identification of clusters that are highly enriched in given tissues or under particular conditions, demonstrating the usefulness of the digital gene expression analysis. A web interface, ROSAseq, was created that allows data interrogation by BLAST, subsequent analysis of DNA clusters and access to thorough transcript annotation including best BLAST matches on Fragaria vesca, Prunus persica and Arabidopsis. The rose peptides dataset was used to create the ROSAcyc resource pathway database that allows access to the putative genes and enzymatic pathways. Conclusions: The study provides useful information on Rosa expressed genes, with thorough annotation and an overview of expression patterns for transcripts with good accuracy. PMID:23164410

  14. ATLAS Distributed Computing Operations: Experience and improvements after 2 full years of data-taking

    International Nuclear Information System (INIS)

    Jézéquel, S; Stewart, G

    2012-01-01

    This paper summarizes operational experience and improvements in ATLAS computing infrastructure in 2010 and 2011. ATLAS has had 2 periods of data taking, with many more events recorded in 2011 than in 2010. It ran 3 major reprocessing campaigns. The activity in 2011 was similar to 2010, but scalability issues had to be addressed due to the increase in luminosity and trigger rate. Based on improved monitoring of ATLAS Grid computing, the evolution of computing activities (data/group production, their distribution and grid analysis) over time is presented. The main changes in the implementation of the computing model that will be shown are: the optimization of data distribution over the Grid, according to effective transfer rate and site readiness for analysis; the progressive dismantling of the cloud model, for data distribution and data processing; software installation migration to cvmfs; changing database access to a Frontier/squid infrastructure.

  15. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S

    2005-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS:
    - Atlas Software Week Plenary, 6-10 December 2004
    - North American ATLAS Physics Workshop (Tucson), 20-21 December 2004 (17 talks)
    - Physics Analysis Tools Tutorial (Tucson), 19 December 2004
    - Full Chain Tutorial, 21 September 2004
    - ATLAS Plenary Sessions, 17-18 February 2005 (17 talks)
    Coming soon:
    - ATLAS Tutorial on Electroweak Physics, 14 Feb. 2005
    - Software Workshop, 21-22 February 2005
    Click here to browse WLAP for all ATLAS lectures.

  16. Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.

    Science.gov (United States)

    Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku

    2017-07-01

    Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%.
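    For reference, the two overlap measures quoted above are computed as follows; note that Dice = 2J/(1+J) holds per case, so the quoted averages (66.3% and 78.5%) need not satisfy the identity exactly:

        import numpy as np

        def jaccard_and_dice(a, b):
            """Overlap between two binary segmentations a and b."""
            a, b = np.asarray(a, bool), np.asarray(b, bool)
            inter = np.logical_and(a, b).sum()
            union = np.logical_or(a, b).sum()
            return inter / union, 2 * inter / (a.sum() + b.sum())

        a = np.array([1, 1, 1, 0, 0], bool)
        b = np.array([0, 1, 1, 1, 0], bool)
        print(jaccard_and_dice(a, b))   # (0.5, 0.666...)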

  17. ATLAS people can run!

    CERN Multimedia

    Claudia Marcelloni de Oliveira; Pauline Gagnon

    It must be all the training we are getting every day, running around trying to get everything ready for the start of the LHC next year. This year, the ATLAS runners were in fine form and came out in force. Nine ATLAS teams signed up for the 37th Annual CERN Relay Race, with six runners per team. Under a blasting sun on Wednesday 23rd May 2007, each team covered distances of 1000m, 800m, 800m, 500m, 500m and 300m, taking the runners around the whole Meyrin site, hills included. A small reception took place in the ATLAS secretariat a week later to award the ATLAS Cup to the best ATLAS team. For the details of this complex calculation, which takes into account the age of each runner, their gender and the color of their shoes, see the July 2006 issue of ATLAS e-news. The ATLAS Running Athena Team, the only all-women team enrolled this year, won the much coveted ATLAS Cup for the second year in a row. In fact, they are so good that Peter Schmid and Patrick Fassnacht are wondering about reducing the women's bonus in...

  18. New Physics at HL-LHC with ATLAS

    CERN Document Server

    Rosten, Rachel; The ATLAS collaboration

    2018-01-01

    The prospects for new physics at the luminosity upgrade of the LHC, the HL-LHC, with a data set equivalent to 3000 fb-1 simulated in the upgraded ATLAS detector, are presented and discussed. Benchmark studies are presented to show how the sensitivity improves in the future high-luminosity LHC runs. Prospects for searches for new heavy bosons and dark matter candidates in 14 TeV pp collisions are explored, as well as the sensitivity of searches for anomalous top decays. For all these studies, a parameterised simulation of the upgraded ATLAS detector response is used, taking into account the expected pileup conditions.

  19. High-performance scalable Information Service for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Hauser, R

    2012-01-01

    The ATLAS experiment is operated by a highly distributed computing system which constantly produces a large amount of status information that is used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data the Information Service (IS) facility has been developed in the scope of the ATLAS TDAQ project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data taking session the IS handles about a hundred gigabytes of information which is constantly updated with the update interval varying from a second to a few tens of seconds. IS ...

  20. Recent ATLAS Articles on WLAP

    CERN Multimedia

    J. Herr

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS:
    - Atlas Physics Workshop, 6-11 June 2005
    - June 2005 ATLAS Week Plenary Session
    Click here to browse WLAP for all ATLAS lectures.

  1. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    Science.gov (United States)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  2. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    International Nuclear Information System (INIS)

    Campana, S

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  3. HappyFace-progress and future development for the ATLAS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Nadal, Jordi; Quadt, Arnulf; Rzehorz, Gerhard [II. Physikalisches Institut, Georg-August-Universitat (Germany); Collaboration: ATLAS-Collaboration

    2015-07-01

    Nowadays, the HappyFace project aggregates, processes and stores information from different grid monitoring resources as well as from the grid system itself in a common database, and displays status information through a single interface. The new implementation and architecture of HappyFace, the so-called grid-enabled HappyFace, provides direct access to the grid infrastructure. Different grid-enabled modules, to view datasets of the ATLAS Distributed Data Management system (DDM), to connect to the Ganga job monitoring system, and to check the performance of grid transfers among the grid sites, have been implemented. The new HappyFace system has been successfully integrated. It now displays both the status of the monitoring resources and, through direct access, that of the grid user applications and the grid collective services in the ATLAS computing system.

  4. New format for ATLAS e-news

    CERN Multimedia

    Pauline Gagnon

    ATLAS e-news got a new look! As of November 30, 2007, we have a new format for ATLAS e-news. Please go to: http://atlas-service-enews.web.cern.ch/atlas-service-enews/index.html . ATLAS e-news will now be published on a weekly basis. If you are not an ATLAS collaboration member but still want to know how the ATLAS experiment is doing, we will soon have a version of ATLAS e-news intended for the general public. Information will be sent out in due time.

  5. Structured storage in ATLAS Distributed Data Management: use cases and experiences

    International Nuclear Information System (INIS)

    Lassnig, Mario; Garonne, Vincent; Beermann, Thomas; Dimitrov, Gancho; Canali, Luca; Molfetas, Angelos; Zang, Donal; Azzurra Chinzer, Lisa

    2012-01-01

    The distributed data management system of the high-energy physics experiment ATLAS has a critical dependency on the Oracle Relational Database Management System. Recently, however, the increased appearance of data warehouse-like workloads in the experiment has put considerable and increasing strain on the Oracle database. In particular, the analysis of archived data and the aggregation of data for summary purposes have been especially demanding. For this reason, structured storage systems were evaluated to offload the Oracle database and to handle the processing of data in a non-transactional way. These include distributed file systems like HDFS that support parallel execution of computational tasks on distributed data, as well as non-relational databases like HBase, Cassandra, or MongoDB. In this paper, the most important analysis and aggregation use cases of the data management system are presented, together with how structured storage systems were established to process them.
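
    To make the offloading idea concrete, here is a hedged sketch of a warehouse-style aggregation running against a non-relational store (MongoDB via pymongo); the collection and field names are hypothetical, not the actual DDM schema.

      # Hedged sketch: the kind of summary aggregation that strained the RDBMS,
      # run instead against a document store (field names are hypothetical).
      from pymongo import MongoClient

      client = MongoClient("mongodb://localhost:27017")
      traces = client["ddm"]["traces"]  # archived trace documents

      # Total transferred bytes and file counts per site, sorted by volume.
      pipeline = [
          {"$match": {"event": "transfer_done"}},
          {"$group": {
              "_id": "$remote_site",
              "total_bytes": {"$sum": "$filesize"},
              "n_files": {"$sum": 1},
          }},
          {"$sort": {"total_bytes": -1}},
      ]
      for row in traces.aggregate(pipeline):
          print(row["_id"], row["total_bytes"], row["n_files"])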

  6. Commissioning of the ATLAS Inner Detector with cosmic rays

    CERN Document Server

    Klinkby, E

    2008-01-01

    The tracking of the ATLAS experiment is performed by the Inner Detector, which has recently been installed in its final position. Various parts of the detector have been commissioned using cosmic rays, both on the surface and in the ATLAS pit. The different calibration, alignment and monitoring methods have been tested, as well as the handling of the conditions data. Both real and simulated cosmic events are reconstructed using the full ATLAS software chain, with only minor modifications to account for the lack of timing of cosmic events and the absence of a magnetic field, and to remove any vertex requirements in the track fitters. Results so far show that the Inner Detector performs within expectations with respect to noise, hit efficiency and track resolution.

  7. ATLAS Virtual Visits bringing the world into the ATLAS control room

    CERN Document Server

    AUTHOR|(CDS)2051192; The ATLAS collaboration; Yacoob, Sahal

    2016-01-01

    ATLAS Virtual Visits is a project initiated in 2011 for the Education & Outreach program of the ATLAS Experiment at CERN. Its goal is to promote public appreciation of the LHC physics program and of particle physics in general, through direct dialogue between ATLAS physicists and remote audiences. A Virtual Visit is an IP-based videoconference, coupled with a public webcast and video recording, between ATLAS physicists and remote locations around the world, which typically include high school or university classrooms, Masterclasses, science fairs, or other special events, usually hosted by collaboration members. Over the past two years, more than 10,000 people, from all of the world’s continents, have actively participated in ATLAS Virtual Visits, with many more enjoying the experience through the publicly available webcasts and recordings. We present an overview of our experience and discuss potential developments for the future.

  8. The ATLAS DDM Tracer monitoring framework

    CERN Document Server

    ZANG, D; The ATLAS collaboration; BARISITS, M; LASSNIG, M; Andrew STEWART, G; MOLFETAS, A; BEERMANN, T

    2012-01-01

    The DDM Tracer Service traces and monitors ATLAS file operations on the Worldwide LHC Computing Grid. The volume of traces has increased significantly since the service started in 2009: there are now about 5 million trace messages every day, with peaks of more than 250 Hz and peak rates continuing to climb, which poses a serious challenge to the current service structure. Analysis of large datasets based on on-demand queries to the relational database management system (RDBMS), i.e. Oracle, can be problematic and can significantly affect the database's performance. Consequently, we have investigated new high-availability technologies such as messaging infrastructure, specifically ActiveMQ, and key-value stores. The advantages of key-value store technology are that it is distributed and highly scalable, and its write performance is usually much better than that of an RDBMS, all of which is very useful for the Tracer service. Indexes and distributed counters have also been tested to improve...
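
    A minimal sketch of the messaging leg of such a design, assuming an ActiveMQ broker reachable over STOMP (stomp.py client); the broker address, queue name and message fields are invented for illustration.

      # Hedged sketch: publish one file-operation trace to ActiveMQ over STOMP.
      # The broker decouples producers from the key-value store consumers,
      # absorbing peaks well above the average trace rate.
      import json
      import time
      import stomp

      conn = stomp.Connection([("mq.example.org", 61613)])
      conn.connect("tracer", "secret", wait=True)

      trace = {
          "timestamp": time.time(),
          "event": "get",                    # file operation type
          "scope": "data12_8TeV",
          "filename": "AOD.12345._000001.pool.root",
          "site": "CERN-PROD",
      }
      conn.send(destination="/queue/atlas.ddm.tracer", body=json.dumps(trace))
      conn.disconnect()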

  9. Real-time configuration changes of the ATLAS High Level Trigger

    CERN Document Server

    Winklmeier, F

    2010-01-01

    The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage trigger and event filter running on a farm of commodity PC hardware. Currently the system consists of about 850 processing nodes and will be extended incrementally to about 2000 nodes, following the expected increase in LHC luminosity. The event selection within the HLT applications is carried out by specialized reconstruction algorithms. The selection can be controlled via properties that are stored in a central database and retrieved at the startup of the HLT processes, which then usually run continuously for many hours. To be able to respond to changes in the LHC beam conditions, it is essential that the algorithms can be re-configured without disrupting data taking, while ensuring a consistent and reproducible configuration across the entire HLT farm. The technique...
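
    The abstract is cut off before the actual technique is described; purely as a loose illustration of database-driven re-configuration, the sketch below re-reads an algorithm property from a central store at a safe point, so a selection can change without restarting the process. All names and values are invented.

      # Loose illustration (not the actual HLT mechanism): a selection whose
      # threshold is a property read from a central configuration database.
      import sqlite3

      def load_property(db_path, key):
          """Read one algorithm property from the central configuration store."""
          con = sqlite3.connect(db_path)
          (value,) = con.execute(
              "SELECT value FROM properties WHERE key = ?", (key,)).fetchone()
          con.close()
          return value

      class EtCut:
          """Trivial selection algorithm with a database-backed threshold."""
          def __init__(self, threshold_gev):
              self.threshold_gev = threshold_gev

          def accept(self, et_gev):
              return et_gev > self.threshold_gev

      # Demo: prepare a configuration store, then (re)configure at a safe point.
      con = sqlite3.connect("hlt_config.db")
      con.execute("CREATE TABLE IF NOT EXISTS properties (key TEXT PRIMARY KEY, value REAL)")
      con.execute("INSERT OR REPLACE INTO properties VALUES ('EF_e20.EtCut', 20.0)")
      con.commit()
      con.close()

      algo = EtCut(load_property("hlt_config.db", "EF_e20.EtCut"))
      print(algo.accept(25.0))   # True: a 25 GeV candidate passes the 20 GeV cut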

  10. Supporting ATLAS

    CERN Multimedia

    maximilien brice

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator.

  11. The ATLAS Electron and Photon Trigger

    CERN Document Server

    Jones, Samuel David; The ATLAS collaboration

    2017-01-01

    Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for signal selection in a wide variety of ATLAS physics analyses, both to study Standard Model processes and to search for new phenomena. Final states including leptons and photons played, for example, an important role in the discovery and measurement of the Higgs boson. Dedicated triggers are also used to collect data for calibration, efficiency and fake-rate measurements. The ATLAS trigger system is divided into a hardware-based Level-1 trigger and a software-based high-level trigger, both of which were upgraded during the LHC shutdown in preparation for Run-2 operation. To cope with the increasing luminosity and more challenging pile-up conditions at a center-of-mass energy of 13 TeV, the trigger selections at each level are optimized to control the rates and keep efficiencies high. To achieve this goal, multivariate analysis techniques are used. The ATLAS electron and photon triggers and their performance with Run 2 dat...

  12. The ATLAS Electron and Photon Trigger

    CERN Document Server

    Jones, Samuel David; The ATLAS collaboration

    2018-01-01

    Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for signal selection in a wide variety of ATLAS physics analyses, both to study Standard Model processes and to search for new phenomena. Final states including leptons and photons played, for example, an important role in the discovery and measurement of the Higgs boson. Dedicated triggers are also used to collect data for calibration, efficiency and fake-rate measurements. The ATLAS trigger system is divided into a hardware-based Level-1 trigger and a software-based high-level trigger, both of which were upgraded during the LHC shutdown in preparation for Run-2 operation. To cope with the increasing luminosity and more challenging pile-up conditions at a center-of-mass energy of 13 TeV, the trigger selections at each level are optimized to control the rates and keep efficiencies high. To achieve this goal, multivariate analysis techniques are used. The ATLAS electron and photon triggers and their performance with Run 2 dat...

  13. Making proteomics data accessible and reusable: current state of proteomics databases and repositories.

    Science.gov (United States)

    Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-03-01

    Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, the Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, the PeptideAtlas SRM Experiment Library (PASSEL), the Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has recently been developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we review each of the major proteomics resources independently, along with some tools that enable the integration, mining and reuse of the data. We also discuss some of the major challenges and current pitfalls in the integration and sharing of the data.

  14. Initial Measurements On Pixel Detector Modules For The ATLAS Upgrades

    CERN Document Server

    Gallrapp, C; The ATLAS collaboration

    2011-01-01

    Demanding conditions in terms of peak and integrated luminosity at the Large Hadron Collider (LHC) will push the ATLAS Pixel detector to its performance limits. Silicon planar, silicon 3D and diamond pixel sensors are three possible sensor technologies which could be implemented in the upcoming pixel detector upgrades of the ATLAS experiment. Measurements of the IV behavior and measurements with radioactive Americium-241 and Strontium-90 sources are used to characterize the sensor properties and to understand the interaction between the ATLAS FE-I4 front-end chip and the sensor. Comparisons of results from before and after irradiation, which give a first impression of the charge collection properties of the different sensor technologies, are presented.

  15. CBS Genome Atlas Database: a dynamic storage for bioinformatic results and sequence data

    DEFF Research Database (Denmark)

    Hallin, Peter Fischer; Ussery, David

    2004-01-01

    … and frequent addition of new models are factors that require a dynamic database layout. Using basic tools like the GNU Make system, csh, Perl and MySQL, we have created a flexible database environment for storing and maintaining such results for a collection of complete microbial genomes. Currently, these results count to more than 220 pieces of information. The backbone of this solution consists of a program package written in Perl, which enables administrators to synchronize and update the database content. The MySQL database has been connected to the CBS web-server via PHP4, to present dynamic web content for users outside the center. This solution is tightly fitted to existing server infrastructure, and the solutions proposed here can perhaps serve as a template for other research groups to solve database issues.

  16. Report to users of Atlas

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1996-06-01

    This report contains the following topics: Status of the ATLAS Accelerator; Highlights of Recent Research at ATLAS; Program Advisory Committee; ATLAS User Group Executive Committee; FMA Information Available On The World Wide Web; Conference on Nuclear Structure at the Limits; and Workshop on Experiments with Gammasphere at ATLAS

  17. High resolution heat atlases for demand and supply mapping

    Directory of Open Access Journals (Sweden)

    Bernd Möller

    2014-02-01

    Significant reductions of heat demand, low-carbon and renewable energy sources, and district heating are key elements in 100% renewable energy systems. Appraisal of district heating along with energy-efficient buildings and individual heat supply requires a geographical representation of heat demand, energy efficiency and energy supply. The present paper describes a Heat Atlas built around a spatial database using geographical information systems (GIS). The present atlas allows for per-building calculations of the potentials and costs of energy savings, connectivity to existing district heat, and current heat supply and demand. For the entire building mass, a conclusive link is established between the built environment and its heat supply. The expansion of district heating; the interconnection of distributed district heating systems; or the question of whether to invest in ultra-efficient buildings with individual supply, or in collective heating using renewable energy for heating the current building stock, can be based on improved data.

  18. Stroke atlas: a 3D interactive tool correlating cerebrovascular pathology with underlying neuroanatomy and resulting neurological deficits.

    Science.gov (United States)

    Nowinski, W L; Chua, B C

    2013-02-01

    Understanding stroke-related pathology with the underlying neuroanatomy and the resulting neurological deficits is critical in education and clinical practice. Moreover, communicating a stroke situation to a patient or family is difficult because of the complicated neuroanatomy and pathology involved. For this purpose, we created a stroke atlas. The atlas correlates localized cerebrovascular pathology with both the resulting disorder and the surrounding neuroanatomy. It also provides a 3D display of both labeled pathology and freely composed neuroanatomy. Disorders are described in terms of resulting signs, symptoms and syndromes, and they have been compiled for ischemic stroke, hemorrhagic stroke, and cerebral aneurysms. The neuroanatomy, subdivided into 2,000 components including 1,300 vessels, contains the cerebrum, cerebellum, brainstem, spinal cord, white matter, deep grey nuclei, arteries, veins, dural sinuses, cranial nerves and tracts. A computer application was developed comprising: 1) an anatomy browser with the normal brain atlas (created earlier); 2) a simulator of infarcts/hematomas/aneurysms/stenoses; 3) tools to label pathology; 4) a cerebrovascular pathology database with lesions and disorders, and the resulting signs, symptoms and/or syndromes. The pathology database is populated with 70 lesions compiled from textbooks. The initial view of each pathological site is preset in terms of lesion location, size, surrounding surface and sectional neuroanatomy, and lesion and neuroanatomy labeling. The atlas is useful for medical students, residents, nurses, general practitioners, and stroke clinicians, neuroradiologists and neurologists. It may serve as an aid in patient-doctor communication, helping a stroke clinician explain the situation to a patient or family. It also enables a layman to become familiarized with normal brain anatomy and understand what happens in stroke.

  19. Heavy Ion Physics Prospects with the ATLAS Detector at the LHC

    CERN Document Server

    Grau, N

    2008-01-01

    The next great energy frontier in relativistic heavy ion collisions is quickly approaching with the completion of the Large Hadron Collider, and the ATLAS experiment is poised to make important contributions to the understanding of QCD matter under extreme conditions. While designed for high-pT measurements in high-energy p+p collisions, the detector is well suited to study many aspects of heavy ion collisions, from bulk phenomena to high-pT and heavy flavor physics. With its large and finely segmented electromagnetic and hadronic calorimeters, the ATLAS detector excels in measurements of photons and jets, observables of great interest at the LHC. In this talk, we highlight the performance of the ATLAS detector for Pb+Pb collisions at the LHC, with special emphasis on a key feature of the ATLAS physics program: jet and direct photon measurements.

  20. An integrated overview of metadata in ATLAS

    International Nuclear Information System (INIS)

    Gallas, E J; Malon, D; Hawkings, R J; Albrand, S; Torrence, E

    2010-01-01

    Metadata (data about data) arise in many contexts, from many diverse sources, and at many levels in ATLAS. Familiar examples include run-level, luminosity-block-level, and event-level metadata, and, related to processing and organization, dataset-level and file-level metadata, but these categories are neither exhaustive nor orthogonal. Some metadata are known a priori, in advance of data taking or simulation; other metadata are known only after processing, and occasionally, quite late (e.g., detector status or quality updates that may appear after initial reconstruction is complete). Metadata that may seem relevant only internally to the distributed computing infrastructure under ordinary conditions may become relevant to physics analysis under error conditions ('What can I discover about data I failed to process?'). This talk provides an overview of metadata and metadata handling in ATLAS, and describes ongoing work to deliver integrated metadata services in support of physics analysis.

  1. Preliminary Analysis Using Multi-atlas Labeling Algorithms for Tracing Longitudinal Change

    Directory of Open Access Journals (Sweden)

    Eun Young Kim

    2015-07-01

    Multicenter longitudinal neuroimaging has great potential to provide efficient and consistent biomarkers for research on neurodegenerative diseases and aging. In rare disease studies it is of primary importance to have a reliable tool that performs consistently for data from many different collection sites, in order to increase study power. A multi-atlas labeling algorithm is a powerful brain image segmentation approach that is becoming increasingly popular in image processing. The present study examined the performance of multi-atlas labeling tools for subcortical identification using two types of in-vivo image database: Traveling Human Phantom and PREDICT-HD. We compared the accuracy (Dice Similarity Coefficient, DSC, and intraclass correlation, ICC), multicenter reliability (Coefficient of Variation, CV), and longitudinal reliability (volume trajectory smoothness and Akaike Information Criterion, AIC) of three automated segmentation approaches: two multi-atlas labeling tools, MABMIS and MALF, and a machine-learning-based tool, BRAINSCut. In general, MALF showed the best performance (higher DSC and ICC, lower CV and AIC, and smoother trajectories), with a couple of exceptions. First, for the accumbens, where BRAINSCut showed higher reliability, it is still premature to discuss reliability levels since the validity of the results is in doubt (DSC < 0.7, ICC < 0.7). For the caudate, BRAINSCut presented slightly better accuracy while MALF showed a significantly smoother longitudinal trajectory. We discuss the advantages and limitations behind these performance variations and conclude that improved segmentation quality can be achieved using multi-atlas labeling methods. While multi-atlas labeling methods are likely to help improve overall segmentation quality, caution has to be taken when choosing an approach, as our results suggest that segmentation outcome can vary depending on the research interest.
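
    For reference, the Dice Similarity Coefficient quoted above is DSC = 2|A ∩ B| / (|A| + |B|) for two label masks A and B; a minimal implementation for binary masks follows.

      # Dice Similarity Coefficient for two binary label masks (numpy arrays).
      import numpy as np

      def dice(a, b):
          a = np.asarray(a, dtype=bool)
          b = np.asarray(b, dtype=bool)
          denom = a.sum() + b.sum()
          if denom == 0:
              return 1.0                    # both masks empty: perfect agreement
          return 2.0 * np.logical_and(a, b).sum() / denom

      auto = np.array([[1, 1, 0], [0, 1, 0]])     # automated segmentation
      manual = np.array([[1, 1, 0], [0, 0, 0]])   # manual gold standard
      print(f"DSC = {dice(auto, manual):.3f}")    # DSC = 0.800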

  2. The ATLAS Analysis Model

    CERN Multimedia

    Amir Farbin

    The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses, have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model.

    Event Data Model: The ATLAS Event Data Model (EDM) consists of several levels of detail, each targeted at a specific set of tasks. For example, the Event Summary Data (ESD) stores calorimeter cells and tracking system hits, thereby permitting many calibration and alignment tasks, but will only be accessible at particular computing sites with potentially large latency. In contrast, the Analysis...

  3. Silicon strip detectors for the ATLAS HL-LHC upgrade

    CERN Document Server

    Gonzalez Sevilla, S; The ATLAS collaboration

    2011-01-01

    The LHC upgrade is foreseen to increase the ATLAS design luminosity by a factor of ten, implying the need to build a new tracker suited to the harsh HL-LHC conditions in terms of particle rates and radiation doses. In order to cope with the increase in pile-up backgrounds at the higher luminosity, an all-silicon detector is being designed. To successfully face the increased radiation dose, a new generation of extremely radiation-hard silicon detectors is being developed. We give an overview of the ATLAS tracker upgrade project, focusing in particular on the crucial innermost silicon strip layers. Results from a wide range of irradiated silicon detectors for the strip region of the future ATLAS tracker are presented. Layout concepts for lightweight yet mechanically very rigid detector modules with a high degree of service integration are shown.

  4. A quantitative magnetic resonance histology atlas of postnatal rat brain development with regional estimates of growth and variability.

    Science.gov (United States)

    Calabrese, Evan; Badea, Alexandra; Watson, Charles; Johnson, G Allan

    2013-05-01

    There has been growing interest in the role of postnatal brain development in the etiology of several neurologic diseases. The rat has long been recognized as a powerful model system for studying neuropathology and the safety of pharmacologic treatments. However, the complex spatiotemporal changes that occur during rat neurodevelopment remain to be elucidated. This work establishes the first magnetic resonance histology (MRH) atlas of the developing rat brain, with an emphasis on quantitation. The atlas comprises five specimens at each of nine time points, imaged with eight distinct MR contrasts and segmented into 26 developmentally defined brain regions. The atlas was used to establish a timeline of morphometric changes and variability throughout neurodevelopment and represents a quantitative database of rat neurodevelopment for characterizing rat models of human neurologic disease.

  5. Functional tests of a prototype for the CMS-ATLAS common non-event data handling framework

    CERN Document Server

    Formica, Andrea; The ATLAS collaboration

    2016-01-01

    Since 2014 the ATLAS and CMS experiments have shared a common vision for the Conditions Database infrastructure required for the forthcoming LHC runs. The large commonality in the use cases to be satisfied has allowed agreement on an overall design solution meeting the requirements of both experiments. A first prototype implementing these solutions was completed in 2015 and made available to both experiments. The prototype is based on a web service implementing a REST API with a set of functions for the management of conditions data. The objects which constitute the elements of the data model are seen as resources on which CRUD operations can be performed via standard HTTP methods. The choice to insert a REST API in the architecture has several advantages: 1) the conditions data are exchanged in a neutral format (JSON or XML), allowing them to be processed by different technologies in different frameworks; 2) the client is agnostic with respect to the underlying technology adopted f...
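
    A hedged sketch of the REST style described above, using the Python requests library; the endpoint paths and payload fields are hypothetical, not the prototype's actual API.

      # Hedged sketch: CRUD on conditions resources via standard HTTP methods,
      # exchanging data in a neutral JSON format (endpoint names invented).
      import requests

      BASE = "https://conddb.example.org/api"

      # Create a new interval-of-validity payload (POST = Create).
      iov = {"tag": "pixel-align-v3", "since": 310000, "until": 310999,
             "payload": {"dx_mm": 0.012, "dy_mm": -0.003}}
      r = requests.post(f"{BASE}/iovs", json=iov)
      r.raise_for_status()

      # Read it back (GET = Read); any framework can parse the JSON reply.
      r = requests.get(f"{BASE}/iovs",
                       params={"tag": "pixel-align-v3", "run": 310042})
      print(r.json())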

  6. Event filter monitoring with the ATLAS tile calorimeter

    CERN Document Server

    Fiorini, L

    2008-01-01

    The ATLAS Tile Calorimeter detector is presently involved in an intense phase of subsystem integration and commissioning with muons of cosmic origin. Various monitoring programs have been developed at different levels of the data flow to tune the set-up of the detector running conditions and to provide a fast and reliable assessment of the data quality already during data taking. This paper focuses on the monitoring system integrated in the highest level of the ATLAS trigger system, the Event Filter, and its deployment during the Tile Calorimeter commissioning with cosmic-ray muons. The key feature of Event Filter monitoring is the capability of performing detector and data quality control on complete physics events at the trigger level, hence before events are stored on disk. In the ATLAS online data flow, this is the only monitoring system capable of giving comprehensive event-quality feedback.

  7. Recent Improvements in the ATLAS PanDA Pilot

    International Nuclear Information System (INIS)

    Nilsson, P; De, K; Bejar, J Caballero; Maeno, T; Potekhin, M; Wenaus, T; Compostella, G; Contreras, C; Dos Santos, T

    2012-01-01

    The Production and Distributed Analysis system (PanDA) in the ATLAS experiment uses pilots to execute submitted jobs on the worker nodes. The pilots are designed to deal with different runtime conditions and failure scenarios, and support many storage systems. This talk will give a brief overview of the PanDA pilot system and will present major features and recent improvements including CernVM File System integration, the job retry mechanism, advanced job monitoring including JEM technology, and validation of new pilot code using the HammerCloud stress-testing system. PanDA is used for all ATLAS distributed production and is the primary system for distributed analysis. It is currently used at over 130 sites worldwide. We analyze the performance of the pilot system in processing LHC data on the OSG, EGI and Nordugrid infrastructures used by ATLAS, and describe plans for its further evolution.
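
    As an illustration of the job retry mechanism mentioned above, here is a hedged, much-simplified retry loop; the real pilot's policy (error classification, backoff, monitoring hooks) is richer, and the error codes below are invented.

      # Hedged sketch: retry a payload only on recoverable errors, with backoff.
      import time

      RECOVERABLE = {"STAGEIN_TIMEOUT", "TRANSIENT_SE_ERROR"}   # invented codes

      def run_with_retries(run_payload, max_attempts=3, backoff_s=1):
          """Run a payload, retrying only when the failure looks transient."""
          for attempt in range(1, max_attempts + 1):
              status, error = run_payload()
              if status == 0:
                  return 0                      # payload succeeded
              if error not in RECOVERABLE or attempt == max_attempts:
                  return status                 # fatal error or attempts exhausted
              time.sleep(backoff_s * attempt)   # back off before the next try

      # Demo payload: fails once with a recoverable error, then succeeds.
      attempts = []
      def flaky():
          attempts.append(1)
          return (0, None) if len(attempts) > 1 else (1, "STAGEIN_TIMEOUT")

      print(run_with_retries(flaky))   # 0 after one retry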

  8. ATLAS Distributed Computing

    CERN Document Server

    Schovancova, J; The ATLAS collaboration

    2011-01-01

    The poster details the different aspects of the ATLAS Distributed Computing experience after the first year of LHC data taking. We describe the performance of the ATLAS distributed computing system and the lessons learned during the 2010 run, pointing out parts of the system which were in good shape, and also spotting areas which required improvements. Improvements ranged from hardware upgrades of the ATLAS Tier-0 computing pools to improve data distribution rates, through tuning of the FTS channels between CERN and the Tier-1s, to studying data access patterns for Grid analysis to improve the global processing rate. We show recent software development driven by operational needs, with emphasis on data management and job execution in the ATLAS production system.

  9. ATLAS Review Office

    CERN Multimedia

    Szeless, B

    The ATLAS internal reviews, be they the mandatory Production Readiness Reviews, the newly installed Production Advancement Reviews, or the more and more frequently requested Design Reviews, have become a part of our ATLAS culture over the past years. The Activity Systems Status Overviews are, for the time being, one-time events and should be held for each system as soon as possible to have some meaning. There seems to be a consensus that the reviews have become a useful project tool for the ATLAS management, but even more so for the sub-systems themselves, making achievements as well as possible shortcomings visible. One other recognized byproduct is the increasing cross-talk between the systems, a very important ingredient for all the systems to profit from the large collective knowledge at our disposal in ATLAS. In the last two months, the first two PARs were organized for the MDT End Caps and the TRT Barrel Modules, both part of the US contribution to the ATLAS Project. Furthermore, several different design...

  10. Toward a public analysis database for LHC new physics searches using MadAnalysis 5

    Science.gov (United States)

    Dumont, B.; Fuks, B.; Kraml, S.; Bein, S.; Chalons, G.; Conte, E.; Kulkarni, S.; Sengupta, D.; Wymant, C.

    2015-02-01

    We present the implementation, in the MadAnalysis 5 framework, of several ATLAS and CMS searches for supersymmetry in data recorded during the first run of the LHC. We provide extensive details on the validation of our implementations and propose to create a public analysis database within this framework.
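
    MadAnalysis 5 recast code is written in C++ within the framework's expert mode; purely as an illustration of the cut-and-count logic such an implementation encodes, here is a toy cut-flow in Python with invented cut values.

      # Toy cut-and-count selection (cut values and event layout invented).
      def passes_signal_region(event):
          jets = [j for j in event["jets"]
                  if j["pt"] > 50.0 and abs(j["eta"]) < 2.5]
          if len(jets) < 2:
              return False                  # require at least two hard jets
          if event["met"] < 150.0:
              return False                  # missing transverse energy cut
          return True

      events = [
          {"jets": [{"pt": 80.0, "eta": 1.1}, {"pt": 60.0, "eta": -0.4}],
           "met": 210.0},
          {"jets": [{"pt": 30.0, "eta": 0.2}], "met": 90.0},
      ]
      print(sum(passes_signal_region(e) for e in events), "event(s) pass")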

  11. Berliner Philharmoniker ATLAS visit

    CERN Multimedia

    ATLAS Collaboration

    2017-01-01

    The Berliner Philharmoniker is on tour through Europe. They stopped on June 27th in Geneva for a concert at the Victoria Hall. An ATLAS visit was organised the morning after, led by the ATLAS spokesperson Karl Jakobs (welcome and overview talk) and two ATLAS guides (AVC visit and 3D movie).

  12. Functional tests of a prototype for the CMS-ATLAS common non-event data handling framework

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00366910; The ATLAS collaboration; Formica, Andrea

    2017-01-01

    Since 2014 the ATLAS and CMS experiments have shared a common vision of the database infrastructure for the handling of non-event data in forthcoming LHC runs. The wide commonality in the use cases has allowed agreement on a common overall design solution that meets the requirements of both experiments. A first prototype was completed in 2016 and has been made available to both experiments. The prototype is based on a web service implementing a REST API with a set of functions for the management of conditions data. In this contribution, we describe the prototype architecture and the tests that have been performed within the CMS computing infrastructure, with the aim of validating the support of the main use cases and of suggesting future improvements.

  13. Initial Measurements on Pixel Detector Modules for the ATLAS Upgrades

    CERN Document Server

    Gallrapp, C; The ATLAS collaboration

    2011-01-01

    Demanding conditions in terms of peak and integrated luminosity at the Large Hadron Collider (LHC) will push the ATLAS Pixel Detector to its performance limits. Silicon planar, silicon 3D and diamond pixel sensors are three possible sensor technologies which could be implemented in the upcoming Pixel Detector upgrades of the ATLAS experiment. Measurements of the IV behavior and measurements with radioactive Americium-241 and Strontium-90 sources are used to characterize the sensor properties and to understand the interaction between the ATLAS FE-I4 front-end chip and the sensor. Comparisons of results from before and after irradiation for silicon planar and 3D pixel sensors, which give a first impression of the charge collection properties of the different sensor technologies, are presented.

  14. Evolution of the open-source data management system Rucio for LHC Run-3 and beyond

    CERN Document Server

    Barisits, Martin-Stefan; The ATLAS collaboration

    2018-01-01

    Rucio, the distributed data management system of the ATLAS collaboration, already manages more than 330 petabytes of physics data on the grid. Rucio has seen incremental improvements throughout LHC Run-2 and is currently being prepared for the HL-LHC era of the experiment. Alongside these improvements, the system is evolving into a full-scale generic data management system for applications beyond ATLAS, or even beyond high energy physics. This contribution focuses on the development roadmap of Rucio for LHC Run-3, covering event-level data management, generic metadata support, and increased usage of networks and tapes. At the same time Rucio is evolving beyond the original ATLAS use case. This includes authentication beyond the WLCG ecosystem, generic database compatibility, deployment and packaging of the software stack in containers, and a project paradigm shift to a full-scale open-source project.

  15. Software Validation in ATLAS

    International Nuclear Information System (INIS)

    Hodgkinson, Mark; Seuster, Rolf; Simmons, Brinick; Sherwood, Peter; Rousseau, David

    2012-01-01

    The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We will discuss a number of different strategies used to validate the ATLAS offline software: running the ATLAS framework software, Athena, in a variety of configurations daily on each nightly build via the ATLAS Nightly System (ATN) and Run Time Tester (RTT) systems; monitoring these tests and checking the compilation of the software via distributed teams of rotating shifters; monitoring of, and follow-up on, bug reports by the shifter teams; and periodic software cleaning weeks to further improve the quality of the offline software.

  16. ATLAS Open Data project

    CERN Document Server

    The ATLAS collaboration

    2018-01-01

    The current ATLAS model of Open Access to recorded and simulated data offers the opportunity to access datasets with a focus on education, training and outreach. This mandate supports the creation of platforms, projects, software, and educational products used all over the planet. We describe the overall status of ATLAS Open Data (http://opendata.atlas.cern) activities, from core ATLAS activities and releases to individual and group efforts, as well as educational programs, and final web- or software-based (and hard-copy) products that have been produced or are under development. The relatively large number of heterogeneous use cases currently documented is driving an upcoming release of more data and resources for the ATLAS Community and anyone interested in exploring the world of experimental particle physics and computer science through data analysis.

  17. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, A; The ATLAS collaboration; Klimentov, A; Senchenko, A

    2012-01-01

    The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.

  18. Cartea de Colorat a Experimentului ATLAS - ATLAS Experiment Colouring Book in Romanian

    CERN Multimedia

    Anthony, Katarina

    2018-01-01

    Language: Romanian - The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. It aims to introduce children to the field of High-Energy Physics, as well as the work being carried out by the ATLAS Collaboration. Limba: Română - Cartea de Colorat a Experimentului ATLAS este o carte educativă gratuită, ideală pentru copiii cu vârsta cuprinsă între 5-9 ani. Scopul său este de a introduce copii în domeniul fizicii de înaltă energie, precum și activitatea desfășurată de colaborarea ATLAS.

  19. Calculation Sheet for the Basic Design of the ATLAS Fluid System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyun Sik; Moon, S. K.; Yun, B. J.; Kwon, T. S.; Choi, K. Y.; Cho, S.; Park, C. K.; Lee, S. J.; Kim, Y. S.; Song, C. H.; Baek, W. P.; Hong, S. D

    2007-03-15

    The basic design of an integral effect test loop for pressurized water reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been carried out by the Thermal-Hydraulics Safety Research Team at the Korea Atomic Energy Research Institute (KAERI). The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, APR1400, and is scaled for full pressure and temperature conditions. This report includes calculation sheets for the basic design of the ATLAS fluid systems, which consist of a reactor pressure vessel with a core simulator, the primary loop piping, a pressurizer, reactor coolant pumps, steam generators, the secondary system, the safety system, the auxiliary system, and the heat loss compensation system. The present calculation sheets will help in understanding the basic design of the ATLAS fluid system and the scaling methodology on which it is based.
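
    The quoted scaling factors imply a simple worked number: with a length scale of 1/2 and a flow-area scale of 1/144, the volume scale is their product, 1/288 (pressure and temperature are preserved at full prototypical values).

      # Worked numbers implied by the scaling above.
      length_scale = 1.0 / 2.0       # ATLAS height relative to APR1400
      area_scale = 1.0 / 144.0       # flow-area ratio
      volume_scale = length_scale * area_scale
      print(f"volume scale = 1/{1 / volume_scale:.0f}")   # volume scale = 1/288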

  20. Calculation Sheet for the Basic Design of the ATLAS Fluid System

    International Nuclear Information System (INIS)

    Park, Hyun Sik; Moon, S. K.; Yun, B. J.; Kwon, T. S.; Choi, K. Y.; Cho, S.; Park, C. K.; Lee, S. J.; Kim, Y. S.; Song, C. H.; Baek, W. P.; Hong, S. D.

    2007-03-01

    The basic design of an integral effect test loop for pressurized water reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been carried out by the Thermal-Hydraulics Safety Research Team at the Korea Atomic Energy Research Institute (KAERI). The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, APR1400, and is scaled for full pressure and temperature conditions. This report includes calculation sheets for the basic design of the ATLAS fluid systems, which consist of a reactor pressure vessel with a core simulator, the primary loop piping, a pressurizer, reactor coolant pumps, steam generators, the secondary system, the safety system, the auxiliary system, and the heat loss compensation system. The present calculation sheets will help in understanding the basic design of the ATLAS fluid system and the scaling methodology on which it is based.

  1. Visualization of historical data for the ATLAS detector controls - DDV

    Science.gov (United States)

    Maciejewski, J.; Schlenker, S.

    2017-10-01

    The ATLAS experiment is one of four detectors located at the Large Hadron Collider (LHC) based at CERN. Its detector control system (DCS) stores the slow control data acquired by the back-end of distributed WinCC OA applications in an Oracle relational database, which enables the data to be retrieved for later analysis, debugging and detector development. The ATLAS DCS Data Viewer (DDV) is a client-server application providing access to the historical data outside of the experiment network. The server builds optimized SQL queries, retrieves the data from the database and serves it to the clients via HTTP connections. The server also implements protection methods to prevent malicious use of the database. The client is an AJAX-type web application based on Vaadin (a framework built around the Google Web Toolkit, GWT) which gives users the possibility to access the data with ease. The DCS metadata can be selected using a column-tree navigation or a search engine supporting regular expressions. The data are visualized by a selection of output modules such as JavaScript value-over-time plots or a lazy-loading table widget. Additional plugins give the users the possibility to retrieve the data in ROOT format or as an ASCII file. Control system alarms can also be visualized in a dedicated table if necessary. Python mock-up scripts can be generated by the client, allowing users to query the pythonic DDV server directly and to embed the scripts into more complex analysis programs. Users are also able to store searches and output configurations as XML on the server, to share with others via URL or to embed in HTML.
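
    A hedged miniature of the server-side pattern described above, using Flask and sqlite3 in place of the real pythonic server and Oracle back-end; the endpoint, table and file names are invented, and the real server adds protection and caching layers.

      # Hedged sketch: an HTTP endpoint that builds a parametrized SQL query
      # and returns archived DCS values as JSON (names invented).
      import sqlite3
      from flask import Flask, jsonify, request

      app = Flask(__name__)

      @app.route("/data")
      def data():
          element = request.args.get("element", "")
          since = float(request.args.get("since", 0))
          con = sqlite3.connect("dcs_archive.db")
          rows = con.execute(
              "SELECT ts, value FROM eventhistory"
              " WHERE element = ? AND ts >= ? ORDER BY ts",
              (element, since),   # bind variables, never string-formatted SQL
          ).fetchall()
          con.close()
          return jsonify([{"ts": t, "value": v} for t, v in rows])

      if __name__ == "__main__":
          app.run(port=8089)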

  2. An Open-Source Label Atlas Correction Tool and Preliminary Results on Huntington's Disease Whole-Brain MRI Atlases.

    Science.gov (United States)

    Forbes, Jessica L; Kim, Regina E Y; Paulsen, Jane S; Johnson, Hans J

    2016-01-01

    The creation of high-quality medical imaging reference atlas datasets with consistent dense anatomical region labels is a challenging task. Reference atlases have many uses in medical image applications and are essential components of atlas-based segmentation tools commonly used for producing personalized anatomical measurements for individual subjects. The process of manual identification of anatomical regions by experts is regarded as a so-called gold standard; however, it is usually impractical because of its labor-intensive cost. Further, as the number of regions of interest increases, these manually created atlases often contain many small, inconsistently labeled or disconnected regions that need to be identified and corrected. This project proposes an efficient process to drastically reduce the time necessary for manual revision in order to improve atlas label quality. We introduce the LabelAtlasEditor tool, a SimpleITK-based open-source label atlas correction tool distributed within the image visualization software 3D Slicer. LabelAtlasEditor incorporates several 3D Slicer widgets into one consistent interface and provides label-specific correction tools, allowing for rapid identification, navigation, and modification of the small, disconnected erroneous labels within an atlas. The technical details for the implementation and performance of LabelAtlasEditor are demonstrated using an application of improving a set of 20 Huntington's Disease-specific multi-modal brain atlases. Additionally, we present the advantages and limitations of automatic atlas correction. After the correction of atlas inconsistencies and small, disconnected regions, the number of unidentified voxels for each dataset was reduced on average by 68.48%.
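
    A hedged sketch of the kind of check such a tool automates, finding small disconnected islands of a given label with SimpleITK; the file path, label code and size threshold are invented.

      # Hedged sketch: flag tiny disconnected islands of one atlas label.
      import SimpleITK as sitk

      atlas = sitk.ReadImage("brain_atlas_labels.nii.gz")   # hypothetical path
      label_of_interest = 17                                # hypothetical code

      mask = atlas == label_of_interest        # binary mask of one region
      components = sitk.ConnectedComponent(mask)

      stats = sitk.LabelShapeStatisticsImageFilter()
      stats.Execute(components)
      for comp in stats.GetLabels():
          n = stats.GetNumberOfPixels(comp)
          if n < 50:                           # tiny island: likely mislabeled
              print(f"component {comp}: only {n} voxels - candidate for correction")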

  3. Big Data Tools as Applied to ATLAS Event Data

    Science.gov (United States)

    Vukotic, I.; Gardner, R. W.; Bryant, L. A.

    2017-10-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Logfiles, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide-area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and the associated analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data. Such modes would simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable, or significantly simplify, usage of machine learning environments and tools like Spark, Jupyter, R, SciPy, Caffe, TensorFlow, etc. Machine learning challenges (such as the Higgs Boson Machine Learning Challenge and the Tracking challenge), event viewers (VP1, ATLANTIS, ATLASrift), and still-to-be-developed educational and outreach tools would be able to access the data through a simple REST API. In this preliminary investigation we focus on derived xAOD data sets. These are much smaller than the primary xAODs, having only the containers, variables, and events of interest to a particular analysis. Encouraged by the performance of Elasticsearch for the ADC analytics platform, we developed an algorithm for indexing derived xAOD event data. We have defined an appropriate document mapping and have imported a full set of standard model W/Z datasets. We compare the disk-space efficiency of this approach to that of standard ROOT files and the performance in a simple cut-flow type of data analysis, and will present preliminary results on its scaling.
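
    A hedged sketch of event-level indexing and querying with the Elasticsearch Python client (8.x style); the index name and document fields are illustrative, not the actual xAOD document mapping.

      # Hedged sketch: index one event's flat summary variables, then run a
      # cut-flow style selection as a simple boolean query (names invented).
      from elasticsearch import Elasticsearch

      es = Elasticsearch("http://localhost:9200")

      es.index(index="daod-wz", document={
          "run_number": 284500,
          "event_number": 187642,
          "n_electrons": 2,
          "met_gev": 41.7,
      })

      hits = es.search(index="daod-wz", query={
          "bool": {"must": [
              {"term": {"n_electrons": 2}},
              {"range": {"met_gev": {"gte": 25.0}}},
          ]}
      })
      print(hits["hits"]["total"])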

  4. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, Alexey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.

  5. Computational and mathematical methods in brain atlasing.

    Science.gov (United States)

    Nowinski, Wieslaw L

    2017-12-01

    Brain atlases have a wide range of use from education to research to clinical applications. Mathematical methods as well as computational methods and tools play a major role in the process of brain atlas building and developing atlas-based applications. Computational methods and tools cover three areas: dedicated editors for brain model creation, brain navigators supporting multiple platforms, and atlas-assisted specific applications. Mathematical methods in atlas building and developing atlas-aided applications deal with problems in image segmentation, geometric body modelling, physical modelling, atlas-to-scan registration, visualisation, interaction and virtual reality. Here I overview computational and mathematical methods in atlas building and developing atlas-assisted applications, and share my contribution to and experience in this field.

  6. Silicon strip detectors for the ATLAS upgrade

    CERN Document Server

    Gonzalez Sevilla, S; The ATLAS collaboration

    2011-01-01

    The Large Hadron Collider at CERN will extend its current physics program by increasing the peak luminosity by one order of magnitude. For ATLAS, one of the two general-purpose experiments at the LHC, this upgrade scenario implies the complete replacement of its inner tracker due to the harsh conditions in terms of particle rates and radiation doses. New radiation-hard prototype n-in-p silicon sensors have been produced for the short-strip region of the future ATLAS tracker. The sensors have been irradiated up to the fluences expected at the high-luminosity LHC. This paper summarizes recent results on the performance of the irradiated n-in-p detectors.

  7. ATLAS Point-1 System Administration Group

    CERN Multimedia

    Marc Dobson

    2007-01-01

    Hello, my name is Joe Blog and I am about to go on shift at ATLAS. When I enter the control room shown below with my CERN ID card, I go to the subsystem desk for which I am responsible. This is the first shift of the run period and there is a login window displayed on the screens. I just need to hit return and the control room desktop is started. Before I can do anything I must give my credentials in the shifter window which is then synchronised with the shift plan. After that I have access to all the allowed commands and can start preparing for the run. In order not to forget any steps I consult the documentation on how to prepare for a run on the Point-1 web. I can also check what the general status is for the ATLAS online computing farm, the sub-detectors and the LHC by using the utilities provided. ATLAS Control Room. The situation described is made up but the conditions are real. But the control room that the shifters and general public see is only the tip of the iceberg. Behind these tools lie the...

  8. DCS data viewer, an application that accesses ATLAS DCS historical data

    International Nuclear Information System (INIS)

    Tsarouchas, C; Schlenker, S; Dimitrov, G; Jahn, G

    2014-01-01

    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. The Detector Control System (DCS) of ATLAS is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of alarms and the archiving of important operational data in a relational database (DB). DCS Data Viewer (DDV) is an application that provides access to the ATLAS DCS historical data through a web interface. Its design is structured using a client-server architecture. The pythonic server connects to the DB and fetches the data by using optimized SQL requests. It communicates with the outside world by accepting HTTP requests, and it can be used standalone. The client is an AJAX (Asynchronous JavaScript and XML) interactive web application developed under the Google Web Toolkit (GWT) framework. Its web interface is user-friendly, and platform and browser independent. The selection of metadata is done via a column-tree view or with a powerful search engine. The final visualization of the data is done using Java applets or JavaScript applications as plugins. The default output is a value-over-time chart, but other types of outputs like tables, ASCII or ROOT files are supported too. Excessive access or malicious use of the database is prevented by a dedicated protection mechanism, allowing the exposure of the tool to hundreds of inexperienced users. The current configuration of the client and of the outputs can be saved in an XML file. Protection against web security attacks is foreseen and authentication constraints have been taken into account, allowing the exposure of the tool to hundreds of users worldwide. Due to its flexible interface and its generic and modular approach, DDV could be easily used for other experiment control systems.

  9. Dcs Data Viewer, an Application that Accesses ATLAS DCS Historical Data

    Science.gov (United States)

    Tsarouchas, C.; Schlenker, S.; Dimitrov, G.; Jahn, G.

    2014-06-01

    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. The Detector Control System (DCS) of ATLAS is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of alarms and the archiving of important operational data in a relational database (DB). DCS Data Viewer (DDV) is an application that provides access to the ATLAS DCS historical data through a web interface. Its design is structured using a client-server architecture. The pythonic server connects to the DB and fetches the data by using optimized SQL requests. It communicates with the outside world by accepting HTTP requests, and it can be used standalone. The client is an AJAX (Asynchronous JavaScript and XML) interactive web application developed under the Google Web Toolkit (GWT) framework. Its web interface is user-friendly, and platform and browser independent. The selection of metadata is done via a column-tree view or with a powerful search engine. The final visualization of the data is done using Java applets or JavaScript applications as plugins. The default output is a value-over-time chart, but other types of outputs like tables, ASCII or ROOT files are supported too. Excessive access or malicious use of the database is prevented by a dedicated protection mechanism, allowing the exposure of the tool to hundreds of inexperienced users. The current configuration of the client and of the outputs can be saved in an XML file. Protection against web security attacks is foreseen and authentication constraints have been taken into account, allowing the exposure of the tool to hundreds of users worldwide. Due to its flexible interface and its generic and modular approach, DDV could be easily used for other experiment control systems.

  10. Wind Atlas for Egypt

    DEFF Research Database (Denmark)

    The results of a comprehensive, 8-year wind resource assessment programme in Egypt are presented. The objective has been to provide reliable and accurate wind atlas data sets for evaluating the potential wind power output from large electricityproducing wind turbine installations. The regional wind climates of Egypt have been determined by two independent methods: a traditional wind atlas based on observations from more than 30 stations all over Egypt, and a numerical wind atlas based on long-term reanalysis data and a mesoscale model (KAMM). The mean absolute error comparing the two methods is about 10% for two large-scale KAMM domains covering all of Egypt, and typically about 5% for several smaller-scale regional domains. The numerical wind atlas covers all of Egypt, whereas the meteorological stations are concentrated in six regions. The Wind Atlas for Egypt represents a significant step...

  11. Wind Atlas for Egypt

    DEFF Research Database (Denmark)

    Mortensen, Niels Gylling; Said Said, Usama; Badger, Jake

    2006-01-01

    The results of a comprehensive, 8-year wind resource assessment programme in Egypt are presented. The objective has been to provide reliable and accurate wind atlas data sets for evaluating the potential wind power output from large electricityproducing wind turbine installations. The regional wind climates of Egypt have been determined by two independent methods: a traditional wind atlas based on observations from more than 30 stations all over Egypt, and a numerical wind atlas based on long-term reanalysis data and a mesoscale model (KAMM). The mean absolute error comparing the two methods is about 10% for two large-scale KAMM domains covering all of Egypt, and typically about 5% for several smaller-scale regional domains. The numerical wind atlas covers all of Egypt, whereas the meteorological stations are concentrated in six regions. The Wind Atlas for Egypt represents a significant step...

  12. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S.

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as the medium. Newly available WLAP items relating to ATLAS include: the June ATLAS Plenary Meeting; a Tutorial on Physics EDM and Tools (June); the Freiburg Overview Week; and Ketevi Assamagan's Tutorial on Analysis Tools. Browse WLAP for all ATLAS lectures.

  13. Estimate of the neutron fields in ATLAS based on ATLAS-MPX detector data

    International Nuclear Information System (INIS)

    Bouchami, J; Dallaire, F; Gutierrez, A; Idarraga, J; Leroy, C; Picard, S; Scallon, O; Kral, V; Pospisil, S; Solc, J; Suk, M; Turecek, D; Vykydal, Z; Zemlicka, J

    2011-01-01

    The ATLAS-MPX detectors are based on Medipix2 silicon devices designed by CERN for the detection of different types of radiation. These detectors are covered with converting layers of 6LiF and polyethylene (PE) to increase their sensitivity to thermal and fast neutrons, respectively. These devices allow the measurement of the composition and spectroscopic characteristics of the radiation field in ATLAS, particularly of neutrons. The detectors can operate in low or high preset energy threshold mode. The signatures of particles interacting in an ATLAS-MPX detector at low threshold are clusters of adjacent pixels, with size and form depending on particle type, energy and incidence angle. The classification of particles into different categories can be done using the geometrical parameters of these clusters. The Medipix analysis framework (MAFalda), based on the ROOT application, allows the recognition of particle tracks left in ATLAS-MPX devices located at various positions in the ATLAS detector and cavern. The pattern recognition obtained from the application of MAFalda was configured to distinguish the response of neutrons from other radiation. The neutron response at low threshold is characterized by clusters of adjoining pixels (heavy tracks and heavy blobs) left by protons and heavy ions resulting from neutron interactions in the converting layers of the ATLAS-MPX devices. The neutron detection efficiency of the ATLAS-MPX devices has been determined by exposing two reference detectors to radionuclide neutron sources (252Cf and 241AmBe). With these results, an estimate of the neutron fields produced at the device locations during ATLAS operation was obtained.
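
    As a toy version of the geometric classification performed by MAFalda, the sketch below categorizes a pixel cluster by its size and elongation; the category thresholds are invented, not the framework's actual parameters.

      # Toy geometric cluster classification (thresholds invented).
      def classify_cluster(pixels):
          """pixels: set of (x, y) coordinates of adjacent hit pixels."""
          n = len(pixels)
          xs = [p[0] for p in pixels]
          ys = [p[1] for p in pixels]
          width = max(xs) - min(xs) + 1
          height = max(ys) - min(ys) + 1
          elongation = max(width, height) / min(width, height)
          if n <= 2:
              return "small cluster (photon/electron-like)"
          if elongation > 3:
              return "straight heavy track (ion-like)"
          if n >= 8 and elongation < 1.5:
              return "heavy blob (neutron-induced recoil candidate)"
          return "curly track (electron-like)"

      # A compact 8-pixel cluster is classified as a heavy blob.
      print(classify_cluster({(0, 0), (1, 0), (2, 0), (0, 1),
                              (1, 1), (2, 1), (0, 2), (1, 2)}))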

  14. ATLAS ITk and new pixel sensor technologies

    CERN Document Server

    Gaudiello, A

    2016-01-01

    During the 2023–2024 shutdown, the Large Hadron Collider (LHC) will be upgraded to reach an instantaneous luminosity of up to 7×10$^{34}$ cm$^{−2}$s$^{−1}$. This upgrade of the accelerator is called the High-Luminosity LHC (HL-LHC). The ATLAS detector will be changed to meet the challenges of the HL-LHC: an average of 200 pile-up events in every bunch crossing, and an integrated luminosity of 3000 fb$^{−1}$ over ten years. The HL-LHC luminosity conditions are too extreme for the silicon (pixel and strip) detectors and straw tube transition radiation tracker (TRT) of the current ATLAS tracking system. Therefore the ATLAS inner tracker is being completely rebuilt, and the new system is called the Inner Tracker (ITk). During this upgrade the TRT will be removed in favor of an all-new, all-silicon tracker composed only of strip and pixel detectors. An overview of the layouts under study will be reported, and the new pixel sensor technologies under development will be explained.

  15. The Resource Manager of the ATLAS Trigger and Data Acquisition System

    CERN Document Server

    Aleksandrov, Igor; The ATLAS collaboration; Lehmann Miotto, Giovanna; Soloviev, Igor

    2016-01-01

    The Resource Manager is one of the core components of the Data Acquisition system of the ATLAS experiment at the LHC. The Resource Manager marshals the right of applications to access resources which may exist in multiple but limited copies, in order to avoid conflicts due to program faults or operator errors. The access to resources is managed in a manner similar to what a lock manager would do in other software systems; a sketch of this idea follows below. All the available resources and their association to software processes are described in the Data Acquisition configuration database. The Resource Manager is queried about the availability of resources every time an application needs to be started. The Resource Manager’s design is based on a client-server model, hence it consists of two components: the Resource Manager "server" application and the "client" shared library. The Resource Manager server implements all the needed functionalities, while the Resource Manager c...
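
    A minimal sketch of that lock-manager behaviour, assuming a plain dict stands in for the Data Acquisition configuration database and counted tokens stand in for hardware resources; the class and method names are illustrative, not the actual ATLAS Resource Manager API.

```python
# Toy resource marshalling in the spirit of the abstract: resources exist
# in multiple but limited copies, and a request is refused when all copies
# are already granted. All names here are invented for illustration.
import threading

class ResourceBusy(Exception):
    """Raised when all copies of a resource are already granted."""

class ResourceManagerServer:
    def __init__(self, resource_counts):
        self._free = dict(resource_counts)   # name -> free copies (from config DB)
        self._grants = {}                    # (app, resource) -> copies held
        self._lock = threading.Lock()

    def request(self, app, resource, copies=1):
        with self._lock:
            if self._free.get(resource, 0) < copies:
                raise ResourceBusy(f"{resource} exhausted, {app} refused")
            self._free[resource] -= copies
            self._grants[(app, resource)] = self._grants.get((app, resource), 0) + copies

    def release(self, app, resource):
        with self._lock:
            self._free[resource] = self._free.get(resource, 0) + \
                self._grants.pop((app, resource), 0)

rm = ResourceManagerServer({"ROD_crate_7": 1})
rm.request("ros-app-01", "ROD_crate_7")      # granted
try:
    rm.request("ros-app-02", "ROD_crate_7")  # refused: the single copy is taken
except ResourceBusy as err:
    print(err)
rm.release("ros-app-01", "ROD_crate_7")
```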

  16. Distributed Data Collection For Next Generation ATLAS EventIndex Project

    CERN Document Server

    Fernandez Casani, Alvaro; The ATLAS collaboration

    2018-01-01

    The ATLAS EventIndex currently runs in production in order to build a complete catalogue of events for experiments with large amounts of data. The current approach is to index all final produced data files at the CERN Tier-0 and at hundreds of grid sites, with a distributed data collection architecture that uses Object Stores to temporarily hold the conveyed information, while references to the stored objects are sent through a messaging system. The final backend of all the indexed data is a central Hadoop infrastructure at CERN; an Oracle relational database is used for faster access to a subset of this information. In the future of ATLAS, the event, instead of the file, should be the atomic information unit for metadata. This motivation arises in order to accommodate future data processing and storage technologies. Files will no longer be static quantities, possibly dynamically aggregating data, and also allowing event-level granularity processing in heavily parallel computing environments. It also simplifies the handling of loss and/or e...
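
    An illustrative sketch of the collection pattern described above: producers park indexed event records in an object store and send only a lightweight reference through the messaging system, while a central consumer dereferences the message and loads the records into the backend. The dict-backed store, the queue and every name below are stand-ins, not the actual EventIndex components.

```python
import json
import queue
import uuid

object_store = {}        # stand-in for a temporary Object Store
broker = queue.Queue()   # stand-in for the messaging system
backend = []             # stand-in for the central Hadoop/Oracle backend

def index_and_publish(site, event_records):
    """Grid-site side: store the payload, message only its reference."""
    key = f"{site}/{uuid.uuid4()}"
    object_store[key] = json.dumps(event_records)
    broker.put({"site": site, "object_key": key, "n_events": len(event_records)})

def collect_one():
    """Central side: consume a reference, fetch and ingest the payload."""
    ref = broker.get()
    records = json.loads(object_store.pop(ref["object_key"]))
    backend.extend(records)

index_and_publish("CERN-T0", [{"run": 358031, "event": 17, "guid": "f00d"}])
collect_one()
print(backend)
```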

  17. Neoproterozoic–Cambrian stratigraphic framework of the Anti-Atlas and Ouzellagh promontory (High Atlas), Morocco

    Science.gov (United States)

    Alvaro, Jose Javier; Benziane, Fouad; Thomas, Robert; Walsh, Gregory J.; Yazidi, Abdelaziz

    2014-01-01

    In the last two decades, great progress has been made in the geochronological, chrono- and chemostratigraphic control of the Neoproterozoic and Cambrian from the Anti-Atlas Ranges and the Ouzellagh promontory (High Atlas). As a result, the Neoproterozoic is lithostratigraphically subdivided into: (i) the Lkest-Taghdout Group (broadly interpreted at c. 800–690 Ma) representative of rift-to-passive margin conditions on the northern West African craton; (ii) the Iriri (c. 760–740 Ma), Bou Azzer (c. 762–697 Ma) and Saghro (c. 760?–610 Ma) groups, the overlying Anezi, Bou Salda, Dadès and Tiddiline formations localized in fault-grabens, and the Ouarzazate Supergroup (c. 615–548 Ma), which form a succession of volcanosedimentary complexes recording the onset of the Pan-African orogeny and its aftermath; and (iii) the Taroudant (the Ediacaran–Cambrian boundary lying in the Tifnout Member of the Adoudou Formation), Tata, Feijas Internes and Tabanite groups that have recorded development of the late Ediacaran–Cambrian Atlas Rift. Recent discussions of Moroccan strata to select new global GSSPs by the International Subcommissions on Ediacaran and Cambrian Stratigraphy have raised the stratigraphic interest in this region. A revised and updated stratigraphic framework is proposed here to assist the tasks of both subcommissions and to fuel future discussions focused on different geological aspects of the Neoproterozoic–Cambrian time span.

  18. Pulse simulations and heat flow measurements for the ATLAS Forward Calorimeter under high-luminosity conditions

    CERN Document Server

    AUTHOR|(SzGeCERN)758133; Zuber, Kai

    The high-luminosity phase of the Large Hadron Collider at CERN is an important step for further and more detailed studies of the Standard Model of particle physics, as well as for searches for new physics. The necessary upgrade of the ATLAS detector is a challenging task, as the increased luminosity entails many problems for the different detector parts. The liquid-argon Forward Calorimeter suffers signal-degradation effects and a drop of the high-voltage supply potential under high-luminosity conditions. It is possible that the argon starts to boil due to the large energy depositions. The effect of the high-luminosity environment on the liquid-argon Forward Calorimeter has been simulated in order to investigate the level of signal degradation. The results show a curvature of the triangular pulse shape that becomes more pronounced as the energy deposit increases. This effect is caused by the drop in the electric potential, which produces a decrease in the electric field across the liquid-argon gap in the Forward Calorim...

  19. ATLAS@Home looks for CERN volunteers

    CERN Multimedia

    Rosaria Marraffino

    2014-01-01

    ATLAS@Home is a CERN volunteer computing project that runs simulated ATLAS events. As the project ramps up, the project team is looking for CERN volunteers to test the system before planning a bigger promotion for the public.   The ATLAS@home outreach website. ATLAS@Home is a large-scale research project that runs ATLAS experiment simulation software inside virtual machines hosted by volunteer computers. “People from all over the world offer up their computers’ idle time to run simulation programmes to help physicists extract information from the large amount of data collected by the detector,” explains Claire Adam Bourdarios of the ATLAS@Home project. “The ATLAS@Home project aims to extrapolate the Standard Model at a higher energy and explore what new physics may look like. Everything we’re currently running is preparation for next year's run.” ATLAS@Home became an official BOINC (Berkeley Open Infrastructure for Network ...

  20. The B00 model coil in the ATLAS Magnet Test Facility

    CERN Document Server

    Dudarev, A; ten Kate, H H J; Anashkin, O P; Keilin, V E; Lysenko, V V

    2001-01-01

    A 1-m size model coil has been developed to investigate the transport properties of the three aluminum-stabilized superconductors used in the ATLAS magnets. The coil, named B00, is also used for debugging the cryogenic, power and control systems of the ATLAS Magnet Test Facility. The coil comprises two double pancakes made of the barrel toroid and end-cap toroid conductors and a single pancake made of the central solenoid conductor. The pancakes are placed inside an aluminum coil casing. The coil construction and cooling conditions are quite similar to the final design of the ATLAS magnets. The B00 coil is well equipped with various sensors to measure thermal and electrodynamic properties of the conductor inside the coils. Special attention has been paid to the study of the current diffusion process and the normal zone propagation in the ATLAS conductors and windings. Special pick-up coils have been made to measure the diffusion at different currents and magnetic field values. (6 refs).

  1. Implementation of the ATLAS trigger within the ATLAS Multi-Threaded Software Framework AthenaMT

    CERN Document Server

    Wynne, Benjamin; The ATLAS collaboration

    2016-01-01

    We present an implementation of the ATLAS High Level Trigger that provides parallel execution of trigger algorithms within the ATLAS multi-threaded software framework, AthenaMT. This development will enable the ATLAS High Level Trigger to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the High Level Trigger input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that process events independently, executing algorithms sequentially in each process. AthenaMT will provide a fully multi-threaded env...

  2. Web System for Data Quality Assessment of Tile Calorimeter During the ATLAS Operation

    CERN Document Server

    Guimaraes Ferreira, F; The ATLAS collaboration; Fink Grael, F; Sivolella Gomes, A; Balabram Filho, L

    2010-01-01

    TileCal is the barrel hadronic calorimeter of the ATLAS experiment and has ~10 000 electronic channels. Supervising the detector behavior is a very important task to ensure proper operation. Collaborators perform analyses of reconstructed data from calibration runs in order to give detailed assessments of failures and to assert the equipment status. The data quality responsible then provides the list of problematic channels that should not be considered for physics analysis. Since the commissioning period, our group has developed seven web systems that guide the collaborators through the data quality assessment task. Each system covers a part of the job, providing information on the latest runs, displaying status from the automatic monitoring framework, giving details about power supplies operation, presenting the generated plots and storing the validation outcomes, assisting to write logbook entries, creating and submitting the bad channels list to the conditions database and publishing the equipment ...

  3. The version control service for ATLAS data acquisition configuration files

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

    To configure a data-taking session, the ATLAS systems and detectors store more than 160 MBytes of data acquisition related configuration information in OKS XML files. The total number of files exceeds 1300 and they are updated by many system experts. In the past, such updates occasionally caused problems due to XML syntax errors or files left in a state inconsistent with the overall ATLAS configuration. It was not always possible to know who made the modification causing problems or how to go back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed on ATLAS Point-1. It excludes direct write access to XML files stored in a central database repository. Instead, for an update the files are copied into a user repository, validated after modification and committed using a version control system; this cycle is sketched below. The system's callback updates the central repository. Also, it keeps track of all modifications providi...
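
    A condensed sketch of that update cycle, assuming git as a stand-in for the version control system actually used; the paths, the edit callback and the consistency hook are all illustrative.

```python
# Copy the OKS XML file into a user working area, validate it, and only
# then commit it to the central repository under version control.
import shutil
import subprocess
import xml.etree.ElementTree as ET
from pathlib import Path

def update_config(central_repo: Path, work_dir: Path, filename: str,
                  edit, consistency_check):
    work_copy = work_dir / filename
    shutil.copy(central_repo / filename, work_copy)  # no direct write access
    edit(work_copy)                                  # expert modifies the copy
    ET.parse(work_copy)                              # rejects XML syntax errors
    if not consistency_check(work_copy):             # overall-configuration check
        raise ValueError(f"{filename}: inconsistent with the ATLAS configuration")
    shutil.copy(work_copy, central_repo / filename)
    subprocess.run(["git", "-C", str(central_repo), "add", filename], check=True)
    subprocess.run(["git", "-C", str(central_repo), "commit", "-m",
                    f"update {filename}"], check=True)  # records who/what/when
```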

  4. Hydrochemical Atlas of the Arctic Ocean (NODC Accession 0044630)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The present Hydrochemical Atlas of the Arctic Ocean is a description of hydrochemical conditions in the Arctic Ocean on the basis of a greater body of hydrochemical...

  5. Report to users of ATLAS

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1997-03-01

    This report covers the following topics: (1) status of the ATLAS accelerator; (2) progress in R and D towards a proposal for a National ISOL Facility; (3) highlights of recent research at ATLAS; (4) the move of gammasphere from LBNL to ANL; (5) Accelerator Target Development laboratory; (6) Program Advisory Committee; (7) ATLAS User Group Executive Committee; and (8) ATLAS user handbook available in the World Wide Web. A brief summary is given for each topic

  6. Evolution of the ReadOut System of the ATLAS experiment

    CERN Document Server

    Borga, A; The ATLAS collaboration; Joos, M; Schumacher, J; Tremblet, L; Vandelli, W; Vermeulen, J; Werner, P; Wickens, F

    2014-01-01

    The ReadOut System (ROS) is a central and essential part of the ATLAS data-acquisition system. It receives and buffers event data accepted from all sub-detectors and first-level trigger subsystems. Event data are subsequently forwarded to the High-Level Trigger system and Event Builder via a GbE-based network. The ATLAS ROS will be completely renewed in view of the demanding conditions expected during LHC Run 2 and Run 3. The new ROS will consist of roughly 100 Linux-based 2U-high rack-mounted server PCs, each equipped with 2 PCIe I/O cards and four 10GbE interfaces. The FPGA-based PCIe I/O cards, developed by the ALICE collaboration, will be configured with ATLAS-specific firmware, called RobinNP. They will provide connectivity to about 2000 point-to-point optical links conveying the ATLAS event data. This dense configuration provides an excellent test bench for studying I/O efficiency and challenges in current COTS PC architectures with non-uniform memory and I/O access paths. In this paper the requirements...

  7. Evolution of the ReadOut System of the ATLAS experiment

    CERN Document Server

    Borga, A; The ATLAS collaboration; Green, B; Kugel, A; Joos, M; Panduro Vazquez, W; Schumacher, J; Teixeira-Dias, P; Tremblet, L; Vandelli, W; Vermeulen, J; Werner, P; Wickens, F

    2014-01-01

    The ReadOut System (ROS) is a central and essential part of the ATLAS DAQ system. It receives and buffers data of events accepted by the first-level trigger from all subdetectors and first-level trigger subsystems. Event data are subsequently forwarded to the High-Level Trigger system and Event Builder via a 1 GbE-based network. The ATLAS ROS is being completely renewed in view of the demanding conditions expected during LHC Run 2 and Run 3: obsolete technologies must be replaced, and space constraints require the new system to be compact. The new ROS will consist of roughly 100 Linux-based 2U-high rack-mounted server PCs, each equipped with 2 PCIe I/O cards and four 10 GbE interfaces. The FPGA-based PCIe I/O cards, developed by the ALICE collaboration, will be configured with ATLAS-specific firmware, the so-called RobinNP firmware. They will provide the connectivity to about 2000 optical point-to-point links conveying the ATLAS event data. This dense configuration provides an excellent test bench for studying I/O efficiency and ...

  8. Estimate of the neutron fields in ATLAS based on ATLAS-MPX detectors data

    Energy Technology Data Exchange (ETDEWEB)

    Bouchami, J; Dallaire, F; Gutierrez, A; Idarraga, J; Leroy, C; Picard, S; Scallon, O [Universite de Montreal, Montreal, Quebec H3C 3J7 (Canada); Kral, V; PospIsil, S; Solc, J; Suk, M; Turecek, D; Vykydal, Z; Zemlieka, J, E-mail: scallon@lps.umontreal.ca [Institute of Experimental and Applied Physics of the CTU in Prague, Horska 3a/22, CZ-12800 Praha2 - Albertov (Czech Republic)

    2011-01-15

    The ATLAS-MPX detectors are based on Medipix2 silicon devices designed by CERN for the detection of different types of radiation. These detectors are covered with converting layers of {sup 6}LiF and polyethylene (PE) to increase their sensitivity to thermal and fast neutrons, respectively. These devices allow the measurement of the composition and spectroscopic characteristics of the radiation field in ATLAS, particularly of neutrons. These detectors can operate in low or high preset energy threshold mode. The signature of particles interacting in an ATLAS-MPX detector at low threshold is a cluster of adjacent pixels, whose size and shape depend on the particle type, energy and incidence angle. The classification of particles into different categories can be done using the geometrical parameters of these clusters. The Medipix analysis framework (MAFalda), based on the ROOT application, allows the recognition of particle tracks left in ATLAS-MPX devices located at various positions in the ATLAS detector and cavern. The pattern recognition obtained from the application of MAFalda was configured to distinguish the response of neutrons from other radiation. The neutron response at low threshold is characterized by clusters of adjoining pixels (heavy tracks and heavy blobs) left by protons and heavy ions resulting from neutron interactions in the converting layers of the ATLAS-MPX devices. The neutron detection efficiency of ATLAS-MPX devices has been determined by exposing two reference detectors to radionuclide sources of neutrons ({sup 252}Cf and {sup 241}AmBe). With these results, an estimate of the neutron fields produced at the device locations during ATLAS operation was made.

  9. Estimate of the neutron fields in ATLAS based on ATLAS-MPX detectors data

    Science.gov (United States)

    Bouchami, J.; Dallaire, F.; Gutiérrez, A.; Idarraga, J.; Král, V.; Leroy, C.; Picard, S.; Pospíšil, S.; Scallon, O.; Solc, J.; Suk, M.; Turecek, D.; Vykydal, Z.; Žemlička, J.

    2011-01-01

    The ATLAS-MPX detectors are based on Medipix2 silicon devices designed by CERN for the detection of different types of radiation. These detectors are covered with converting layers of 6LiF and polyethylene (PE) to increase their sensitivity to thermal and fast neutrons, respectively. These devices allow the measurement of the composition and spectroscopic characteristics of the radiation field in ATLAS, particularly of neutrons. These detectors can operate in low or high preset energy threshold mode. The signature of particles interacting in an ATLAS-MPX detector at low threshold is a cluster of adjacent pixels, whose size and shape depend on the particle type, energy and incidence angle. The classification of particles into different categories can be done using the geometrical parameters of these clusters. The Medipix analysis framework (MAFalda), based on the ROOT application, allows the recognition of particle tracks left in ATLAS-MPX devices located at various positions in the ATLAS detector and cavern. The pattern recognition obtained from the application of MAFalda was configured to distinguish the response of neutrons from other radiation. The neutron response at low threshold is characterized by clusters of adjoining pixels (heavy tracks and heavy blobs) left by protons and heavy ions resulting from neutron interactions in the converting layers of the ATLAS-MPX devices. The neutron detection efficiency of ATLAS-MPX devices has been determined by exposing two reference detectors to radionuclide sources of neutrons (252Cf and 241AmBe). With these results, an estimate of the neutron fields produced at the device locations during ATLAS operation was made.

  10. Estimation of mouse organ locations through registration of a statistical mouse atlas with micro-CT images.

    Science.gov (United States)

    Wang, Hongkai; Stout, David B; Chatziioannou, Arion F

    2012-01-01

    Micro-CT is widely used in preclinical studies of small animals. Due to the low soft-tissue contrast in typical studies, segmentation of soft tissue organs from noncontrast enhanced micro-CT images is a challenging problem. Here, we propose an atlas-based approach for estimating the major organs in mouse micro-CT images. A statistical atlas of major trunk organs was constructed based on 45 training subjects. The statistical shape model technique was used to include inter-subject anatomical variations. The shape correlations between different organs were described using a conditional Gaussian model (sketched below). For registration, first the high-contrast organs in micro-CT images were registered by fitting the statistical shape model, while the low-contrast organs were subsequently estimated from the high-contrast organs using the conditional Gaussian model. The registration accuracy was validated based on 23 noncontrast-enhanced and 45 contrast-enhanced micro-CT images. Three different accuracy metrics (Dice coefficient, organ volume recovery coefficient, and surface distance) were used for evaluation. The Dice coefficients vary from 0.45 ± 0.18 for the spleen to 0.90 ± 0.02 for the lungs, the volume recovery coefficients vary from 0.96 ± 0.10 for the liver to 1.30 ± 0.75 for the spleen, and the surface distances vary from 0.18 ± 0.01 mm for the lungs to 0.72 ± 0.42 mm for the spleen. The registration accuracy of the statistical atlas was compared with two publicly available single-subject mouse atlases, i.e., the MOBY phantom and the DIGIMOUSE atlas, and the results showed that the statistical atlas is more accurate than the single-subject atlases. To evaluate the influence of the training set size, different numbers of training subjects were used for atlas construction and registration. The results showed an improvement of the registration accuracy when more training subjects were used for the atlas construction. The statistical atlas-based registration was also compared with
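
    A toy illustration of the conditional Gaussian step mentioned above: once the high-contrast organ shapes are fitted, the low-contrast shapes are estimated from the joint statistics learned on the training subjects. The random training matrix and the coordinate split are invented stand-ins for the paper's shape model.

```python
# Conditional mean of a joint Gaussian:
#   E[l | h] = mu_l + S_lh @ inv(S_hh) @ (h - mu_h)
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(45, 10))   # 45 training subjects, toy shape vectors
n_h = 6                             # first 6 coordinates: high-contrast organs

mu = train.mean(axis=0)
S = np.cov(train, rowvar=False)
S_hh, S_lh = S[:n_h, :n_h], S[n_h:, :n_h]

def estimate_low_contrast(h_fitted):
    """Estimate low-contrast organ shapes from the fitted high-contrast ones."""
    return mu[n_h:] + S_lh @ np.linalg.solve(S_hh, h_fitted - mu[:n_h])

print(estimate_low_contrast(train[0, :n_h]))
```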

  11. ATLAS Colouring Book

    CERN Multimedia

    Anthony, Katarina

    2016-01-01

    The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. It aims to introduce children to the field of High-Energy Physics, as well as the work being carried out by the ATLAS Collaboration.

  12. The Drosophila melanogaster PeptideAtlas facilitates the use of peptide data for improved fly proteomics and genome annotation

    Directory of Open Access Journals (Sweden)

    King Nichole L

    2009-02-01

    Background: Crucial foundations of any quantitative systems biology experiment are correct genome and proteome annotations. Protein databases compiled from high-quality empirical protein identifications that are in turn based on correct gene models increase the correctness, sensitivity, and quantitative accuracy of systems biology genome-scale experiments. Results: In this manuscript, we present the Drosophila melanogaster PeptideAtlas, a fly proteomics and genomics resource of unsurpassed depth. Based on peptide mass spectrometry data collected in our laboratory, the portal http://www.drosophila-peptideatlas.org allows querying fly protein data observed with respect to gene model confirmation and splice site verification, as well as for the identification of proteotypic peptides suited for targeted proteomics studies. Additionally, the database provides consensus mass spectra for observed peptides along with qualitative and quantitative information about the number of observations of a particular peptide and the sample(s) in which it was observed. Conclusion: PeptideAtlas is an open access database for the Drosophila community that has several features and applications that support (1) reduction of the complexity inherently associated with performing targeted proteomic studies, (2) designing and accelerating shotgun proteomics experiments, (3) confirming or questioning gene models, and (4) adjusting gene models such that they are in line with observed Drosophila peptides. While the database consists of proteomic data, it is not required that the user be a proteomics expert.

  13. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.

  14. The design and performance of the ATLAS jet trigger

    International Nuclear Information System (INIS)

    Shimizu, Shima

    2014-01-01

    The ATLAS jet trigger is an important element of the event selection process, providing data samples for studies of Standard Model physics and searches for new physics at the LHC. The ATLAS jet trigger system has undergone substantial modifications over the past few years of LHC operations, as experience developed with triggering in a high luminosity and high event pileup environment. In particular, the region-of-interest based strategy has been replaced by a full scan of the calorimeter data at the third trigger level, and by a full scan of the level-1 trigger input at level-2 for some specific trigger chains. Hadronic calibration and cleaning techniques are applied in order to provide improved performance and increased stability in high luminosity data taking conditions. In this note we discuss the implementation and operational aspects of the ATLAS jet trigger during 2011 and 2012 data taking periods at the LHC.

  15. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme that achieves computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection that trims the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement (sketched below). An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model relating the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but significantly reduced the computation time to one third (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit
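
    A compact sketch of the two-stage selection, assuming placeholder similarity functions in place of the paper's preliminary and refined relevance metrics: a cheap metric trims the collection to an augmented subset, and only that subset pays for full-fledged registration.

```python
def select_fusion_set(target, atlases, cheap_sim, full_sim,
                      augmented_size, fusion_size):
    # Stage 1: preliminary selection with a simple registration scheme;
    # augmented_size is sized so relevant atlases survive with high probability.
    ranked = sorted(atlases, key=lambda a: cheap_sim(target, a), reverse=True)
    augmented = ranked[:augmented_size]
    # Stage 2: refinement with full-fledged registration on the subset only.
    ranked = sorted(augmented, key=lambda a: full_sim(target, a), reverse=True)
    return ranked[:fusion_size]

# Toy usage: distance between integers stands in for image similarity.
fusion = select_fusion_set(
    17, list(range(30)),
    cheap_sim=lambda t, a: -abs(t - a) + 0.5 * (a % 3),   # noisy, cheap metric
    full_sim=lambda t, a: -abs(t - a),                    # accurate, costly metric
    augmented_size=10, fusion_size=3)
print(fusion)   # the three atlases closest to the target
```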

  16. ATLAS Cloud R&D

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Love, P; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  17. ATLAS MPGD production status

    CERN Document Server

    Schioppa, Marco; The ATLAS collaboration

    2018-01-01

    Micromegas (MICRO MEsh GAseous Structure) chambers are Micro-Pattern Gaseous Detectors designed to provide high spatial resolution and reasonably good time resolution in highly irradiated environments. In 2007 an ambitious long-term R&D activity was started in the context of the ATLAS experiment at CERN: the Muon ATLAS Micromegas Activity (MAMMA). After years of tests on prototypes and technology breakthroughs, Micromegas chambers were chosen as tracking detectors for an upgrade of the ATLAS Muon Spectrometer. These novel detectors will be installed in 2020 at the end of the second long shutdown of the Large Hadron Collider, and will serve mainly as precision detectors in the innermost part of the forward ATLAS Muon Spectrometer. Four different types of Micromegas modules, eight layers each, up to $3 m^2$ in area (of unprecedented size), will cover a surface of $150 m^2$ for a total active area of about $1200 m^2$. With this upgrade the ATLAS muon system will maintain the full acceptance of its excellent...

  18. ATLAS' major cooling project

    CERN Multimedia

    2005-01-01

    In 2005, a considerable effort was put into commissioning the various units of ATLAS' complex cryogenic system, in preparation for the imminent cooling of some of the largest components of the detector in their final underground configuration. The liquid helium and nitrogen ATLAS refrigerators in USA15. Cryogenics plays a vital role in operating massive detectors such as ATLAS. In many ways the liquefied argon, nitrogen and helium are the life-blood of the detector. ATLAS could not function without cryogens that will be constantly pumped via proximity systems to the superconducting magnets and subdetectors. In recent weeks, compressors at the surface and underground refrigerators, dewars, pumps, linkages and all manner of other components related to the cryogenic system have been tested and commissioned. Fifty metres underground, the helium and nitrogen refrigerators, installed inside the service cavern, are an important part of the ATLAS cryogenic system. Two independent helium refrigerators ...

  19. ATLAS: Exceeding all expectations

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    “One year ago it would have been impossible for us to guess that the machine and the experiments could achieve so much so quickly”, says Fabiola Gianotti, ATLAS spokesperson. The whole chain – from collision to data analysis – has worked remarkably well in ATLAS.   The first LHC proton run undoubtedly exceeded expectations for the ATLAS experiment. “ATLAS has worked very well since the beginning. Its overall data-taking efficiency is greater than 90%”, says Fabiola Gianotti. “The quality and maturity of the reconstruction and simulation software turned out to be better than we expected for this initial stage of the experiment. The Grid is a great success, and right from the beginning it has allowed members of the collaboration all over the world to participate in the data analysis in an effective and timely manner, and to deliver physics results very quickly”. In just a few months of data taking, ATLAS has observed t...

  20. SU-E-J-132: Automated Segmentation with Post-Registration Atlas Selection Based On Mutual Information

    International Nuclear Information System (INIS)

    Ren, X; Gao, H; Sharp, G

    2015-01-01

    Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which the other images are registered to each chosen image and DC is computed between the registered contour and ground truth. Meanwhile, six strategies, including MI, are selected to measure the image similarity, with MI found to be the best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) the affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of the three deformed atlas images with the highest MI values to form the segmented contour. Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of the three deformed atlas images with the highest MI values provided the highest DC among the four proposed strategies. Conclusion: MI has the highest correlation with DC, and is therefore an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)

  1. SU-E-J-132: Automated Segmentation with Post-Registration Atlas Selection Based On Mutual Information

    Energy Technology Data Exchange (ETDEWEB)

    Ren, X; Gao, H [Shanghai Jiao Tong University, Shanghai, Shanghai (China); Sharp, G [Massachusetts General Hospital, Boston, MA (United States)

    2015-06-15

    Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which the other images are registered to each chosen image and DC is computed between the registered contour and ground truth. Meanwhile, six strategies, including MI, are selected to measure the image similarity, with MI found to be the best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) the affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of the three deformed atlas images with the highest MI values to form the segmented contour (steps (c) and (d) are sketched below). Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of the three deformed atlas images with the highest MI values provided the highest DC among the four proposed strategies. Conclusion: MI has the highest correlation with DC, and is therefore an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
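
    A small sketch of steps (c) and (d), assuming histogram-based mutual information and toy 2D arrays in place of registered 3D volumes; the fusion rule shown is a simple MI-weighted average followed by a threshold, not the authors' exact implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI from the joint intensity histogram of two images."""
    pxy, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def fuse_top3(target, deformed_atlases, labels):
    scores = np.array([mutual_information(target, a) for a in deformed_atlases])
    top = np.argsort(scores)[-3:]              # three highest-MI atlases
    w = scores[top] / scores[top].sum()        # MI-weighted fusion
    fused = sum(wi * labels[i] for wi, i in zip(w, top))
    return (fused > 0.5).astype(np.uint8)

rng = np.random.default_rng(1)
target = rng.random((64, 64))
atlases = [target + 0.1 * k * rng.random((64, 64)) for k in range(1, 6)]
labels = [(a > 0.5).astype(float) for a in atlases]
print(fuse_top3(target, atlases, labels).sum())
```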

  2. Danish heat atlas as a support tool for energy system models

    International Nuclear Information System (INIS)

    Petrovic, Stefan N.; Karlsson, Kenneth B.

    2014-01-01

    Highlights: • A GIS method for calculating the costs of district heating expansion is presented. • A high socio-economic potential for district heating is identified within urban areas. • A method for coupling a heat atlas and the TIMES optimization model is proposed. • The presented methods can be used for any geographical region worldwide. - Abstract: In the four decades following the global oil crisis in 1973, Denmark has implemented remarkable changes in its energy sector, mainly due to energy conservation measures on the demand side and energy efficiency improvements on the supply side. Nowadays, capital-intensive infrastructure investments, such as the expansion of district heating networks and the introduction of significant heat saving measures, require a highly detailed decision-support tool. The Danish heat atlas provides a highly detailed database with extensive information about more than 2.5 million buildings in Denmark. Energy system analysis tools incorporate environmental, economic, energy and engineering analysis of future energy systems and are considered crucial for the quantitative assessment of transitional scenarios towards future milestones, such as the EU 2020 goals and Denmark's goal of achieving a fossil-free society after 2050. The present paper shows how the Danish heat atlas can be used to provide inputs to energy system models, especially in relation to the analysis of heat saving measures within the building stock and the expansion of district heating networks. As a result, marginal cost curves are created, approximated and prepared for use in an optimization energy system model (the construction of such a curve is sketched below). Moreover, it is concluded that a heat atlas can contribute as a tool for data storage and visualisation of results
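
    A toy version of the marginal-cost-curve construction mentioned above: building-level records are sorted by unit cost and accumulated into a step curve that an optimization model such as TIMES can take as input. The records and numbers are invented.

```python
buildings = [
    {"id": "B1", "heat_MWh": 120.0, "cost_per_MWh": 35.0},
    {"id": "B2", "heat_MWh": 300.0, "cost_per_MWh": 22.0},
    {"id": "B3", "heat_MWh": 80.0,  "cost_per_MWh": 61.0},
    {"id": "B4", "heat_MWh": 210.0, "cost_per_MWh": 28.0},
]

def marginal_cost_curve(records):
    """Return (cumulative MWh, marginal cost) points, cheapest measure first."""
    curve, total = [], 0.0
    for r in sorted(records, key=lambda r: r["cost_per_MWh"]):
        total += r["heat_MWh"]
        curve.append((total, r["cost_per_MWh"]))
    return curve

for mwh, cost in marginal_cost_curve(buildings):
    print(f"up to {mwh:7.1f} MWh at {cost:5.1f} EUR/MWh")
```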

  3. Future ATLAS Higgs Studies

    CERN Document Server

    Smart, Ben; The ATLAS collaboration

    2017-01-01

    The High-Luminosity LHC will prove a challenging environment to work in, with, for example, an average pile-up of $\langle\mu\rangle = 200$ expected. It will however also provide great opportunities for advancing studies of the Higgs boson. The ATLAS detector will be upgraded, and Higgs prospects analyses have been performed to assess the reach of ATLAS Higgs studies in the HL-LHC era. These analyses are presented, as are Run-2 ATLAS di-Higgs analyses for comparison.

  4. Baby brain atlases.

    Science.gov (United States)

    Oishi, Kenichi; Chang, Linda; Huang, Hao

    2018-04-03

    The baby brain is constantly changing due to its active neurodevelopment, and research into the baby brain is one of the frontiers in neuroscience. To help guide neuroscientists and clinicians in their investigation of this frontier, maps of the baby brain, which contain a priori knowledge about neurodevelopment and anatomy, are essential. "Brain atlas" in this review refers to a 3D-brain image with a set of reference labels, such as a parcellation map, as the anatomical reference that guides the mapping of the brain. Recent advancements in scanners, sequences, and motion control methodologies enable the creation of various types of high-resolution baby brain atlases. What is becoming clear is that one atlas is not sufficient to characterize the existing knowledge about the anatomical variations, disease-related anatomical alterations, and the variations in time-dependent changes. In this review, the types and roles of the human baby brain MRI atlases that are currently available are described and discussed, and future directions in the field of developmental neuroscience and its clinical applications are proposed. The potential use of disease-based atlases to characterize clinically relevant information, such as clinical labels, in addition to conventional anatomical labels, is also discussed. Copyright © 2018. Published by Elsevier Inc.

  5. ATLAS Cloud R&D

    Science.gov (United States)

    Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration

    2014-06-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  6. The Irish Wind Atlas

    Energy Technology Data Exchange (ETDEWEB)

    Watson, R [Univ. College Dublin, Dept. of Electronic and Electrical Engineering, Dublin (Ireland); Landberg, L [Risoe National Lab., Meteorology and Wind Energy Dept., Roskilde (Denmark)

    1999-03-01

    The development work on the Irish Wind Atlas is nearing completion. The Irish Wind Atlas is an updated, improved version of the Irish section of the European Wind Atlas. A map of the Irish wind resource based on a WAsP analysis of the measured data and station descriptions of 27 measuring stations is presented. The results of previously presented WAsP/KAMM runs show good agreement with these results. (au)

  7. O Livro de Colorir da Experiência ATLAS - ATLAS Experiment Colouring Book in Portuguese

    CERN Multimedia

    Anthony, Katarina

    2017-01-01

    Language: Portuguese - The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. It aims to introduce children to the field of High-Energy Physics, as well as the work being carried out by the ATLAS Collaboration. Língua: Português - O Livro de Colorir da Experiência ATLAS é um livro educacional gratuito para descarregar, ideal para crianças dos 5 aos 9 anos de idade. Este livro procura introduzir as crianças ao estudo da Física de Alta-Energia, bem como ao trabalho desenvolvido pela Colaboração ATLAS.

  8. Maľovanka Experiment ATLAS - ATLAS Experiment Colouring Book in Slovak

    CERN Multimedia

    Anthony, Katarina

    2017-01-01

    Language: Slovak - The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. It aims to introduce children to the field of High-Energy Physics, as well as the work being carried out by the ATLAS Collaboration.

  9. ATLAS Deneyi Boyama Kitabı - ATLAS Experiment Colouring Book in Turkish

    CERN Multimedia

    Anthony, Katarina

    2018-01-01

    Language: Turkish - The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. It aims to introduce children to the field of High-Energy Physics, as well as the work being carried out by the ATLAS Collaboration.

  10. AGIS: The ATLAS Grid Information System

    Science.gov (United States)

    Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration

    2014-06-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  11. The Cerefy Neuroradiology Atlas: a Talairach-Tournoux atlas-based tool for analysis of neuroimages available over the internet.

    Science.gov (United States)

    Nowinski, Wieslaw L; Belov, Dmitry

    2003-09-01

    The article introduces an atlas-assisted method and a tool called the Cerefy Neuroradiology Atlas (CNA), available over the Internet for neuroradiology and human brain mapping. The CNA contains an enhanced, extended, and fully segmented and labeled electronic version of the Talairach-Tournoux brain atlas, including parcellated gyri and Brodmann's areas. To the best of our knowledge, this is the first online, publicly available application with the Talairach-Tournoux atlas. The process of atlas-assisted neuroimage analysis is done in five steps: image data loading, Talairach landmark setting, atlas normalization, image data exploration and analysis, and result saving. Neuroimage analysis is supported by a near-real-time, atlas-to-data warping based on the Talairach transformation. The CNA runs on multiple platforms; is able to process simultaneously multiple anatomical and functional data sets; and provides functions for rapid atlas-to-data registration, interactive structure labeling and annotating, and mensuration. It is also empowered with several unique features, including interactive atlas warping facilitating fine tuning of atlas-to-data fit, navigation on the triplanar formed by the image data and the atlas, multiple-images-in-one display with interactive atlas-anatomy-function blending, multiple label display, and saving of labeled and annotated image data. The CNA is useful for fast atlas-assisted analysis of neuroimage data sets. It increases accuracy and reduces time in localization analysis of activation regions; facilitates communication of the information on interpreted scans from the neuroradiologist to other clinicians and medical students; increases the neuroradiologist's confidence in terms of anatomy and spatial relationships; and serves as a user-friendly, public domain tool for neuroeducation. At present, more than 700 users from five continents have subscribed to the CNA.

  12. Preparing a new book on ATLAS

    CERN Multimedia

    Claudia Marcelloni de Oliveira

    A book about the ATLAS project and the ATLAS collaboration is going to be published and available for sale in mid 2008. The book is intended to be a symbol of appreciation for all the people from ATLAS institutes, triggering fond memories through photos, interviews, short commentaries and anecdotes about the daily life and milestones encountered while designing, constructing and completing ATLAS. We would like to give you the opportunity to collaborate with this project in two different ways: Firstly, please send us the best anecdotes related to ATLAS that you remember. To submit anecdotes, send an email to Claudia.Marcelloni@cern.ch. Secondly, you are invited to participate in our PHOTO COMPETITION. Please send the best photos you have of ATLAS attached with a description, the location, and date taken. The categories are: Milestones in the process of designing and building the detector, People at work and Important gatherings. To submit photos you should go to the CDS page and select ATLAS Photo Competi...

  13. The importance of having an appropriate relational data segmentation in ATLAS

    CERN Document Server

    Dimitrov, Gancho; The ATLAS collaboration

    2015-01-01

    In this paper we describe specific technical solutions put in place in various database applications of the ATLAS experiment at the LHC, where we make use of several partitioning techniques available in Oracle 11g. With the broadly used range partitioning and its option of automatic interval partitioning, we add our own logic in PL/SQL procedures and scheduler jobs to sustain sliding data windows in order to enforce various data retention policies (the idea is sketched below). We also make use of the new Oracle 11g reference partitioning in the Nightly Build System to achieve uniform data segmentation. However, the most challenging issue was to segment the data of the new ATLAS Distributed Data Management system (Rucio), which resulted in tens of thousands of list-type partitions and sub-partitions. Partition and sub-partition management, index strategy, statistics gathering and query execution plan stability are important factors when choosing an appropriate physical model for the application data management. The so-far accumulated knowledge and...
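
    A hedged sketch of the sliding-window retention idea, written against a generic DB-API connection; the my_partition_bounds helper view is hypothetical (Oracle's own dictionary stores partition bounds in a less convenient form), and the production logic lives in PL/SQL scheduler jobs rather than Python.

```python
import datetime

RETENTION_DAYS = 90

def enforce_retention(conn, table):
    """Drop partitions whose upper time bound fell out of the retention window."""
    cur = conn.cursor()
    # Hypothetical helper view mapping partition name -> upper time bound.
    cur.execute("SELECT partition_name, high_value_ts FROM my_partition_bounds "
                "WHERE table_name = :t", {"t": table})
    cutoff = datetime.datetime.now() - datetime.timedelta(days=RETENTION_DAYS)
    for partition, high_ts in cur.fetchall():
        if high_ts < cutoff:   # the whole partition is older than the window
            # Identifiers come from the data dictionary, not from user input.
            cur.execute(f"ALTER TABLE {table} DROP PARTITION {partition}")
    conn.commit()
```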

  14. ATLAS B-physics potential

    International Nuclear Information System (INIS)

    Smizanska, M.

    2001-01-01

    Studies since 1993 have demonstrated the ability of ATLAS to pursue a wide B physics program. This document presents the latest performance studies with special stress on lepton identification. B-decays containing several leptons in ATLAS statistically dominate the high-precision measurements. We present new results on physics simulations of CP violation measurements in the $B_s^0 \to J/\psi\,\phi$ decay and on a novel ATLAS programme on beauty production in central proton-proton collisions at the LHC

  15. A revised design and implementation of the ATLAS Log Service package

    Science.gov (United States)

    Murillo Garcia, Raul; Lehmann Miotto, Giovanna; ATLAS TDAQ Collaboration

    2011-12-01

    This paper presents a revised design and implementation of the Log Service for the ATLAS Trigger and Data Acquisition (TDAQ) framework at CERN. A previous version of this utility was rarely used, for various reasons herein explained. The lessons learned set the grounds and motivation for a new redesign. The Log Service consists of the Logger, the entity that collects logs and stores them in an Oracle database; a set of user utilities to access and maintain the database; and a Java-based tool, known as the Log Manager, which provides a compact and intuitive interface for browsing the log messages based on user-defined search criteria. The design of these software components is explained, including various optimization techniques deployed in order to handle the large volume of entries expected to be stored in the database. Finally, a performance study has been conducted to prove the validity and behavior of the Log Service.

  16. A revised design and implementation of the ATLAS Log Service package

    CERN Document Server

    Murillo García, R; The ATLAS collaboration

    2010-01-01

    This paper presents a revised design and implementation of the Log Service for the ATLAS Trigger and Data Acquisition (TDAQ) framework at CERN. A previous version of this utility was rarely used, for various reasons herein explained. The lessons learned set the grounds and motivation for a new redesign. The Log Service consists of the Logger, the entity that collects logs and stores them in an Oracle database; a set of user utilities to access and maintain the database; and a Java-based tool, known as the Log Manager, which provides a compact and intuitive interface for browsing the log messages based on user-defined search criteria. The design of these software components is explained, including various optimization techniques deployed in order to handle the large volume of entries expected to be stored in the database. Finally, a performance study has been conducted to prove the validity and behavior of the Log Service.
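
    A minimal sketch of the Logger and Log Manager roles, with sqlite3 standing in for the Oracle database; the schema and field names are illustrative, not the actual Log Service interfaces.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log_messages "
             "(ts REAL, application TEXT, severity TEXT, text TEXT)")

def log(application, severity, text):
    """Logger role: collect a message and store it in the database."""
    conn.execute("INSERT INTO log_messages VALUES (?, ?, ?, ?)",
                 (time.time(), application, severity, text))

def search(application=None, severity=None):
    """Log Manager role: browse messages by user-defined search criteria."""
    query, args = "SELECT * FROM log_messages WHERE 1=1", []
    if application:
        query += " AND application = ?"
        args.append(application)
    if severity:
        query += " AND severity = ?"
        args.append(severity)
    return conn.execute(query, args).fetchall()

log("HLTSV", "ERROR", "event assignment timeout")
log("ROS-1", "INFO", "buffer pool resized")
print(search(severity="ERROR"))
```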

  17. The 3rd ATLAS Domestic Standard Problem for Improvement of Safety Analysis Technology

    International Nuclear Information System (INIS)

    Choi, Ki-Yong; Kang, Kyoung-Ho; Park, Yusun; Kim, Jongrok; Bae, Byoung-Uhn; Choi, Nam-Hyun

    2014-01-01

    The third ATLAS DSP (domestic standard problem exercise) was launched at the end of 2012 in response to the strong need for a continuation of the ATLAS DSP. A guillotine break of a main steam line without LOOP at a zero-power condition was selected as the target scenario, and the exercise was successfully completed at the beginning of 2014. In the 3rd ATLAS DSP, comprehensive utilization of the integral effect test data was made by dividing the analysis into three topics: (1) scale-up, where extrapolation of the ATLAS IET data was investigated; (2) 3D analysis, where the improvement obtainable from 3D modeling was studied; and (3) 1D sensitivity analysis, where the key phenomena affecting the SLB simulation were identified and a best modeling guideline was established. Through such DSP exercises, it has been possible to effectively utilize the high-quality ATLAS experimental data to enhance thermal-hydraulic understanding and to validate the safety analysis codes. A strong human network and technical expertise sharing among the various nuclear experts are also important outcomes of this program

  18. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Mashinistov, Ruslan; Belyaev, Nikita; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors performance at high occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). TRT contributes significantly to the resolution for high-pT tracks in the ID providing excellent particle identification capabilities and electron-pion separation. ATLAS experiment is using Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualisation tools and more. WLCG...

  19. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Belyaev, Nikita; Mashinistov, Ruslan; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors performance at high occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). TRT contributes significantly to the resolution for high-pT tracks in the ID providing excellent particle identification capabilities and electron-pion separation. ATLAS experiment is using Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualization tools and more. WLCG ...

  20. Rate Predictions and Trigger/DAQ Resource Monitoring in ATLAS

    CERN Document Server

    Schaefer, D M; The ATLAS collaboration

    2012-01-01

    Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever-increasing rate. The ATLAS experiment successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) consists of a custom-designed hardware trigger which seeds two higher, software-based trigger levels. Over 300 triggers compose a trigger menu which selects physics signatures such as electrons, muons, particle jets, etc. Each trigger consumes computing resources of the ATLAS trigger system and offline storage. The LHC instantaneous luminosity conditions, the desired physics goals of the collaboration, and the limits of the trigger infrastructure determine the composition of the ATLAS trigger menu. We describe a trigger monitoring framework for computing the costs of individual trigger algorithms, such as data request rates and CPU consumption. This framework has been used...
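
    The cost bookkeeping described above reduces, in essence, to weighting each trigger's rate by its per-event resource use. The following sketch illustrates that idea only; the trigger names and numbers are hypothetical, and this is not the actual ATLAS monitoring framework:

        # Hypothetical per-trigger cost bookkeeping: cost = rate x per-event resource use.
        triggers = {
            # name: (rate in Hz, mean CPU per event in ms, mean data requested per event in kB)
            "e24_medium": (55.0, 120.0, 350.0),
            "mu20":       (40.0,  80.0, 200.0),
            "j100":       (25.0,  60.0, 500.0),
        }

        def trigger_costs(menu):
            """Return per-trigger CPU load (core-seconds/s) and bandwidth (MB/s) estimates."""
            costs = {}
            for name, (rate_hz, cpu_ms, data_kb) in menu.items():
                cpu_load = rate_hz * cpu_ms / 1000.0    # core-seconds per second
                bandwidth = rate_hz * data_kb / 1024.0  # MB per second
                costs[name] = (cpu_load, bandwidth)
            return costs

        for name, (cpu, bw) in trigger_costs(triggers).items():
            print(f"{name:12s} CPU {cpu:6.1f} core-s/s  I/O {bw:6.1f} MB/s")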

  1. Development and Test of the Cooling System for the ATLAS Hadron Tile Calorimeter

    CERN Document Server

    Schlager, Gerolf

    2002-01-01

    The ATLAS detector is a general-purpose experiment for proton-proton collisions designed to investigate the full range of physical processes at the Large Hadron Collider (LHC). The ATLAS Tile Hadron Calorimeter is designed to measure the energies of jets with a resolution of σ(E)/E = 50%/√E ⊕ 3% for |η| < 3. This thesis presents the detailed studies which were carried out with prototypes of the Tilecal cooling system during my year as technical student at CERN. The results will be used to validate and to determine the final design of the cooling system of the ATLAS Tile calorimeter. The performance of the cooling unit built for the calibration of Tilecal modules was evaluated for various parameters like temperature stability and safety conditions during operation. Additionally I contributed to the analysis of the calorimeter response for different cooling temperatures. These results determined the constraints on the operation conditions of the cooling system in terms of temperature stability that will be needed d...

  2. ATLAS Award for Shield Supplier

    CERN Multimedia

    2004-01-01

    ATLAS technical coordinator Dr. Marzio Nessi presents the ATLAS supplier award to Vojtech Novotny, Director General of Skoda Hute. On 3 November, the ATLAS experiment honoured one of its suppliers, Skoda Hute s.r.o., of Plzen, Czech Republic, for their work on the detector's forward shielding elements. These huge and very massive cylinders surround the beampipe at either end of the detector to block stray particles from interfering with ATLAS's muon chambers. For the shields, Skoda Hute produced 10 cast iron pieces with a total weight of 780 tonnes at a cost of 1.4 million CHF. Although there are many iron foundries in the CERN member states, only a limited number can produce castings of the necessary size: the large pieces range in weight from 59 to 89 tonnes and are up to 1.5 metres thick. The forward shielding was designed by ATLAS Technical Coordination in close collaboration with the ATLAS groups from the Czech Technical University and Charles University in Prague. The Czech groups a...

  3. ATLAS Facility Description Report

    International Nuclear Information System (INIS)

    Kang, Kyoung Ho; Moon, Sang Ki; Park, Hyun Sik; Cho, Seok; Choi, Ki Yong

    2009-04-01

    A thermal-hydraulic integral effect test facility, ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been constructed at KAERI (Korea Atomic Energy Research Institute). ATLAS has the same two-loop features as the APR1400 and is designed according to the well-known scaling method suggested by Ishii and Kataoka to simulate the various test scenarios as realistically as possible. It is a half-height and 1/288-volume scaled test facility with respect to the APR1400. The fluid system of ATLAS consists of a primary system, a secondary system, a safety injection system, a break simulating system, a containment simulating system, and auxiliary systems. The primary system includes a reactor vessel, two hot legs, four cold legs, a pressurizer, four reactor coolant pumps, and two steam generators. The secondary system is simplified to a circulating loop type. Most of the safety injection features of the APR1400 and the OPR1000 are incorporated into the safety injection system. About 1,300 instruments are installed in the test facility to precisely investigate the thermal-hydraulic behavior in simulations of the various test scenarios. This report describes the scaling methodology, the geometric data of the individual components, and the specification and location of the instrumentation in detail
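
    The Ishii-Kataoka scaling mentioned above fixes the remaining similarity ratios once the length and volume scales are chosen. As a back-of-envelope illustration (my own numbers derived from the stated 1/2-height and 1/288-volume scales, not taken from the report): the flow-area ratio follows as (1/288)/(1/2) = 1/144, while time and velocity scale with the square root of the height ratio.

        import math

        # Illustrative check of basic scaling ratios for a half-height, 1/288-volume
        # facility; the square-root time/velocity scaling is the Ishii-Kataoka rule.
        height_ratio = 1.0 / 2.0
        volume_ratio = 1.0 / 288.0

        area_ratio = volume_ratio / height_ratio      # flow area = volume / height -> 1/144
        time_ratio = math.sqrt(height_ratio)          # time scales as sqrt(length)
        velocity_ratio = math.sqrt(height_ratio)      # so does velocity
        flow_ratio = velocity_ratio * area_ratio      # volumetric flow = velocity x area

        print(f"area ratio  1/{1 / area_ratio:.0f}")  # 1/144
        print(f"time ratio  1/{1 / time_ratio:.2f}")  # 1/1.41
        print(f"flow ratio  1/{1 / flow_ratio:.0f}")  # ~1/204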

  4. TU-CD-BRA-05: Atlas Selection for Multi-Atlas-Based Image Segmentation Using Surrogate Modeling

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for a specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics and, based on such a model, to provide guidance for atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate's ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of the fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbation, and verified with several commonly used surrogates, including MSD, NCC and (N)MI. The derived behaviors of different surrogates in atlas selection and their corresponding performance in the ultimate label estimate were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC by 0.10, with first and third quartiles of (0.83, 0.89), compared to (0.81, 0.89). The derived optimal fusion set size, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize the image-based surrogate metric on atlas selection
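
    Stripped to essentials, the selection step ranks candidate atlases by an image-space surrogate (NCC here) and keeps a fixed-size fusion set. A minimal numpy sketch of that ranking, using random stand-in images; the helper names are mine, and this is not the authors' code:

        import numpy as np

        def ncc(a, b):
            """Normalized cross-correlation between two flattened images."""
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float((a * b).mean())

        def select_fusion_set(target, atlases, k=7):
            """Rank atlases by surrogate similarity to the target and keep the top k."""
            scores = [ncc(target.ravel(), atlas.ravel()) for atlas in atlases]
            order = np.argsort(scores)[::-1]  # most similar first
            return order[:k]

        rng = np.random.default_rng(0)
        target = rng.random((64, 64))
        atlases = [rng.random((64, 64)) for _ in range(20)]
        print(select_fusion_set(target, atlases, k=7))  # k=7 echoes the optimum quoted above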

  5. High-Performance Scalable Information Service for the ATLAS Experiment

    International Nuclear Information System (INIS)

    Kolos, S; Boutsioukis, G; Hauser, R

    2012-01-01

    The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information, used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm, which consists of about 1,500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information, which is constantly updated with update intervals varying from a second to a few tens of seconds. The IS provides access to any information item on request, as well as distributing notifications to all information subscribers; in the latter case IS subscribers receive information within a few milliseconds after it was updated. The IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate a subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information
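
    The request/notification pattern underlying the IS can be sketched generically: items are published into a store, read back on demand, and pushed to subscribers on update. The class and method names below are hypothetical illustrations, not the actual TDAQ IS API:

        from collections import defaultdict

        class InfoService:
            """Toy publish/subscribe information store (not the real IS API)."""

            def __init__(self):
                self._items = {}                       # current value of each item
                self._subscribers = defaultdict(list)  # item name -> callbacks

            def publish(self, name, value):
                """Update an item and notify all of its subscribers."""
                self._items[name] = value
                for callback in self._subscribers[name]:
                    callback(name, value)

            def get(self, name):
                """On-request access to any information item."""
                return self._items[name]

            def subscribe(self, name, callback):
                self._subscribers[name].append(callback)

        svc = InfoService()
        svc.subscribe("HLT/node042/rate", lambda n, v: print(f"update: {n} = {v}"))
        svc.publish("HLT/node042/rate", 1250.0)
        print(svc.get("HLT/node042/rate"))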

  6. Wind Atlas for South Africa (WASA). Report on Measurements

    DEFF Research Database (Denmark)

    Mabille, Eugéne; Prinsloo, Eric; Mortensen, Niels Gylling

    , to verify the results of the meso-scale modelling. The Measurements work package (WP2) is one of six work packages that collectively make up the Wind Atlas for South Africa (WASA) project. The measurements also provide observed wind climates at the measurement sites, which can be used by micrositing...... to be commissioned was WM06 (Sutherland) and this was completed on 17 September 2010. The outputs of WP2 are: i. Establish 10 high quality wind measurement stations providing three years of measurement data for calibration of the mesoscale modelling. ii. A database system for wind data collection and on-line Web...

  7. Taking ATLAS to new heights

    CERN Document Server

    Abha Eli Phoboo, ATLAS experiment

    2013-01-01

    Earlier this month, 51 members of the ATLAS collaboration trekked up to the highest peak in the Atlas Mountains, Mt. Toubkal (4,167m), in North Africa.    The physicists were in Marrakech, Morocco, attending the ATLAS Overview Week (7 - 11 October), which was held for the first time on the African continent. Around 300 members of the collaboration met to discuss the status of the LS1 upgrades and plans for the next run of the LHC. Besides the trek, 42 ATLAS members explored the Saharan sand dunes of Morocco on camels.  Photos courtesy of Patrick Jussel.

  8. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    Energy Technology Data Exchange (ETDEWEB)

    Vandelli, Wainer, E-mail: wainer.vandelli@cern.c

    2010-04-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to transport event data efficiently and with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Routing and streaming capabilities, as well as monitoring and data-accounting functionalities, are also fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for the evaluation of the performance of the ATLAS DataFlow in terms of functionality, robustness and stability. In addition, operating the system far from its design specifications helped in exercising its flexibility and contributed to understanding its limitations. Moreover, the integration with the detector and the interfacing with the off-line data processing and management were able to take advantage of this extended data-taking period as well. In this paper we report on the usage of the DataFlow infrastructure during ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  9. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    International Nuclear Information System (INIS)

    Vandelli, Wainer

    2010-01-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to transport event data efficiently and with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Routing and streaming capabilities, as well as monitoring and data-accounting functionalities, are also fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for the evaluation of the performance of the ATLAS DataFlow in terms of functionality, robustness and stability. In addition, operating the system far from its design specifications helped in exercising its flexibility and contributed to understanding its limitations. Moreover, the integration with the detector and the interfacing with the off-line data processing and management were able to take advantage of this extended data-taking period as well. In this paper we report on the usage of the DataFlow infrastructure during ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  10. Brief retrospection on Hungarian school atlases

    Science.gov (United States)

    Klinghammer, István; Jesús Reyes Nuñez, José

    2018-05-01

    The first part of this article is dedicated to the history of Hungarian school atlases up to the end of the First World War. Although the first maps included in a Hungarian textbook were probably made in 1751, the publication of atlases for schools dates almost 50 years later, when Professor Ézsáiás Budai created his "New School Atlas for elementary pupils" in 1800. This was followed by a long period of 90 years during which school atlases were mostly translations and adaptations of foreign atlases, the majority of them made in German-speaking countries. From those years, a school atlas made by the Hungarian astronomer Antal Vállas should be highlighted as a prominent independent piece of work. In 1890, a talented cartographer, Manó Kogutowicz, founded the Hungarian Geographical Institute, which became the institution responsible for producing school atlases for the different types of schools in Hungary. The professional quality of the school atlases published by his institute was also recognized beyond the Hungarian borders by prizes won at international exhibitions. Kogutowicz laid the foundations of current Hungarian school cartography: this statement is supported in the second part of this article, where three of his school atlases are presented in more detail to give examples of how pupils were introduced to basic cartographic and astronomic concepts and how different innovative solutions were used on the maps.

  11. Multi-atlas attenuation correction supports full quantification of static and dynamic brain PET data in PET-MR

    Science.gov (United States)

    Mérida, Inés; Reilhac, Anthonin; Redouté, Jérôme; Heckemann, Rolf A.; Costes, Nicolas; Hammers, Alexander

    2017-04-01

    In simultaneous PET-MR, attenuation maps are not directly available. Essential for absolute radioactivity quantification, they need to be derived from MR or PET data to correct for gamma photon attenuation by the imaged object. We evaluate a multi-atlas attenuation correction method for brain imaging (MaxProb) on static [18F]FDG PET and, for the first time, on dynamic PET, using the serotoninergic tracer [18F]MPPF. A database of 40 MR/CT image pairs (atlases) was used. The MaxProb method synthesises subject-specific pseudo-CTs by registering each atlas to the target subject space. Atlas CT intensities are then fused via label propagation and majority voting. Here, we compared these pseudo-CTs with the real CTs in a leave-one-out design, contrasting the MaxProb approach with a simplified single-atlas method (SingleAtlas). We evaluated the impact of pseudo-CT accuracy on reconstructed PET images, compared to PET data reconstructed with real CT, at the regional and voxel levels for the following: radioactivity images; time-activity curves; and kinetic parameters (non-displaceable binding potential, BPND). On static [18F]FDG, the mean bias for MaxProb ranged between 0 and 1% for 73 out of 84 regions assessed, and exceptionally peaked at 2.5% for only one region. Statistical parametric map analysis of MaxProb-corrected PET data showed significant differences in less than 0.02% of the brain volume, whereas SingleAtlas-corrected data showed significant differences in 20% of the brain volume. On dynamic [18F]MPPF, most regional errors on BPND ranged from -1 to  +3% (maximum bias 5%) for the MaxProb method. With SingleAtlas, errors were larger and had higher variability in most regions. PET quantification bias increased over the duration of the dynamic scan for SingleAtlas, but not for MaxProb. We show that this effect is due to the interaction of the spatial tracer-distribution heterogeneity variation over time with the degree of accuracy of the attenuation maps. This
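
    At its core, the label-propagation-and-voting step of such multi-atlas methods assigns each voxel the value most atlases agree on after registration. A minimal numpy sketch of voxel-wise majority voting (illustrative only, not the MaxProb implementation, which fuses CT intensities rather than toy labels):

        import numpy as np

        def majority_vote(label_maps):
            """Fuse registered atlas label maps voxel-wise by majority vote.

            label_maps: integer array of shape (n_atlases, *image_shape).
            """
            stack = np.asarray(label_maps)
            n_labels = int(stack.max()) + 1
            # Count the votes for each label at every voxel, then take the winner.
            votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
            return votes.argmax(axis=0)

        rng = np.random.default_rng(1)
        maps = rng.integers(0, 3, size=(40, 8, 8))  # 40 atlases, tiny 8x8 image, labels 0-2
        print(majority_vote(maps))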

  12. A high-resolution anatomical atlas of the transcriptome in the mouse embryo.

    Directory of Open Access Journals (Sweden)

    Graciana Diez-Roux

    Ascertaining when and where genes are expressed is of crucial importance to understanding or predicting the physiological role of genes and proteins and how they interact to form the complex networks that underlie organ development and function. It is, therefore, crucial to determine on a genome-wide level the spatio-temporal gene expression profiles at cellular resolution. This information is provided by colorimetric RNA in situ hybridization, which can elucidate expression of genes in their native context and does so at cellular resolution. We generated what is to our knowledge the first genome-wide transcriptome atlas by RNA in situ hybridization of an entire mammalian organism, the developing mouse at embryonic day 14.5. This digital transcriptome atlas, the Eurexpress atlas (http://www.eurexpress.org), consists of a searchable database of annotated images that can be interactively viewed. We generated anatomy-based expression profiles for over 18,000 coding genes and over 400 microRNAs. We identified 1,002 tissue-specific genes that are a source of novel tissue-specific markers for 37 different anatomical structures. The quality and the resolution of the data revealed novel molecular domains for several developing structures, such as the telencephalon, a novel organization for the hypothalamus, and insight into the Wnt network involved in renal epithelial differentiation during kidney development. The digital transcriptome atlas is a powerful resource to determine co-expression of genes, to identify cell populations and lineages, and to identify functional associations between genes relevant to development and disease.

  13. The ShakeMap Atlas for the City of Naples, Italy

    Science.gov (United States)

    Pierdominici, Simona; Faenza, Licia; Camassi, Romano; Michelini, Alberto; Ercolani, Emanuela; Lauciani, Valentino

    2016-04-01

    Naples is one of the most vulnerable cities in the world because it is threatened by several natural and man-made hazards: earthquakes, volcanic eruptions, tsunamis, landslides, hydrogeological disasters, and morphologic alterations due to human interference. In addition, the risk is increased by the high density of population (Naples and the surrounding area are among the most populated in Italy) and by the type and condition of buildings and monuments. In light of this, it is crucial to assess the ground shaking suffered by the city. We integrate information from five Italian databases and catalogues (DBMI11, CPTI11, CAMAL11, MOLAL08, ITACA) to build a reliable ShakeMap atlas for the area and to recreate the seismic history of the city from historical to recent times (1293 to 1999). This large amount of data gives the opportunity to explore several sources of information, expanding the completeness of our data set in both time and magnitude. 84 earthquakes have been analyzed, and for each event a ShakeMap set has been computed using an ad hoc implementation developed for this application: (1) specific ground-motion prediction equations (GMPEs) accounting for the different attenuation properties of volcanic areas compared with tectonic ones, and (2) a detailed local microzonation to include site effects. The ShakeMap atlas has two main applications: a) it is an important instrument in seismic risk management. It quantifies the level of shaking suffered by the city during its history, and it could be extended to quantify the number of people exposed to certain degrees of shaking. Intensity data provide an evaluation of the damage caused by earthquakes; the damage is closely linked with the ground shaking, building type, and vulnerability, and it is not possible to separate these contributions; b) the Atlas can be used as a starting point for Bayesian estimation of seismic hazard. This technique allows for the merging

  14. ATLAS Maintenance and Operation management system

    CERN Document Server

    Copy, B

    2007-01-01

    The maintenance and operation of the ATLAS detector will involve thousands of contributors from 170 physics institutes. Planning and coordinating the actions of ATLAS members, ensuring their expertise is properly leveraged and that no parts of the detector are understaffed or overstaffed, will be a challenging task. The ATLAS Maintenance and Operation application (referred to as Operation Task Planner inside the ATLAS experiment) offers a fluent web-based interface that combines the flexibility and comfort of a desktop application and intuitive data visualization and navigation techniques with a lightweight service-oriented architecture. We review the application, its usage within the ATLAS experiment, and its underlying design and implementation.

  15. Taus at ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Demers, Sarah M. [Yale Univ., New Haven, CT (United States). Dept. of Physics

    2017-12-06

    The grant "Taus at ATLAS" supported the group of Sarah Demers at Yale University over a period of 8.5 months, bridging the time between her Early Career Award and her inclusion on Yale's grant cycle within the Department of Energy's Office of Science. The work supported the functioning of the ATLAS Experiment at CERN's Large Hadron Collider and the analysis of ATLAS data. The work included searching for the Higgs Boson in a particular mode of its production (with a W or Z boson) and decay (to a pair of tau leptons.) This was part of a broad program of characterizing the Higgs boson as we try to understand this recently discovered particle, and whether or not it matches our expectations within the current standard model of particle physics. In addition, group members worked with simulation to understand the physics reach of planned upgrades to the ATLAS experiment. Supported group members include postdoctoral researcher Lotte Thomsen and graduate student Mariel Pettee.

  16. Soft QCD at CMS and ATLAS

    CERN Document Server

    Starovoitov, Pavel; The ATLAS collaboration

    2018-01-01

    A short overview of recent soft QCD results from the ATLAS and CMS collaborations is presented. The inelastic cross-section measurement by CMS at 13 TeV is summarised. The contribution of diffractive processes to the very forward photon spectra studied by ATLAS and LHCf is discussed. The ATLAS measurement of the exclusive two-photon production of muon pairs is presented and compared to previous ATLAS and CMS results.

  17. AGIS: The ATLAS Grid Information System

    OpenAIRE

    Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configurat...

  18. ATLAS B-physics potential

    CERN Document Server

    Smizanska, M

    2001-01-01

    Studies since 1993 have demonstrated the ability of ATLAS to pursue a wide B-physics program. This document presents the latest performance studies, with special stress on lepton identification. B-decays containing several leptons statistically dominate the high-precision measurements in ATLAS. We present new results on physics simulations of CP violation measurements in the B_s^0 → J/ψφ decay and on a novel ATLAS programme on beauty production in central proton-proton collisions at the LHC. (7 refs).

  19. The next generation of the ATLAS PanDA Monitoring System

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Klimentov, A; Love, P; Potekhin, M; Wenaus, T

    2014-01-01

    For many years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking, PanDA usage has ramped up steadily, with up to 1M completed jobs/day in 2013. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. Outside of ATLAS, the PanDA system is also being used in projects like AMS, LSST and a few others. It is currently undergoing a significant redesign, both of the core server components responsible for workload management, brokerage and data access, and of the monitoring part, which is critically important for efficient execution of the workflow in a way that is transparent to the user and also provides an effective set of tools for operational support. The new generation of the PanDA Monitoring Service is designed based on a proven, scalable, industry-standard Web Fr...

  20. The NeuARt II system: a viewing tool for neuroanatomical data based on published neuroanatomical atlases

    Directory of Open Access Journals (Sweden)

    Cheng Wei-Cheng

    2006-12-01

    Background: Anatomical studies of neural circuitry describing the basic wiring diagram of the brain produce intrinsically spatial, highly complex data of great value to the neuroscience community. Published neuroanatomical atlases provide a spatial framework for these studies. We have built an informatics framework based on these atlases for the representation of neuroanatomical knowledge. This framework not only captures current methods of anatomical data acquisition and analysis, it allows these studies to be collated, compared and synthesized within a single system. Results: We have developed an atlas-viewing application ('NeuARt II') in the Java language with unique functional properties. These include the ability to use copyrighted atlases as templates within which users may view, save and retrieve data-maps and annotate them with volumetric delineations. NeuARt II also permits users to view multiple levels on multiple atlases at once. Each data-map in this system is simply a stack of vector images with one image per atlas level, so any set of accurate drawings made onto a supported atlas (in vector graphics format) could be uploaded into NeuARt II. Presently the database is populated with a corpus of high-quality neuroanatomical data from the laboratory of Dr Larry Swanson (consisting of 64 highly detailed maps of PHAL tract-tracing experiments, made up of 1039 separate drawings that were published in 27 primary research publications over 17 years). Herein we take selected examples from these data to demonstrate the features of NeuARt II. Our informatics tool permits users to browse, query and compare these maps. The NeuARt II tool operates within a bioinformatics knowledge management platform (called 'NeuroScholar'), either as a standalone or a plug-in application. Conclusion: Anatomical localization is fundamental to neuroscientific work and atlases provide an easily-understood framework that is widely used by neuroanatomists and non

  1. ATLAS. LHC experiments

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    In Greek mythology, Atlas was a Titan who had to hold up the heavens with his hands as a punishment for having taken part in a revolt against the Olympians. For the LHC, the ATLAS detector will also have an onerous physics burden to bear, but this is seen as a golden opportunity rather than a punishment. The major physics goal of CERN's LHC proton-proton collider is the quest for the long-awaited 'Higgs' mechanism which drives the spontaneous symmetry breaking of the electroweak Standard Model picture. The large ATLAS collaboration proposes a large general-purpose detector to exploit the full discovery potential of LHC's proton collisions. LHC will provide proton-proton collision luminosities at the awe-inspiring level of 10^34 cm^-2 s^-1, with initial running at 10^33. The ATLAS philosophy is to handle as many signatures as possible at all luminosity levels, with the initial running providing more complex possibilities. The ATLAS concept was first presented as a Letter of Intent to the LHC Committee in November 1992. Following initial presentations at the Evian meeting ('Towards the LHC Experimental Programme') in March of that year, two ideas for general-purpose detectors, the ASCOT and EAGLE schemes, merged, with Friedrich Dydak (MPI Munich) and Peter Jenni (CERN) as ATLAS co-spokesmen. Since the initial Letter of Intent presentation, the ATLAS design has been optimized and developed, guided by physics performance studies and the LHC-oriented detector R&D programme (April/May, page 3). The overall detector concept is characterized by an inner superconducting solenoid (for inner tracking) and large superconducting air-core toroids outside the calorimetry. This solution avoids constraining the calorimetry while providing a high-resolution, large-acceptance and robust detector. The outer magnet will extend over a length of 26 metres, with an outer diameter of almost 20 metres. The total weight of the detector is 7,000 tonnes. Fitted with its end

  2. Low-complexity atlas-based prostate segmentation by combining global, regional, and local metrics

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Qiuliang; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California Los Angeles, California 90095 (United States)

    2014-04-15

    Purpose: To improve the efficiency of atlas-based segmentation without compromising accuracy, and to demonstrate the validity of the proposed method on an MRI-based prostate segmentation application. Methods: Accurate and efficient automatic structure segmentation is an important task in medical image processing. Atlas-based methods, as the state of the art, provide good segmentation at the cost of a large number of computationally intensive nonrigid registrations for anatomical sites/structures that are subject to deformation. In this study, the authors propose to utilize a combination of global, regional, and local metrics to improve accuracy yet significantly reduce the number of required nonrigid registrations. The authors first perform an affine registration to minimize the global mean squared error (gMSE) to coarsely align each atlas image to the target. Subsequently, a target-specific regional MSE (rMSE), demonstrated to be a good surrogate for the dice similarity coefficient (DSC), is used to select a relevant subset from the training atlas. Only within this subset are nonrigid registrations performed between the training images and the target image, to minimize a weighted combination of gMSE and rMSE. Finally, structure labels are propagated from the selected training samples to the target via the estimated deformation fields, and label fusion is performed based on a weighted combination of rMSE and local MSE (lMSE) discrepancy, with proper total-variation-based spatial regularization. Results: The proposed method was applied to a public database of 30 prostate MR images with expert-segmented structures. The authors' method, utilizing only eight nonrigid registrations, achieved a performance with a median/mean DSC of over 0.87/0.86, outperforming the state-of-the-art full-fledged atlas-based segmentation approach, whose median/mean DSC was 0.84/0.82 when applied to the same data set. Conclusions: The proposed method requires a fixed number of nonrigid
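
    The selection logic above reduces to ranking affinely aligned atlases by a regional MSE around the structure of interest and sending only the best few through costly nonrigid registration. A hedged numpy sketch of that pruning step (the helper names are mine, not the authors' code):

        import numpy as np

        def regional_mse(a, b, mask):
            """MSE restricted to a region of interest (the rMSE surrogate)."""
            diff = (a - b)[mask]
            return float((diff ** 2).mean())

        def select_atlas_subset(target, atlases, roi_mask, n_keep=8):
            """Keep the n_keep atlases with the lowest regional MSE.

            Assumes the atlases are already affinely aligned to the target (the
            gMSE step); only these survivors get nonrigid registration.
            """
            scores = [regional_mse(target, atlas, roi_mask) for atlas in atlases]
            return np.argsort(scores)[:n_keep]

        rng = np.random.default_rng(2)
        target = rng.random((32, 32))
        atlases = [rng.random((32, 32)) for _ in range(30)]
        roi = np.zeros((32, 32), dtype=bool)
        roi[10:22, 10:22] = True  # stand-in region around the structure of interest
        print(select_atlas_subset(target, atlases, roi, n_keep=8))  # 8 echoes the paper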

  3. The ATLAS DDM Tracer monitoring framework

    International Nuclear Information System (INIS)

    Zang Dongsong; Garonne, Vincent; Barisits, Martin; Lassnig, Mario; Andrew Stewart, Graeme; Molfetas, Angelos; Beermann, Thomas

    2012-01-01

    The DDM Tracer monitoring framework aims to trace and monitor ATLAS file operations on the Worldwide LHC Computing Grid. The volume of traces has increased significantly since the framework was put in production in 2009. There are now about 5 million trace messages every day, with peaks near 250 Hz and peak rates continuing to climb, which poses a serious challenge to the current structure. Analysis of large datasets based on on-demand queries to the relational database management system (RDBMS), i.e. Oracle, can be problematic and can have a significant effect on the database's performance. Consequently, we have investigated new high-availability technologies such as messaging infrastructure, specifically ActiveMQ, and key-value stores. The advantages of key-value store technology are that it is distributed and highly scalable, and its write performance is usually much better than that of an RDBMS, all of which is very useful for the Tracer monitoring framework. Indexes and distributed counters have also been tested to improve query performance, and they provide almost real-time results. In this paper, the design principles, architecture and main characteristics of the Tracer monitoring framework are described and examples of its usage are presented.
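
    The distributed-counter idea amounts to bucketing trace messages by key (for example site, operation and time window) and incrementing counters at write time, so that rate queries never touch the raw rows. A toy in-memory sketch of that aggregation; a real deployment would back this with a distributed key-value store, and the key layout here is hypothetical:

        from collections import Counter
        from datetime import datetime, timezone

        counters = Counter()  # in-memory stand-in for a distributed counter table

        def record_trace(site, operation, when):
            """Increment a per-minute counter instead of storing a queryable raw row."""
            bucket = when.strftime("%Y-%m-%dT%H:%M")
            counters[(site, operation, bucket)] += 1

        now = datetime(2012, 3, 1, 12, 0, tzinfo=timezone.utc)
        for _ in range(3):
            record_trace("CERN-PROD", "download", now)
        record_trace("BNL-ATLAS", "upload", now)

        # Near-real-time query: downloads at CERN-PROD during that minute.
        print(counters[("CERN-PROD", "download", "2012-03-01T12:00")])  # -> 3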

  4. ATLAS Grid Workflow Performance Optimization

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration

    2018-01-01

    The CERN ATLAS experiment grid workflow system routinely manages 250,000 to 500,000 concurrently running production and analysis jobs to process simulation and detector data. In total, more than 300 PB of data is distributed over more than 150 sites in the WLCG. At this scale, small improvements in software and computing performance and in workflows can lead to significant resource usage gains. ATLAS is reviewing, together with CERN IT experts, several typical simulation and data processing workloads for potential performance improvements in terms of memory and CPU usage and disk and network I/O. All ATLAS production and analysis grid jobs are instrumented to collect many performance metrics for detailed statistical studies using modern data analytics tools like ElasticSearch and Kibana. This presentation reviews and explains the performance gains of several ATLAS simulation and data processing workflows and presents analytics studies of the ATLAS grid workflows.

  5. EnviroAtlas - Ecosystem Service Market and Project Locations, U.S., 2015, Forest Trends' Ecosystem Marketplace

    Science.gov (United States)

    This EnviroAtlas dataset contains points depicting the location of market-based programs, referred to herein as markets, and projects addressing ecosystem services protection in the United States. The data were collected via surveys and desk research conducted by Forest Trends' Ecosystem Marketplace from 2008 to 2016 on biodiversity (i.e., imperiled species/habitats; wetlands and streams), carbon, and water markets. Additional biodiversity data were obtained from the Regulatory In-lieu Fee and Bank Information Tracking System (RIBITS) database in 2015. Points represent the centroids (i.e., center points) of market coverage areas, project footprints, or project primary impact areas in which ecosystem service markets or projects operate. National-level markets are an exception to this norm with points representing administrative headquarters locations. Attribute data include information regarding the methodology, design, and development of biodiversity, carbon, and water markets and projects. This dataset was produced by Forest Trends' Ecosystem Marketplace for EnviroAtlas in order to support public access to and use of information related to environmental markets. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) o

  6. Increasing Drought Sensitivity and Decline of Atlas Cedar (Cedrus atlantica) in the Moroccan Middle Atlas Forests

    Directory of Open Access Journals (Sweden)

    Jesús Julio Camarero

    2011-09-01

    An understanding of the effects of climate change and forest structure on tree growth is needed for decision making in forest conservation and management. In this paper, we investigated the relative contributions of tree features and stand structure to Atlas cedar (Cedrus atlantica) radial growth in forests that have experienced heavy grazing and logging in the past. Dendrochronological methods were applied to quantify patterns in basal-area increment and drought sensitivity of Atlas cedar in the Middle Atlas, northern Morocco. We estimated the tree-to-tree competition intensity and quantified the structure in Atlas cedar stands with contrasting tree density, age, and decline symptoms. The relative contributions of tree age and size and stand structure to Atlas cedar growth decline were estimated by variance partitioning using partial-redundancy analyses. Recurrent drought events and temperature increases have been identified in local climate records since the 1970s. We detected consistent growth declines and increased drought sensitivity in Atlas cedar across all sites since the early 1980s. Specifically, we determined that previous growth rates and tree age were the strongest tree features, while Quercus rotundifolia basal area was the strongest stand structure measure related to Atlas cedar decline. As a result, we suggest that Atlas cedar forests that have experienced severe drought in combination with grazing and logging may be in the process of shifting dominance toward more drought-tolerant species such as Q. rotundifolia.

  7. Integrating Networking into ATLAS

    CERN Document Server

    Mc Kee, Shawn Patrick; The ATLAS collaboration

    2018-01-01

    Networking is foundational to the ATLAS distributed infrastructure and there are many ongoing activities related to networking both within and outside of ATLAS. We will report on the progress in a number of areas exploring ATLAS's use of networking and our ability to monitor the network, analyze metrics from the network, and tune and optimize application and end-host parameters to make the most effective use of the network. Specific topics will include work on Open vSwitch for production systems, network analytics, FTS testing and tuning, and network problem alerting and alarming.

  8. ATLAS End-cap Part II

    CERN Multimedia

    2007-01-01

    The epic journey of the ATLAS magnets is drawing to an end. On Thursday 12 July, the second end-cap of the ATLAS toroid magnet was lowered into the cavern of the experiment with the same degree of precision as the first (see Bulletin No. 26/2007). This spectacular descent of the 240-tonne component is one of the last transport operations to be completed for ATLAS.

  9. ATLAS experiment : mapping the secrets of the universe

    CERN Multimedia

    ATLAS Outreach

    2010-01-01

    This 4 page color brochure describes ATLAS and the LHC, the ATLAS inner detector, calorimeters, muon spectrometer, magnet system, a short definition of the terms "particles," "dark matter," "mass," "antimatter." It also explains the ATLAS collaboration and provides the ATLAS website address with some images of the detector and the ATLAS collaboration at work.

  10. Evaluation of K-ras and p53 expression in pancreatic adenocarcinoma using the cancer genome atlas.

    Directory of Open Access Journals (Sweden)

    Liming Lu

    Genetic alterations in K-ras and p53 are thought to be critical in pancreatic cancer development and progression. However, K-ras and p53 expression in pancreatic adenocarcinoma has not been systematically examined in The Cancer Genome Atlas (TCGA) Data Portal. Information regarding K-ras and p53 alterations, mRNA expression data, and protein/protein phosphorylation abundance was retrieved from The Cancer Genome Atlas (TCGA) databases, and analyses were performed with the cBioPortal for Cancer Genomics. The mutual exclusivity analysis showed that events in K-ras and p53 were likely to co-occur in pancreatic adenocarcinoma (log odds ratio = 1.599, P = 0.006). The graphical summary of the mutations showed that there were hotspots for protein activation. In the network analysis, no solid association between K-ras and p53 was observed in pancreatic adenocarcinoma. In the survival analysis, neither K-ras nor p53 was associated with either survival event. As a data-mining study of the TCGA databases, our study provides a new perspective for understanding the genetic features of K-ras and p53 in pancreatic adenocarcinoma.
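
    The mutual-exclusivity statistic quoted above is the log of the cross-product (odds) ratio of a 2x2 table counting samples with and without each alteration; a positive value indicates co-occurrence. A worked example with made-up counts chosen to give a similar value (these are not the TCGA numbers):

        import math

        # 2x2 co-occurrence table with illustrative, made-up counts.
        both = 40        # samples altered in both K-ras and p53
        kras_only = 20
        p53_only = 10
        neither = 25

        odds_ratio = (both * neither) / (kras_only * p53_only)
        log_or = math.log(odds_ratio)
        print(f"log odds ratio = {log_or:.3f}")  # ~1.609; positive => co-occurrence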

  11. Components for the data acquisition system of the ATLAS testbeams 1996

    International Nuclear Information System (INIS)

    Caprini, M; Niculescu, Michaela

    1997-01-01

    ATLAS is one of the experiments developed at CERN for the Large Hadron Collider. For the sub-detector testbeams a data acquisition system (DAQ) was designed. The Bucharest group is a member of the ATLAS DAQ collaboration and contributed to the development of several components of the testbeam DAQ: read-out modules for standalone and combined testbeams; a readout module for the liquid argon detector; a run control graphical user interface; and a central data recording system. The readout module is able to acquire data event by event from the detector electronics and is based on a Finite State Machine (FSM) incorporating a general scheme for the calibration procedure. The FSM allows detectors to take data either in standalone mode, with local control and recording, or in combined mode together with other sub-detectors, with very easy switching between the two configurations. The readout module for the liquid argon detector is written as a data flow element which takes raw data and creates a formatted event. At the initialization stage the run and detector parameters are read from the Run Control Parameters database. Then the state changes are driven by three interrupt signals (Start of Burst, Trigger, End of Burst) generated by hardware; a sketch of such a state machine is given below. In calibration mode, at each trigger the event is built (calibration data are taken outside the beam) and then the conditions for the next calibration trigger are prepared (DAQ values, delays, pulsers). The graphical user interface is designed to be used for the control of the data acquisition system. The interface provides a global experiment panel for the activation of and navigation in all the command and display panels. The user can start, stop or change the state of the system, obtain the most important information about the state of the whole system, and activate other service programs in order to select parameters and databases and to display information about the evolution of the system. The central data recording system lays on the client
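
    A burst-driven readout of this kind is naturally expressed as a small finite state machine reacting to the three interrupts. A minimal Python sketch (state and signal names are illustrative, not the ATLAS DAQ code):

        class ReadoutFSM:
            """Toy burst-driven readout state machine."""

            def __init__(self):
                self.state = "IDLE"
                self.events = []

            def on_signal(self, signal):
                if signal == "START_OF_BURST" and self.state == "IDLE":
                    self.state = "IN_BURST"
                elif signal == "TRIGGER" and self.state == "IN_BURST":
                    self.events.append(self.read_event())
                elif signal == "END_OF_BURST" and self.state == "IN_BURST":
                    self.state = "IDLE"  # flush buffers, prepare the next calibration here

            def read_event(self):
                return {"fragment": "raw data"}  # stand-in for front-end readout

        fsm = ReadoutFSM()
        for sig in ["START_OF_BURST", "TRIGGER", "TRIGGER", "END_OF_BURST"]:
            fsm.on_signal(sig)
        print(fsm.state, len(fsm.events))  # IDLE 2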

  12. ATLAS Visitors Centre

    CERN Multimedia

    claudia Marcelloni

    2009-01-01

    ATLAS Visitors Centre has opened its shiny new doors to the public. Officially launched on Monday February 23rd, 2009, the permanent exhibition at Point 1 was conceived as a tour resource for ATLAS guides, and as a way to preserve the public’s opportunity to get a close-up look at the experiment in action when the cavern is sealed.

  13. ATLAS rewards industry

    CERN Document Server

    Maximilien Brice

    2006-01-01

    For contributing vital pieces to the ATLAS puzzle, three companies were recognized on Friday 5 May during a supplier awards ceremony. After a welcome and an overview of the ATLAS experiment by spokesperson Peter Jenni, CERN Secretary-General Maximilian Metzger stressed the importance of industry to CERN's scientific goals. Picture 30: representatives of the three award-winning companies after the ceremony

  14. The ATLAS Level-1 Trigger System with 13TeV nominal LHC collisions

    CERN Document Server

    Helary, Louis; The ATLAS collaboration

    2017-01-01

    The Level-1 (L1) trigger system of the ATLAS experiment at CERN's Large Hadron Collider (LHC) plays a key role in ATLAS detector data-taking. It is a hardware system that selects, in real time, events containing physics-motivated signatures. Selection is based purely on calorimeter energy depositions and on hits in the muon chambers consistent with muon candidates. The L1 trigger system has been upgraded to cope with the more challenging Run-II LHC beam conditions, including increased centre-of-mass energy, increased instantaneous luminosity and higher levels of pileup. This talk summarises the improvements, commissioning and performance of the L1 ATLAS trigger for the LHC Run-II data period. The acceptance of muon triggers has been improved by increasing the hermeticity of the muon spectrometer. New strategies to obtain better muon trigger signal purity were designed for certain geometrically difficult transition regions by using the ATLAS hadronic calorimeter. Algorithms to reduce noise spikes in muon trig...

  15. The ATLAS Experiment Laboratory - Overview

    International Nuclear Information System (INIS)

    Malecki, P.

    1999-01-01

    The ATLAS Experiment Laboratory has been created by physicists and engineers preparing a research programme and detector for the LHC collider. This group is greatly supported by members of other Departments also taking part (often full time) in the ATLAS project: J. Blocki, J. Godlewski, Z. Hajduk, P. Kapusta, B. Kisielewski, W. Ostrowicz, E. Richter-Was, and M. Turala. Our ATLAS Laboratory carries out its programme in very close collaboration with the Faculty of Physics and Nuclear Technology of the University of Mining and Metallurgy. The ATLAS (A Toroidal LHC ApparatuS) Collaboration groups about 1,700 experimentalists from about 150 research institutes. This apparatus, a huge system of many technologically advanced detectors, is due to be ready by 2005. With the start of the 2 x 7 TeV LHC collider, ATLAS and CMS (its sister experiment at the LHC) will begin their fascinating research programme at beam energies and intensities which have never been exploited. (author)

  16. ATLAS Award for Difficult Task

    CERN Multimedia

    2004-01-01

    Last week, two Russian companies were honoured with an ATLAS Award for the supply of the ATLAS Inner Detector barrel support structure elements. On 23 March the Russian company ORPE Technologiya and its subcontractor, RSP Khrunitchev, were jointly presented with an ATLAS Supplier Award. Since 1998, ORPE Technologiya has been actively involved in the development of the carbon-fibre reinforced plastic elements of the ATLAS Inner Detector barrel support structure. After three years of joint research and development, CERN and ORPE Technologiya launched the manufacturing contract. It had a tight delivery schedule and very demanding specifications in terms of mechanical tolerance and stability. The contract was successfully completed with the arrival of the last element of the structure at CERN on 8 January 2004. The delivery of this key component of the Inner Detector deserves an ATLAS Award given the difficulty of manufacturing the end-frames, which very few companies in the world would have been able to do at an ...

  17. ATLAS & Google - The Data Ocean Project

    CERN Document Server

    Lassnig, Mario; The ATLAS collaboration

    2018-01-01

    With the LHC High-Luminosity upgrade, the workload and data management systems are facing major new challenges. To address those challenges, ATLAS and Google agreed to cooperate on a project to connect Google Cloud Storage and Compute Engine to the ATLAS computing environment. The idea is to allow ATLAS to explore the use of different computing models, to allow ATLAS user analysis to benefit from the Google infrastructure, and to give Google real science use cases to improve their cloud platform. Making the output of a distributed analysis from the grid quickly available to the analyst is a difficult problem. Redirecting the analysis output to Google Cloud Storage can provide an alternative, faster solution for the analyst. First, Google Cloud Storage will be connected to the ATLAS data management system Rucio. The second part aims to let jobs run on Google Compute Engine, accessing data from either ATLAS storage or Google Cloud Storage. The third part involves Google implementing a global redirection between...

  18. The ATLAS hadronic tau trigger

    International Nuclear Information System (INIS)

    Shamim, Mansoora

    2012-01-01

    The extensive tau physics programme of the ATLAS experiment relies heavily on the trigger to select hadronic decays of the tau lepton. Such a trigger is implemented in ATLAS to efficiently collect signal events while keeping the rate of multi-jet background within the allowed bandwidth. This contribution summarizes the performance of the ATLAS hadronic tau trigger system during the 2011 data-taking period and the improvements implemented for 2012 data collection.

  19. ATLAS OF EUROPEAN VALUES

    NARCIS (Netherlands)

    M Ed Uwe Krause

    2008-01-01

    Uwe Krause: Atlas of European Values. The Atlas of European Values is a collaborative project, with an accompanying website, of Tilburg University and the Fontys teacher training institute in Tilburg, in which the scientific data of the European Values Study (EVS) are made accessible for education

  20. ATLAS brochure (Italian version)

    CERN Multimedia

    Lefevre, C

    2010-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  1. ATLAS brochure (French version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  2. ATLAS brochure (German version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  3. ATLAS brochure (Danish version)

    CERN Multimedia

    Lefevre, C

    2010-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  4. The Hatfield SCT lunar atlas photographic atlas for Meade, Celestron, and other SCT telescopes

    CERN Document Server

    2014-01-01

    In a major publishing event for lunar observers, the justly famous Hatfield atlas is updated in even more usable form. This version of Hatfield’s classic atlas solves the problem of mirror images, making identification of left-right reversed imaged lunar features both quick and easy. SCT and Maksutov telescopes – which of course include the best-selling models from Meade and Celestron – reverse the visual image left to right. Thus it is extremely difficult to identify lunar features at the eyepiece of one of the instruments using a conventional Moon atlas, as the human brain does not cope well when trying to compare the real thing with a map that is a mirror image of it. Now this issue has at last been solved.   In this atlas the Moon’s surface is shown at various sun angles, and inset keys show the effects of optical librations. Smaller non-mirrored reference images are also included to make it simple to compare the mirrored SCT plates and maps with those that appear in other atlases. This edition s...

  5. Last piece of the puzzle for ATLAS

    CERN Multimedia

    Clare Ryan

    At around 15:40 on Friday 29 February, the ATLAS collaboration cracked open the champagne as the second of the small wheels was lowered into the cavern. Each of ATLAS's small wheels is 9.3 metres in diameter and weighs 100 tonnes, including the massive shielding elements. They are the final parts of ATLAS's muon spectrometer. The first piece of ATLAS was installed in 2003, and since then many detector elements have journeyed down the 100-metre shaft into the ATLAS underground cavern. This last piece completes this gigantic puzzle.

  6. NATIONAL ATLAS OF THE ARCTIC

    Directory of Open Access Journals (Sweden)

    Nikolay S. Kasimov

    2018-01-01

    The National Atlas of the Arctic is a set of spatio-temporal information about the geographic, ecological, economic, historical-ethnographic, cultural, and social features of the Arctic, compiled as a cartographic model of the territory. The Atlas is intended for use in a wide range of scientific, management, economic, defense, educational, and public activities. The state policy of the Russian Federation in the Arctic for the period until 2020 and beyond states that the Arctic is of strategic importance for Russia in the 21st century. A detailed description of all sections of the Atlas is given. The Atlas can be used as an information-reference and educational resource or as a gift edition.

  7. Tracking in Dense Environments for the HL-LHC ATLAS Detector

    CERN Document Server

    Cormier, Felix; The ATLAS collaboration

    2018-01-01

    Tracking in dense environments, such as in the cores of high-energy jets, will be key for new physics searches as well as measurements of the Standard Model at the High Luminosity LHC (HL-LHC). The HL-LHC will operate in challenging conditions with large radiation doses and high pile-up (up to $\\mu=200$). The current tracking detector will be replaced with a new all-silicon Inner Tracker for the Phase II upgrade of the ATLAS detector. In this talk, characterization of the HL-LHC tracker performance for collimated, high-density charged particles arising from high-momentum decays is presented. In such decays the charged-particle separations are of the order of the tracking detector granularity, leading to challenging reconstruction. The ability of the HL-LHC ATLAS tracker to reconstruct the tracks in such dense environments is discussed and compared to ATLAS Run-2 performance for a variety of relevant physics processes.

  8. Test of ATLAS RPCs Front-End electronics

    International Nuclear Information System (INIS)

    Aielli, G.; Camarri, P.; Cardarelli, R.; Di Ciaccio, A.; Di Stante, L.; Liberti, B.; Paoloni, A.; Pastori, E.; Santonico, R.

    2003-01-01

    The front-end electronics performing the ATLAS RPC readout is a full-custom 8-channel GaAs circuit that integrates both the analog and digital signal processing on a single die. The die is bonded onto the front-end board, which is completely enclosed in the detector Faraday cage. About 50 000 FE boards are foreseen for the experiment. The complete functionality of the FE boards will be certified before the detector assembly. We describe here the systematic test devoted to checking the dynamic functionality of each single channel and the selection criteria applied. The test measures and registers all relevant electronics parameters to build up a complete database for the experiment. Statistical results from more than 1100 channels are presented.
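
    The abstract does not specify the database technology used to register the test results; as a rough illustration of the bookkeeping involved, the sketch below records per-channel parameters with a pass/fail flag in SQLite. All parameter names and acceptance limits are hypothetical.

        import sqlite3

        # Hypothetical acceptance window for one measured parameter; the
        # paper's actual selection criteria are not reproduced here.
        THRESHOLD_RANGE = (0.8, 1.2)

        def record_channel(conn, board_id, channel, threshold, noise_rate_hz):
            """Store one channel's measured parameters with a pass/fail flag."""
            lo, hi = THRESHOLD_RANGE
            passed = lo <= threshold <= hi
            conn.execute(
                "INSERT INTO fe_channels VALUES (?, ?, ?, ?, ?)",
                (board_id, channel, threshold, noise_rate_hz, int(passed)),
            )
            return passed

        conn = sqlite3.connect(":memory:")
        conn.execute(
            "CREATE TABLE fe_channels ("
            "board_id TEXT, channel INTEGER, threshold REAL, "
            "noise_rate_hz REAL, passed INTEGER)"
        )
        # Register the 8 channels of one hypothetical front-end board.
        for ch in range(8):
            record_channel(conn, "FE-0001", ch, threshold=1.0, noise_rate_hz=50.0)
        conn.commit()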

  9. EnviroAtlas Proximity to Parks Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This EnviroAtlas dataset shows...

  10. ATLAS IV in situ heating test in Boom Clay

    International Nuclear Information System (INIS)

    Chen, Guangjing; Li, Xiangling; Verstricht, Jan; Sillen, Xavier

    2012-01-01

    Document available in extended abstract form only. The small-scale in-situ ATLAS (Admissible Thermal Loading for Argillaceous Storage) tests are performed to assess the hydro-mechanical effects of a thermal transient on the host Boom Clay at the HADES underground research facility in Mol, Belgium. The initial test set-up, consisting of a heater borehole and two observation boreholes, was installed in 1991-1992. The first test (later named 'ATLAS I') was then performed from July 1993 to June 1996; during this time, the heater dissipated a constant power of 900 W. During the second phase ('ATLAS II'), the heating power was doubled (1800 W) and maintained constant from June 1996 to May 1997. This was followed by shutdown and natural cooling starting in June 1997. To broaden the THM characterization of the Boom Clay at a larger scale and at different temperature levels, the test set-up was extended in 2006 by drilling two additional instrumented boreholes (AT97E and AT98E). The heater was switched on again from April 2007 to April 2008 with a stepwise power increase, followed by an instantaneous shutdown. This phase is called 'ATLAS III'. The above tests have provided a large set of good-quality, well-documented data on temperature, pore water pressure and total stress; these data have allowed several interesting observations regarding the thermal anisotropy and THM coupling in the Boom Clay. The straightforward geometry and well-defined boundary conditions of the tests facilitate the comparison between measurements and numerical modeling studies. Three-dimensional coupled THM modeling of the ATLAS III test shows good agreement between measured and modeled temperature and pore water pressure, yields a set of THM parameters, and confirms the thermo-mechanical anisotropy of the Boom Clay. To gain better insight into the anisotropic THM behavior of the Boom Clay, a new upward instrumented borehole was drilled above the ATLAS heater at

  11. Silicon Strip Detectors for the ATLAS sLHC Upgrade

    CERN Document Server

    Miñano, M; The ATLAS collaboration

    2011-01-01

    While the Large Hadron Collider (LHC) at CERN continues to deliver an ever-increasing luminosity to the experiments, plans for an upgraded machine called the Super-LHC (sLHC) are progressing. The upgrade is foreseen to increase the LHC design luminosity by a factor of ten. The ATLAS experiment will need to build a new tracker for sLHC operation, suited to the harsh sLHC conditions in terms of particle rates. In order to cope with the increase in pile-up backgrounds at the higher luminosity, an all-silicon detector is being designed. To withstand the increased radiation dose, a new generation of extremely radiation-hard silicon detectors is being developed. The left part of figure 1 shows the simulated layout for the ATLAS tracker upgrade, to be installed in the volume taken up by the current ATLAS pixel, strip and transition radiation detectors. Silicon sensors with sufficient radiation hardness are the subject of an international R&D programme working on pixel and strip sensors. The...

  12. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    Science.gov (United States)

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

    MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach: using a whole-head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, followed by combination of the individual segmentations by label fusion. We have compared the Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole-head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisitions of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR.
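
    Of the label-fusion rules compared above, Majority Voting is the simplest: each CT-derived skull mask, once registered to the target MRI, casts one vote per voxel. A minimal numpy sketch follows (registration itself is out of scope here; STAPLE, SBA and SIMPLE replace the unweighted vote with performance-weighted combinations).

        import numpy as np

        def majority_vote(masks: np.ndarray) -> np.ndarray:
            """Fuse N binary skull masks of shape (N, X, Y, Z) by majority voting.

            Each mask is one atlas segmentation already registered to the
            target volume; a voxel is labelled skull when more than half of
            the atlases agree.
            """
            votes = masks.sum(axis=0)                     # per-voxel skull votes
            return (votes > masks.shape[0] / 2).astype(np.uint8)

        # Toy example: three random 4x4x4 atlas masks.
        rng = np.random.default_rng(0)
        masks = (rng.random((3, 4, 4, 4)) > 0.5).astype(np.uint8)
        fused = majority_vote(masks)
        print(fused.shape)  # (4, 4, 4)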

  13. Tile-in-ONE: An integrated framework for data quality assessment and database management for the ATLAS Tile Calorimeter

    International Nuclear Information System (INIS)

    Cunha, R; Sivolella, A; Ferreira, F; Maidantchik, C; Solans, C

    2014-01-01

    In order to ensure the proper operation of the ATLAS Tile Calorimeter and to assess the quality of its data, many tasks are performed by means of several tools that have been developed independently. Their results are displayed in standard dashboards dedicated to each working group, covering different areas such as Data Quality and Calibration.

  14. ATLAS

    Data.gov (United States)

    Federal Laboratory Consortium — ATLAS is a particle physics experiment at the Large Hadron Collider at CERN, the European Organization for Nuclear Research. Scientists from Brookhaven have played...

  15. The Cerefy® clinical brain atlas on CD-ROM. Based on the classic Talairach-Tournoux and Schaltenbrand-Wahren brain atlases. 2. ed.

    International Nuclear Information System (INIS)

    Nowinski, W.L.; Thirunavuukarasuu, A.

    2001-01-01

    This remarkable CD-ROM provides enhanced and extended versions of three world-famous Thieme atlases (Schaltenbrand and Wahren's Atlas for Stereotaxy of the Human Brain, Talairach and Tournoux's Co-Planar Stereotaxic Atlas of the Human Brain, and Referentially Oriented Cerebral MRI Anatomy). It contains the electronic atlases as well as an easy navigation system to facilitate searching for and displaying more than 525 anatomical structures. Revolutionizing the field of brain anatomy, the authors have segmented, labeled, and cross-referenced all the information contained in the books, and created contours for all three atlases. The Cerefy® Clinical Brain Atlas now allows you to electronically navigate these atlases simultaneously on axial, coronal, and sagittal planes, and to: 1. access 210 high-quality, fully segmented, and labeled atlas images with corresponding contours; 2. display and manipulate spatially co-registered atlases; 3. dynamically label images with structure names and descriptions, and then highlight selected structures in the atlas image; 4. zoom images at five levels, measure, search, set the triplanar, get coordinates, save, and print; 5. access on-line help, a glossary, and supportive atlas materials. (orig.)

  16. ATLAS brochure (Norwegian version)

    CERN Multimedia

    Lefevre, C

    2009-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  17. A Slice of ATLAS

    CERN Document Server

    2004-01-01

    An entire section of the ATLAS detector is being assembled at Prévessin. Since May the components have been tested using a beam from the SPS, giving the ATLAS team valuable experience of operating the detector as well as an opportunity to debug the system.

  18. The Latest from ATLAS

    CERN Multimedia

    2009-01-01

    Since November 2008, ATLAS has undertaken detailed maintenance, consolidation and repair work on the detector (see Bulletin of 20 July 2009). Today, the fraction of the detector that is operational has increased compared to last year: less than 1% of dead channels for most of the sub-systems. "We are going to start taking data this year with a detector which is even more efficient than it was last year," says ATLAS Spokesperson Fabiola Gianotti. By mid-September the detector was fully closed again, and the cavern sealed. The magnet system has been operated at nominal current for extensive periods over recent months. Once the cavern was sealed, ATLAS began two weeks of combined running. Right now, sub-systems are joining the run incrementally up to the point where the whole detector is integrated and running as one. In the words of ATLAS Technical Coordinator Marzio Nessi: "Now we really start physics." In parallel, the analysis ...

  19. A thermosiphon for ATLAS

    CERN Multimedia

    Rosaria Marraffino

    2013-01-01

    A new thermosiphon cooling system, designed for the ATLAS silicon detectors by CERN’s EN-CV team in collaboration with the experiment, will replace the current system in the next LHC run in 2015. Using the basic properties of density difference and making gravity do the hard work, the thermosiphon promises to be a very reliable solution that will ensure the long-term stability of the whole system. [Photo caption: Former compressor-based cooling system of the ATLAS inner detectors. The system is currently being replaced by the innovative thermosiphon. (Photo courtesy of Olivier Crespo-Lopez.)] Reliability is the major issue for the present cooling system of the ATLAS silicon detectors. The system was designed 13 years ago using a compressor-based cooling cycle. “The current cooling system uses oil-free compressors to avoid fluid pollution in the delicate parts of the silicon detectors,” says Michele Battistin, EN-CV-PJ section leader and project leader of the ATLAS thermosiphon....

  20. The High-Resolution IRAS Galaxy Atlas

    Science.gov (United States)

    Cao, Yu; Terebey, Susan; Prince, Thomas A.; Beichman, Charles A.; Oliversen, R. (Technical Monitor)

    1997-01-01

    An atlas of the Galactic plane (−4.7° < b < 4.7°), along with the molecular clouds in Orion, rho Oph, and Taurus-Auriga, has been produced at 60 and 100 microns from IRAS data. The atlas consists of resolution-enhanced co-added images with 1′–2′ resolution and co-added images at the native IRAS resolution. The IRAS Galaxy Atlas, together with the Dominion Radio Astrophysical Observatory H I line/21 cm continuum and FCRAO CO (1–0) Galactic plane surveys, which both have similar (approx. 1′) resolution to the IRAS atlas, provides a powerful tool for studying the interstellar medium, star formation, and large-scale structure in our Galaxy. This paper documents the production and characteristics of the atlas.

  1. A CAD system and quality assurance protocol for bone age assessment utilizing digital hand atlas

    Science.gov (United States)

    Gertych, Arakadiusz; Zhang, Aifeng; Ferrara, Benjamin; Liu, Brent J.

    2007-03-01

    Bone age assessment (BAA) in pediatric radiology is based on a detailed analysis of the patient's left-hand X-ray. The current standard utilized in clinical practice relies on a subjective comparison of the hand with patterns in a book atlas. The computerized approach to BAA (CBAA) utilizes automatic analysis of the regions of interest in the hand image. This procedure is followed by extraction of quantitative features sensitive to skeletal development, which are then converted to a bone age value utilizing knowledge from the digital hand atlas (DHA). This also allows BAA results to resemble the current clinical approach. All developed methodologies have been combined into one CAD module with a graphical user interface (GUI). CBAA can also improve statistical and analytical accuracy based on a clinical work-flow analysis. For this purpose a quality assurance protocol (QAP) has been developed. Implementation of the QAP helped make the CAD more robust and identify images that cannot meet the conditions required by DHA standards. Moreover, the entire CAD-DHA system may gain further benefits if the clinical acquisition protocol is modified. The goal of this study is to present the performance improvement of the overall CAD-DHA system with the QAP, and the comparison of the CAD results with the chronological age of 1390 normal subjects from the DHA. The CAD workstation can process images from a local image database or from a PACS server.
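
    The abstract does not state how the extracted features are converted to a bone age value. One simple way such a conversion could work, sketched here with hypothetical feature vectors and an arbitrary choice of k, is a nearest-neighbour look-up against the DHA reference cases.

        import numpy as np

        def estimate_bone_age(features: np.ndarray,
                              dha_features: np.ndarray,
                              dha_ages: np.ndarray,
                              k: int = 5) -> float:
            """Estimate bone age by averaging the ages of the k nearest DHA cases.

            features     : feature vector extracted from the hand radiograph
            dha_features : (N, d) reference feature matrix from the digital hand atlas
            dha_ages     : (N,) ages of the reference cases, in years
            """
            dist = np.linalg.norm(dha_features - features, axis=1)
            nearest = np.argsort(dist)[:k]
            return float(dha_ages[nearest].mean())

        # Toy reference set: 100 cases with 3 skeletal-development features each.
        rng = np.random.default_rng(1)
        dha_features = rng.random((100, 3))
        dha_ages = rng.uniform(1.0, 18.0, 100)
        print(estimate_bone_age(np.array([0.4, 0.5, 0.6]), dha_features, dha_ages))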

  2. ATLAS Fact Sheet : To raise awareness of the ATLAS detector and collaboration on the LHC

    CERN Multimedia

    ATLAS Outreach

    2010-01-01

    Facts on the Detector, Calorimeters, Muon System, Inner Detector, Pixel Detector, Semiconductor Tracker, Transition Radiation Tracker, Surface hall, Cavern, Magnet system, Solenoid, Toroid, Event rates, Physics processes, Supersymmetric particles, Comparing the LHC with cosmic rays, Heavy-ion collisions, Trigger and Data Acquisition (TDAQ), Computing, the LHC, and the ATLAS collaboration. This fact sheet also contains images of ATLAS and the collaboration, as well as a short list of videos on ATLAS available for viewing.

  3. The ATLAS distributed analysis system

    OpenAIRE

    Legger, F.

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During...

  4. The importance of having an appropriate relational data segmentation in ATLAS

    International Nuclear Information System (INIS)

    Dimitrov, G

    2015-01-01

    In this paper we describe specific technical solutions put in place in various database applications of the ATLAS experiment at the LHC, where we make use of several partitioning techniques available in Oracle 11g. With the broadly used range partitioning and its option of automatic interval partitioning, we add our own logic in PL/SQL procedures and scheduler jobs to sustain sliding data windows that enforce various data retention policies. We also make use of the new Oracle 11g reference partitioning in the Nightly Build System to achieve uniform data segmentation. The most challenging issue, however, was to segment the data of the new ATLAS Distributed Data Management system (Rucio), which resulted in tens of thousands of list-type partitions and sub-partitions. Partition and sub-partition management, index strategy, statistics gathering and query execution plan stability are important factors when choosing an appropriate physical model for the application data management. The knowledge accumulated so far, and an analysis of new Oracle 12c features that could be beneficial, are also shared with the audience. (paper)
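
    As a minimal sketch of the interval-partitioning and sliding-window pattern described above (the actual ATLAS schemas, and the PL/SQL procedures and scheduler jobs that implement the retention logic, are not reproduced here; table and column names are hypothetical):

        import oracledb  # python-oracledb; requires a reachable Oracle instance

        # Hypothetical connection parameters.
        conn = oracledb.connect(user="app", password="secret", dsn="dbhost/service")
        cur = conn.cursor()

        # Range partitioning by time with automatic interval partitioning:
        # Oracle creates a new monthly partition as data for a new month arrives.
        cur.execute("""
            CREATE TABLE job_events (
                event_time TIMESTAMP NOT NULL,
                job_id     NUMBER,
                payload    VARCHAR2(4000)
            )
            PARTITION BY RANGE (event_time)
            INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
            (PARTITION p_anchor VALUES LESS THAN (TIMESTAMP '2015-01-01 00:00:00'))
        """)

        # Sliding-window retention: drop the oldest partition. In production
        # this would run as a scheduled job, and the anchor range partition
        # itself (which cannot be dropped) would be kept.
        cur.execute("""
            SELECT partition_name FROM user_tab_partitions
            WHERE table_name = 'JOB_EVENTS' ORDER BY partition_position
        """)
        oldest = cur.fetchone()[0]
        cur.execute(f'ALTER TABLE job_events DROP PARTITION "{oldest}" UPDATE INDEXES')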

  5. Danish heat atlas as a support tool for energy system models

    DEFF Research Database (Denmark)

    Petrovic, Stefan; Karlsson, Kenneth Bernard

    2014-01-01

    In the past four decades following the global oil crisis in 1973, Denmark has implemented remarkable changes in its energy sector, mainly due to energy conservation measures on the demand side and energy efficiency improvements on the supply side. Nowadays, capital-intensive infrastructure investments, such as the expansion of district heating networks and the introduction of significant heat saving measures, require a highly detailed decision-support tool. A Danish heat atlas provides a highly detailed database with extensive information about more than 2.5 million buildings in Denmark... society after 2050. The present paper shows how a Danish heat atlas can be used for providing inputs to energy system models, especially related to the analysis of heat saving measures within the building stock and the expansion of district heating networks. As a result, marginal cost curves are created...
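
    The marginal cost curves mentioned above can be illustrated with a toy computation: sort candidate heat-saving measures by cost per saved unit of heat and accumulate the achievable savings. All measure names and figures below are invented for illustration.

        # Build a marginal cost curve: order measures by cost per saved kWh,
        # then accumulate the achievable savings along the curve.
        measures = [
            # (name, cost in EUR per saved kWh/year, achievable savings in GWh/year)
            ("roof insulation",    0.04, 350.0),
            ("window replacement", 0.09, 220.0),
            ("wall insulation",    0.06, 400.0),
        ]

        curve = []
        cumulative = 0.0
        for name, unit_cost, savings in sorted(measures, key=lambda m: m[1]):
            cumulative += savings
            curve.append((name, unit_cost, cumulative))

        for name, unit_cost, cum in curve:
            print(f"{name:18s} {unit_cost:5.2f} EUR/kWh  cumulative {cum:6.1f} GWh/yr")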

  6. The importance of having an appropriate relational data segmentation in ATLAS

    Science.gov (United States)

    Dimitrov, G.

    2015-12-01

    In this paper we describe specific technical solutions put in place in various database applications of the ATLAS experiment at the LHC, where we make use of several partitioning techniques available in Oracle 11g. With the broadly used range partitioning and its option of automatic interval partitioning, we add our own logic in PL/SQL procedures and scheduler jobs to sustain sliding data windows that enforce various data retention policies. We also make use of the new Oracle 11g reference partitioning in the Nightly Build System to achieve uniform data segmentation. The most challenging issue, however, was to segment the data of the new ATLAS Distributed Data Management system (Rucio), which resulted in tens of thousands of list-type partitions and sub-partitions. Partition and sub-partition management, index strategy, statistics gathering and query execution plan stability are important factors when choosing an appropriate physical model for the application data management. The knowledge accumulated so far, and an analysis of new Oracle 12c features that could be beneficial, are also shared with the audience.

  7. ATLAS brochure (Catalan version)

    CERN Multimedia

    Lefevre, C

    2008-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  8. ATLAS Brochure (french version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  9. ATLAS brochure (Polish version)

    CERN Multimedia

    Lefevre, C

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  10. ATLAS Brochure (german version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  11. ATLAS Brochure (english version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  12. ATLAS Brochure (English version)

    CERN Multimedia

    Lefevre, Christiane

    2011-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  13. ATLAS brochure (Spanish version)

    CERN Multimedia

    Lefevre, C

    2008-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  14. Report to users of ATLAS, January 1998

    International Nuclear Information System (INIS)

    Ahmad, I.; Hofman, D.

    1998-01-01

    This report is aimed at informing users about the operating schedule, user policies, and recent changes in research capabilities. It covers the following subjects: (1) status of the Argonne Tandem-Linac Accelerator System (ATLAS) accelerator; (2) the move of Gammasphere from LBNL to ANL; (3) commissioning of the CPT mass spectrometer at ATLAS; (4) highlights of recent research at ATLAS; (5) Program Advisory Committee; and (6) ATLAS User Group Executive Committee

  15. First ATLAS Events Recorded Underground

    CERN Multimedia

    Teuscher, R

    As reported in the CERN Bulletin, Issue No. 30-31, 25 July 2005: The ATLAS barrel Tile calorimeter has recorded its first events underground using a cosmic ray trigger, as part of the detector commissioning programme. [Image caption: This is not a simulation! A cosmic ray muon recorded by the barrel Tile calorimeter of ATLAS on 21 June 2005 at 18:30. The calorimeter has three layers and a pointing geometry. The light trapezoids represent the energy deposited in the tiles of the calorimeter, depicted as a thick disk.] On the evening of June 21, the ATLAS detector, now being installed in the underground experimental hall UX15, reached an important psychological milestone: the barrel Tile calorimeter recorded the first cosmic ray events in the underground cavern. An estimated million cosmic muons enter the ATLAS cavern every 3 minutes, and the ATLAS team decided to make good use of some of them for the commissioning of the detector. Although only 8 of the 128 calorimeter slices ('superdrawers') were included in the trigg...

  16. ATLAS construction status

    International Nuclear Information System (INIS)

    Jenni, P.

    2006-01-01

    The ATLAS detector is being constructed at the LHC, in view of a data-taking startup in 2007. This report concentrates on the progress and the technical challenges of the detector construction, and summarizes the status of the work as of August 2004. The project is on track to allow the highly motivated ATLAS Collaboration to enter into a new exploratory domain of high-energy physics in 2007. (author)

  17. ATLAS cloud R&D

    International Nuclear Information System (INIS)

    Panitkin, Sergey; Bejar, Jose Caballero; Hover, John; Zaytsev, Alexander; Megino, Fernando Barreiro; Girolamo, Alessandro Di; Kucharczyk, Katarzyna; Llamas, Ramon Medrano; Benjamin, Doug; Gable, Ian; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Hendrix, Val; Love, Peter; Ohman, Henrik; Walker, Rodney

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate various cloud resources transparently into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution explains the cloud integration models that are being evaluated and discusses ATLAS' experience from collaborating with leading commercial and academic cloud providers.

  18. The ATLAS Pixel Detector

    CERN Document Server

    Huegging, Fabian

    2006-06-26

    The construction of the ATLAS Pixel Detector, the innermost layer of the ATLAS tracking system, is progressing well. The pixel detector will contribute significantly to ATLAS track and vertex reconstruction. The detector consists of identical sensor-chip-hybrid modules, arranged in three barrels in the centre and three disks on either side for the forward region. The position of the detector near the interaction point requires excellent radiation hardness, mechanical and thermal robustness, and good long-term stability for all parts, combined with a low material budget. The final detector layout, new results from production modules, and the status of assembly are presented.

  19. Thermal Testing and Model Correlation for Advanced Topographic Laser Altimeter Instrument (ATLAS)

    Science.gov (United States)

    Patel, Deepak

    2016-01-01

    The Advanced Topographic Laser Altimeter System (ATLAS), part of the Ice, Cloud and land Elevation Satellite-2 (ICESat-2), is an upcoming Earth Science mission focusing on the effects of climate change. The flight instrument passed all environmental testing at GSFC (Goddard Space Flight Center) and is now ready to be shipped to the spacecraft vendor for integration and testing. This topic covers the analysis leading up to the test setup for ATLAS thermal testing, as well as model correlation to flight predictions. The test setup analysis section includes areas where ATLAS could not meet flight-like conditions and the associated limitations. The model correlation section walks through the changes that had to be made to the thermal model in order to match test results. The correlated model will then be integrated with the spacecraft model for on-orbit predictions.

  20. CAMAC-based intelligent subsystem for ATLAS example application: cryogenic monitoring and control

    International Nuclear Information System (INIS)

    Pardo, R.; Kawarasaki, Y.; Wasniewski, K.

    1985-01-01

    A subunit of the CAMAC accelerator control system of ATLAS for monitoring and, eventually, controlling the cryogenic refrigeration and distribution facility is under development. This development is the first application of a philosophy of distributed intelligence which will be applied throughout the ATLAS control system. The control concept is that of an intelligent subunit of the existing ATLAS CAMAC control highway. A single-board computer (SBC) resides in an auxiliary crate controller, which allows access to all devices within the crate. The local SBC can communicate with the host over the CAMAC highway via a protocol based on memory in the SBC that the host can access in DMA mode. This provides a mechanism for global communications, such as alarm conditions, and allows the cryogenic system to respond to the demands of the accelerator system.
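
    As a toy illustration of the shared-memory protocol in the last two sentences (DMA transfers, CAMAC addressing and the real record layout are abstracted away; all names and fields here are hypothetical):

        import struct

        # Layout of the status block the host would read via DMA (hypothetical):
        # sequence counter, alarm-flag bitmask, and two cryogenic readings.
        RECORD = struct.Struct("<IIff")  # seq, alarm_bits, temperature_k, pressure_bar

        def sbc_publish(buf, seq, alarm_bits, temp_k, pressure):
            """SBC side: write one consistent status record into the shared block."""
            RECORD.pack_into(buf, 0, seq, alarm_bits, temp_k, pressure)

        def host_poll(buf):
            """Host side: read the block, as a DMA transfer would, and decode it."""
            seq, alarm_bits, temp_k, pressure = RECORD.unpack_from(buf, 0)
            if alarm_bits:
                print(f"ALARM {alarm_bits:#06x} at seq {seq}: T={temp_k} K, P={pressure} bar")
            return seq

        shared = bytearray(RECORD.size)
        sbc_publish(shared, seq=1, alarm_bits=0x0002, temp_k=4.6, pressure=1.3)
        host_poll(shared)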