WorldWideScience

Sample records for monitoring program atlas

  1. Event filter monitoring with the ATLAS tile calorimeter

    CERN Document Server

    Fiorini, L

    2008-01-01

    The ATLAS Tile Calorimeter detector is presently involved in an intense phase of subsystem integration and commissioning with muons of cosmic origin. Various monitoring programs have been developed at different levels of the data flow to tune the set-up of the detector running conditions and to provide a fast and reliable assessment of the data quality during data taking. This paper focuses on the monitoring system integrated in the highest level of the ATLAS trigger system, the Event Filter, and its deployment during the Tile Calorimeter commissioning with cosmic-ray muons. The key feature of Event Filter monitoring is the capability of performing detector and data-quality control on complete physics events at the trigger level, hence before events are stored on disk. In the ATLAS online data flow, this is the only monitoring system capable of giving comprehensive event-quality feedback.

  2. ATLAS job monitoring in the Dashboard Framework

    CERN Document Server

    Sargsyan, L; The ATLAS collaboration; Campana, S; Karavakis, E; Kokoszkiewicz, L; Saiz, P; Schovancova, J; Tuckett, D

    2012-01-01

    Monitoring of the large-scale data processing of the ATLAS experiment includes monitoring of production and user analysis jobs. The Experiment Dashboard provides a common job monitoring solution, which is shared by the ATLAS and CMS experiments. This includes an accounting portal as well as real-time monitoring. Dashboard job monitoring for ATLAS combines information from the PanDA job processing database, the Production system database and monitoring information from jobs submitted through GANGA to the Workload Management System (WMS) or local batch systems. Usage of Dashboard-based job monitoring applications will decrease the load on the PanDA database and overcome scale limitations in PanDA monitoring caused by the short job rotation cycle in the PanDA database. Aggregation of the task/job metrics from different sources provides a complete view of the job processing activity within ATLAS.
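
    As a hedged illustration of the aggregation described above (the source names, task identifiers and counts below are invented, not the actual PanDA, Production system or GANGA schemas), a short Python sketch merging per-source job summaries into a single per-task view:

        from collections import defaultdict

        # Hypothetical per-source job summaries: (task id, job state, number of jobs).
        sources = {
            "panda":   [("task_1", "finished", 120), ("task_1", "failed", 4)],
            "prodsys": [("task_1", "running", 10), ("task_2", "finished", 55)],
            "ganga":   [("task_2", "running", 30)],
        }

        def aggregate(sources):
            """Merge the per-source records into one state count per task."""
            combined = defaultdict(lambda: defaultdict(int))
            for records in sources.values():
                for task_id, state, count in records:
                    combined[task_id][state] += count
            return {task: dict(states) for task, states in combined.items()}

        print(aggregate(sources))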

  3. ATLAS job monitoring in the Dashboard Framework

    International Nuclear Information System (INIS)

    Andreeva, J; Campana, S; Karavakis, E; Kokoszkiewicz, L; Saiz, P; Tuckett, D; Sargsyan, L; Schovancova, J

    2012-01-01

    Monitoring of the large-scale data processing of the ATLAS experiment includes monitoring of production and user analysis jobs. The Experiment Dashboard provides a common job monitoring solution, which is shared by the ATLAS and CMS experiments. This includes an accounting portal as well as real-time monitoring. Dashboard job monitoring for ATLAS combines information from the PanDA job processing database, the Production system database and monitoring information from jobs submitted through GANGA to the Workload Management System (WMS) or local batch systems. Usage of Dashboard-based job monitoring applications will decrease the load on the PanDA database and overcome scale limitations in PanDA monitoring caused by the short job rotation cycle in the PanDA database. Aggregation of the task/job metrics from different sources provides a complete view of the job processing activity within ATLAS.

  4. Trigger Menu-aware Monitoring for the ATLAS experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00441925; The ATLAS collaboration

    2017-01-01

    Changes in the trigger menu, the online algorithmic event-selection of the ATLAS experiment at the LHC, are followed by adjustments to the ATLAS trigger monitoring systems. During Run 1, and so far in Run 2, ATLAS has deployed monitoring updates with the installation of new software releases at Tier-0, the first level of the ATLAS computing grid. Having to wait for a new software release to be installed at Tier-0, in order to update ATLAS offline trigger monitoring configurations, results in a lag with respect to the modification of the trigger menu. We present the design and implementation of a `trigger menu-aware' monitoring system that aims to simplify the ATLAS operational workflows by allowing monitoring configuration changes to be made at the Tier-0 site by utilising an Oracle SQL database.

  5. Monitoring radiation damage in the ATLAS pixel detector

    CERN Document Server

    Schorlemmer, André Lukas; Quadt, Arnulf; Große-Knetter, Jörn; Rembser, Christoph; Di Girolamo, Beniamino

    2014-11-05

    Radiation hardness is one of the most important features of the ATLAS pixel detector in order to ensure good performance and a long lifetime. Monitoring of radiation damage is crucial in order to assess and predict the expected performance of the detector. Key values for the assessment of radiation damage in silicon, such as the depletion voltage and depletion depth in the sensors, are measured on a regular basis during operations. This thesis summarises the monitoring program that is conducted in order to assess the impact of radiation damage and compares it to model predictions. In addition, the physics performance of the ATLAS detector depends strongly on the number of disabled modules in the ATLAS pixel detector. A worrying number of module failures was observed during Run I. Thus it was decided to recover repairable modules during the long shutdown (LS1) by extracting the pixel detector. The impact of the module repairs and module failures on the detector performance is analysed in this thesis.

  6. Trigger Menu-aware Monitoring for the ATLAS experiment

    CERN Document Server

    Hoad, Xanthe; The ATLAS collaboration

    2016-01-01

    Changes in the trigger menu, the online algorithmic event selection of the ATLAS experiment at the LHC, made in response to luminosity and detector changes, are followed by adjustments to its monitoring system. This is done to ensure that the collected data is useful, and can be properly reconstructed at Tier-0, the first level of the computing grid. During Run 1, ATLAS deployed monitoring updates with the installation of new software releases at Tier-0. This created unnecessary overhead for developers and operators, and unavoidably led to different releases for the data-taking and the monitoring setup. We present a "trigger menu-aware" monitoring system designed for the ATLAS Run 2 data-taking. The new monitoring system aims to simplify the ATLAS operational workflows, and allows for easy and flexible monitoring configuration changes at the Tier-0 site via an Oracle DB interface. We present the design and the implementation of the menu-aware monitoring, along with lessons from the operational experience of the ne...

  7. First-year experience with the ATLAS online monitoring framework

    International Nuclear Information System (INIS)

    Corso-Radu, A

    2010-01-01

    ATLAS is one of the four experiments at the Large Hadron Collider (LHC) at CERN, which was put into operation this year. The challenging experimental environment and the extreme detector complexity required development of a highly scalable distributed monitoring framework, which is currently being used to monitor the quality of the data being taken as well as the operational conditions of the hardware and software elements of the detector, trigger and data acquisition systems. At the moment the ATLAS Trigger/DAQ system is distributed over more than 1000 computers, which is about one third of the final ATLAS size. Every minute of an ATLAS data-taking session the monitoring framework serves several thousand physics events to monitoring data-analysis applications, handles more than 4 million histogram updates coming from more than 4 thousand applications, executes 10 thousand advanced data-quality checks for a subset of those histograms, and displays histograms and the results of these checks on several dozen monitors installed in the main and satellite ATLAS control rooms. This note presents an overview of the online monitoring software framework and describes the experience gained during an extensive commissioning period as well as during the first phase of LHC beam in September 2008. Performance results obtained on the current ATLAS DAQ system are also presented, showing that the performance of the framework is adequate for the final ATLAS system.
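
    A minimal sketch, making no assumptions about the framework's actual interfaces, of the kind of automated data-quality check mentioned above: a monitored histogram is compared to a reference and flagged when its mean shifts by more than a chosen tolerance (all names and numbers below are illustrative).

        from dataclasses import dataclass

        @dataclass
        class Histogram:
            bin_edges: list   # len(counts) + 1 entries
            counts: list

            def mean(self):
                centers = [(lo + hi) / 2.0
                           for lo, hi in zip(self.bin_edges[:-1], self.bin_edges[1:])]
                total = sum(self.counts)
                return sum(c * x for c, x in zip(self.counts, centers)) / total if total else 0.0

        def dq_check_mean(monitored, reference, tolerance):
            """Flag the monitored histogram when its mean drifts away from the reference."""
            shift = abs(monitored.mean() - reference.mean())
            return "GOOD" if shift <= tolerance * abs(reference.mean()) else "BAD"

        edges = [0, 1, 2, 3, 4]
        reference = Histogram(edges, [10, 40, 40, 10])
        monitored = Histogram(edges, [12, 38, 41, 9])
        print(dq_check_mean(monitored, reference, tolerance=0.05))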

  8. LASER monitoring system for the ATLAS Tile Calorimeter

    International Nuclear Information System (INIS)

    Viret, S.

    2010-01-01

    The ATLAS detector at the Large Hadron Collider (LHC) at CERN uses a scintillator-iron technique for its hadronic Tile Calorimeter (TileCal). Scintillating light is read out via 9852 photomultiplier tubes (PMTs). Calibration and monitoring of these PMTs are performed using a LASER-based system. Short light pulses are sent simultaneously into all the TileCal PMTs during ATLAS physics runs, thus providing essential information for ATLAS data quality and monitoring analyses. The experimental setup developed for this purpose is described, as well as preliminary results obtained during the ATLAS commissioning phase in 2008.
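
    A hedged illustration of how such laser data can be used (channel names and numbers are invented): each PMT's response to the known light pulse is compared to a reference response, and the ratio gives the relative gain drift of that channel.

        # Reference and current laser responses per channel (hypothetical ADC counts).
        reference = {"PMT_0001": 812.0, "PMT_0002": 790.5}
        current   = {"PMT_0001": 803.9, "PMT_0002": 794.1}

        for pmt, ref in reference.items():
            drift = (current[pmt] - ref) / ref          # relative gain drift of this channel
            print(f"{pmt}: gain drift = {100 * drift:+.2f} %")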

  9. ATLAS diamond Beam Condition Monitor

    CERN Document Server

    Gorišek, A; Dolenc, I; Frais-Kölbl, H; Griesmayer, E; Kagan, H; Korpar, S; Kramberger, G; Mandic, I; Meyer, M; Mikuz, M; Pernegger, H; Smith, S; Trischuk, W; Weilhammer, P; Zavrtanik, M

    2007-01-01

    The ATLAS experiment has chosen to use diamond for its Beam Condition Monitor (BCM) given its radiation hardness, low capacitance and short charge collection time. In addition, due to low leakage current diamonds do not require cooling. The ATLAS Beam Condition Monitoring system is based on single beam bunch crossing measurements rather than integrating the accumulated particle flux. Its fast electronics will allow separation of LHC collisions from background events such as beam gas interactions or beam accidents. There will be two stations placed symmetrically about the interaction point along the beam axis at z = ±183.8 cm. Timing of signals from the two stations will provide almost ideal separation of beam–beam interactions and background events. The ATLAS BCM module consists of diamond pad detectors of 1 cm² area and 500 µm thickness coupled to a two-stage RF current amplifier. The production of the final detector modules is almost done. A S/N ratio of 10:1 has been achieved with minimum ionizing particles (MIPs) in the test bea...

  10. ATLAS fast physics monitoring

    Indian Academy of Sciences (India)

    2012-11-16

    Nov 16, 2012 ... [The ATLAS col]laboration has set up a framework to automatically process the ... [Fast Physics Monitor]ing (FPM) is complementary to data quality monitoring as problems may ... the full power of the ATLAS software framework Athena [4] and the availability of the ...

  11. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2013-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort has been invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind visual identity of the provided graphical elements, and re-usability of the visua...

  12. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2014-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort has been invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind visual identity of the provided graphical elements, and re-usability of the visua...

  13. Radiation damage monitoring in the ATLAS pixel detector

    International Nuclear Information System (INIS)

    Seidel, Sally

    2013-01-01

    We describe the implementation of radiation damage monitoring using measurement of leakage current in the ATLAS silicon pixel sensors. The dependence of the leakage current upon the integrated luminosity is presented. The measurement of the radiation damage corresponding to an integrated luminosity of 5.6 fb⁻¹ is presented along with a comparison to a model. -- Highlights: ► Radiation damage monitoring via silicon leakage current is implemented in the ATLAS (LHC) pixel detector. ► Leakage currents measured are consistent with the Hamburg/Dortmund model. ► This information can be used to validate the ATLAS simulation model.
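
    For illustration only, a sketch of the standard parametrisation used in such model comparisons, ΔI = α·Φeq·V; the damage constant, the fluence-to-luminosity conversion and the sensor volume below are order-of-magnitude placeholders, not ATLAS values.

        # Placeholder constants, not ATLAS values: current-related damage constant alpha
        # [A/cm], assumed fluence per unit luminosity [n_eq / (cm^2 fb^-1)] and a
        # hypothetical sensor volume [cm^3].
        ALPHA = 4e-17
        FLUENCE_PER_FB = 3e12
        SENSOR_VOLUME = 0.25 * 0.25 * 0.025

        def delta_leakage_current(int_lumi_fb):
            """Predicted leakage-current increase [A]: Delta_I = alpha * Phi_eq * V."""
            phi_eq = FLUENCE_PER_FB * int_lumi_fb
            return ALPHA * phi_eq * SENSOR_VOLUME

        print(f"Delta I after 5.6 fb^-1: {delta_leakage_current(5.6):.2e} A")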

  14. The ATLAS Forward Physics Program

    OpenAIRE

    Royon, C

    2010-01-01

    After a brief review of the approved ATLAS forward detector system we describe the main ATLAS forward physics program. This program currently includes such topics as soft and hard diffraction, double pomeron exchange, central exclusive production, rapidity gap survival, two-photon physics, the determination of the total cross-section and the determination of the absolute luminosity. A possible high-luminosity upgrade program involving new forward proton detectors is also briefly reviewed. This...

  15. ATLAS diamond Beam Condition Monitor

    Energy Technology Data Exchange (ETDEWEB)

    Gorisek, A. [CERN (Switzerland)]. E-mail: andrej.gorisek@cern.ch; Cindro, V. [J. Stefan Institute (Slovenia); Dolenc, I. [J. Stefan Institute (Slovenia); Frais-Koelbl, H. [Fotec (Austria); Griesmayer, E. [Fotec (Austria); Kagan, H. [Ohio State University, OH (United States); Korpar, S. [J. Stefan Institute (Slovenia); Kramberger, G. [J. Stefan Institute (Slovenia); Mandic, I. [J. Stefan Institute (Slovenia); Meyer, M. [CERN (Switzerland); Mikuz, M. [J. Stefan Institute (Slovenia); Pernegger, H. [CERN (Switzerland); Smith, S. [Ohio State University, OH (United States); Trischuk, W. [University of Toronto (Canada); Weilhammer, P. [CERN (Switzerland); Zavrtanik, M. [J. Stefan Institute (Slovenia)

    2007-03-01

    The ATLAS experiment has chosen to use diamond for its Beam Condition Monitor (BCM) given its radiation hardness, low capacitance and short charge collection time. In addition, due to low leakage current diamonds do not require cooling. The ATLAS Beam Condition Monitoring system is based on single beam bunch crossing measurements rather than integrating the accumulated particle flux. Its fast electronics will allow separation of LHC collisions from background events such as beam gas interactions or beam accidents. There will be two stations placed symmetrically about the interaction point along the beam axis at z = ±183.8 cm. Timing of signals from the two stations will provide almost ideal separation of beam-beam interactions and background events. The ATLAS BCM module consists of diamond pad detectors of 1 cm² area and 500 µm thickness coupled to a two-stage RF current amplifier. The production of the final detector modules is almost done. A S/N ratio of 10:1 has been achieved with minimum ionizing particles (MIPs) in the test beam setup at KEK. Results from the test beams and bench measurements are presented.

  16. ATLAS diamond Beam Condition Monitor

    International Nuclear Information System (INIS)

    Gorisek, A.; Cindro, V.; Dolenc, I.; Frais-Koelbl, H.; Griesmayer, E.; Kagan, H.; Korpar, S.; Kramberger, G.; Mandic, I.; Meyer, M.; Mikuz, M.; Pernegger, H.; Smith, S.; Trischuk, W.; Weilhammer, P.; Zavrtanik, M.

    2007-01-01

    The ATLAS experiment has chosen to use diamond for its Beam Condition Monitor (BCM) given its radiation hardness, low capacitance and short charge collection time. In addition, due to low leakage current diamonds do not require cooling. The ATLAS Beam Condition Monitoring system is based on single beam bunch crossing measurements rather than integrating the accumulated particle flux. Its fast electronics will allow separation of LHC collisions from background events such as beam gas interactions or beam accidents. There will be two stations placed symmetrically about the interaction point along the beam axis at z = ±183.8 cm. Timing of signals from the two stations will provide almost ideal separation of beam-beam interactions and background events. The ATLAS BCM module consists of diamond pad detectors of 1 cm² area and 500 μm thickness coupled to a two-stage RF current amplifier. The production of the final detector modules is almost done. A S/N ratio of 10:1 has been achieved with minimum ionizing particles (MIPs) in the test beam setup at KEK. Results from the test beams and bench measurements are presented.

  17. Trigger Menu-aware Monitoring for the ATLAS experiment

    Science.gov (United States)

    Hoad, Xanthe; ATLAS Collaboration

    2017-10-01

    We present a “trigger menu-aware” monitoring system designed for the Run-2 data-taking of the ATLAS experiment at the LHC. Unlike Run-1, where a change in the trigger menu had to be matched by the installation of a new software release at Tier-0, the new monitoring system aims to simplify the ATLAS operational workflows. This is achieved by integrating monitoring updates in a quick and flexible manner via an Oracle DB interface. We present the design and the implementation of the menu-aware monitoring, along with lessons from the operational experience of the new system with the 2016 collision data.
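
    A minimal sketch of the menu-aware idea under stated assumptions: monitoring configurations are keyed by trigger-menu name in a database, so operators can change them without a new software release at Tier-0. Here sqlite3 stands in for the Oracle DB interface, and the table layout, menu and chain names are hypothetical.

        import json
        import sqlite3

        # sqlite3 stands in for the Oracle DB; the schema and names are hypothetical.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE monitoring_config (menu_name TEXT PRIMARY KEY, config TEXT)")

        def upload_config(menu_name, config):
            """Operator-side update: store the monitoring configuration for a menu."""
            conn.execute("INSERT OR REPLACE INTO monitoring_config VALUES (?, ?)",
                         (menu_name, json.dumps(config)))
            conn.commit()

        def load_config(menu_name):
            """Tier-0 side: read whatever configuration is currently stored for the menu."""
            row = conn.execute("SELECT config FROM monitoring_config WHERE menu_name = ?",
                               (menu_name,)).fetchone()
            return json.loads(row[0]) if row else {}

        upload_config("Physics_pp_v7", {"chains": ["HLT_e26_lhtight", "HLT_mu26"], "histograms": 200})
        print(load_config("Physics_pp_v7"))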

  18. ATLAS BigPanDA Monitoring and Its Evolution

    CERN Document Server

    Wenaus, Torre; The ATLAS collaboration; Korchuganova, Tatiana

    2016-01-01

    BigPanDA is the latest generation of the monitoring system for the Production and Distributed Analysis (PanDA) system. The BigPanDA monitor is a core component of PanDA and also serves the monitoring needs of the new ATLAS Production System Prodsys-2. BigPanDA has been developed to serve the growing computation needs of the ATLAS Experiment and the wider applications of PanDA beyond ATLAS. Through a system-wide job database, the BigPanDA monitor provides a comprehensive and coherent view of the tasks and jobs executed by the system, from high level summaries to detailed drill-down job diagnostics. The system has been in production and has remained in continuous development since mid 2014, today effectively managing more than 2 million jobs per day distributed over 150 computing centers worldwide. BigPanDA also delivers web-based analytics and system state views to groups of users including distributed computing systems operators, shifters, physicist end-users, computing managers and accounting services. Provi...

  19. Luminosity Monitoring in ATLAS with MPX Detectors

    CERN Document Server

    AUTHOR|(CDS)2086061

    2013-01-01

    The ATLAS-MPX detectors are based on the Medipix2 silicon devices designed by CERN for the detection of multiple types of radiation. Sixteen such detectors were successfully operated in the ATLAS detector at the LHC and collected data independently of the ATLAS data-recording chain from 2008 to 2013. Each ATLAS-MPX detector provides separate measurements of the bunch-integrated LHC luminosity. An internal consistency for luminosity monitoring of about 2% was demonstrated. In addition, the MPX devices close to the beam are sensitive enough to provide relative-luminosity measurements during van der Meer calibration scans, in a low-luminosity regime that lies below the sensitivity of the ATLAS calorimeter-based bunch-integrating luminometers. Preliminary results from these luminosity studies are presented for 2012 data from proton-proton collisions at $\sqrt{s}=8$ TeV.
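
    A small sketch (with made-up readings) of one way such an internal consistency can be quantified: the relative spread of the independent per-device luminosity measurements around their mean.

        from statistics import mean, pstdev

        # Hypothetical relative luminosity readings from the individual devices.
        readings = [0.98, 1.01, 1.00, 1.02, 0.99, 1.03, 0.97, 1.01]

        m = mean(readings)
        print(f"mean = {m:.3f}, relative spread = {100 * pstdev(readings) / m:.1f} %")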

  20. Online remote monitoring facilities for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Feng, E; Hauser, R; Yakovlev, A; Zaytsev, A

    2011-01-01

    ATLAS is one of the four LHC experiments, which started operating in collision mode in 2010. The ATLAS apparatus itself, as well as the Trigger and DAQ systems, are extremely complex facilities which have been built up by a collaboration including 144 institutes from 33 countries. The effective running of the experiment is supported by a large number of experts distributed all over the world. This paper describes the online remote monitoring system which has been developed in the ATLAS Trigger and DAQ (TDAQ) community in order to support efficient participation of experts from remote institutes in the exploitation of the experiment. The facilities provided by the remote monitoring system range from Web-based access to the general status and data quality of the ongoing data-taking session, to a scalable service providing real-time mirroring of the detailed monitoring data from the experimental area to dedicated computers in the CERN public network, where this data is made available ...

  1. Common accounting system for monitoring the ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Karavakis, E; Andreeva, J; Campana, S; Saiz, P; Gayazov, S; Jezequel, S; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources, either generic or ATLAS-specific. This set of tools provides high-quality, scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  2. Event-Driven Messaging for Offline Data Quality Monitoring at ATLAS

    CERN Document Server

    Onyisi, Peter; The ATLAS collaboration

    2015-01-01

    During LHC Run 1, the information flow through the offline data quality monitoring in ATLAS relied heavily on chains of processes polling each other's outputs for handshaking purposes. This resulted in a fragile architecture with many possible points of failure and an inability to monitor the overall state of the distributed system. We report on the status of a project undertaken during the LHC shutdown to replace the ad hoc synchronization methods with a uniform message queue system. This enables the use of standard protocols to connect processes on multiple hosts; reliable transmission of messages between possibly unreliable programs; easy monitoring of the information flow; and the removal of inefficient polling-based communication.
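
    A minimal sketch of the polling-to-messaging change described above, with plain Python queues standing in for the actual message-queue system (which is not named here): the downstream step sleeps until an 'output ready' message arrives instead of repeatedly polling for it.

        import queue
        import threading

        messages = queue.Queue()            # stands in for the actual message broker

        def merger():
            """Downstream step: wake up only when an 'output ready' message arrives."""
            while True:
                run = messages.get()
                if run is None:             # sentinel: no more work
                    break
                print(f"processing monitoring output of {run}")

        worker = threading.Thread(target=merger)
        worker.start()

        for run in ("run_00280500", "run_00280520"):   # hypothetical run identifiers
            messages.put(run)               # announce that this run's output is ready
        messages.put(None)
        worker.join()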

  3. ATLAS program for advanced thermal-hydraulic safety research

    International Nuclear Information System (INIS)

    Song, Chul-Hwa; Choi, Ki-Yong; Kang, Kyoung-Ho

    2015-01-01

    Highlights: • Major achievements of the ATLAS program are highlighted in conjunction with both developing advanced light water reactor technologies and enhancing nuclear safety. • The ATLAS data was shown to be useful for the development and licensing of new reactors and safety analysis codes, and also for nuclear safety enhancement through domestic and international cooperative programs. • A future plan for the ATLAS testing is introduced, covering recently emerging safety issues and some generic thermal-hydraulic concerns. - Abstract: This paper highlights the major achievements of the ATLAS program, which is an integral effect test program for both developing advanced light water reactor technologies and contributing to enhancing nuclear safety. The ATLAS program is closely related with the development of the APR1400 and APR+ reactors, and the SPACE code, which is a best-estimate system-scale code for the safety analysis of nuclear reactors. The multiple roles of ATLAS testing are emphasized in very close conjunction with the development, licensing, and commercial deployment of these reactors and their safety analysis codes. The role of ATLAS for nuclear safety enhancement is also introduced by taking some examples of its contributions in voluntarily leading multi-body cooperative programs such as domestic and international standard problems. Finally, a future plan for the utilization of ATLAS testing is introduced, which aims at tackling recently emerging safety issues such as a prolonged station blackout accident and medium-size break LOCA, and some generic thermal-hydraulic concerns as to how to figure out multi-dimensional phenomena and the scaling issue.

  4. The GNAM system in the ATLAS online monitoring framework

    Energy Technology Data Exchange (ETDEWEB)

    Salvatore, D. [INFN Cosenza and Dip. di Fisica, Universita della Calabria, ponte P. Bucci 31 C, 87036 Rende (Italy)], E-mail: daniela.salvatore@cern.ch; Adragna, P. [Queen Mary, University of London, London (United Kingdom); Bosman, M. [IFAE, Institut de Fisica de Altes Energies, UAB/Barcelona (Spain); Burckhart, D. [CERN, Geneva (Switzerland); Caprini, M. [National Institute for Physics and Nuclear Engineering, Bucharest (Romania); Corso-Radu, A. [University of California Irvine, Irvine, California (United States); Costa, M.J. [CERN, Geneva (Switzerland); Della Pietra, M. [INFN Sezione diNapoli, Napoli (Italy); Dotti, A. [Universita and INFN Pisa, Pisa (Italy); Eschrich, I. [University of California Irvine, Irvine, California (United States); Ferrari, R. [INFN Sezione di Pavia, Pavia (Italy); Ferrer, M.L. [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Gaudio, G. [INFN Sezione di Pavia, Pavia (Italy); Hadavand, H. [Southern Methodist University, Dallas (United States); Hauschild, M. [CERN, Geneva (Switzerland); Hillier, S. [University of Birmingham, Birmingham (United Kingdom); Kehoe, B. [Southern Methodist University, Dallas (United States); Kolos, S. [University of California Irvine, Irvine, California (United States); Kordas, K. [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Mcpherson, R. [University of Victoria, Vancouver (Canada)] (and others)

    2007-10-15

    ATLAS [ATLAS Collaboration, 'ATLAS Technical Proposal', CERN/LHCC/94-43, LHCC/P2, CERN, Geneva, Switzerland, 1994] is one of the four experiments under construction along the Large Hadron Collider (LHC) ring, which will produce interactions at a center-of-mass energy of 14 TeV at a 40 MHz rate. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common scalable distributed monitoring framework, which can be tuned for optimal use by the different ATLAS detectors at the various levels of the ATLAS data flow.

  5. The GNAM system in the ATLAS online monitoring framework

    Energy Technology Data Exchange (ETDEWEB)

    Salvatore, D. [INFN Cosenza and Dip. di Fisica, Universita della Calabria, ponte P. Bucci 31 C, 87036 Rende (Italy)], E-mail: daniela.salvatore@cern.ch; Adragna, P [Queen Mary, University of London, London (United Kingdom); Bosman, M [IFAE, Institut de Fisica de Altes Energies, UAB/Barcelona (Spain); Burckhart, D [CERN, Geneva (Switzerland); Caprini, M [National Institute for Physics and Nuclear Engineering, Bucharest (Romania); Corso-Radu, A [University of California Irvine, Irvine, California (United States); Costa, M J [CERN, Geneva (Switzerland); Della Pietra, M [INFN Sezione diNapoli, Napoli (Italy); Dotti, A [Universita and INFN Pisa, Pisa (Italy); Eschrich, I [University of California Irvine, Irvine, California (United States); Ferrari, R [INFN Sezione di Pavia, Pavia (Italy); Ferrer, M L [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Gaudio, G [INFN Sezione di Pavia, Pavia (Italy); Hadavand, H [Southern Methodist University, Dallas (United States); Hauschild, M [CERN, Geneva (Switzerland); Hillier, S [University of Birmingham, Birmingham (United Kingdom); Kehoe, B [Southern Methodist University, Dallas (United States); Kolos, S [University of California Irvine, Irvine, California (United States); Kordas, K [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Mcpherson, R [University of Victoria, Vancouver (Canada)

    2007-10-15

    ATLAS [ATLAS Collaboration, 'ATLAS Technical Proposal', CERN/LHCC/94-43, LHCC/P2, CERN, Geneva, Switzerland, 1994] is one of the four experiments under construction along the Large Hadron Collider (LHC) ring, which will produce interactions at a center-of-mass energy of 14 TeV at a 40 MHz rate. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common scalable distributed monitoring framework, which can be tuned for optimal use by the different ATLAS detectors at the various levels of the ATLAS data flow.

  6. The GNAM system in the ATLAS online monitoring framework

    International Nuclear Information System (INIS)

    Salvatore, D.; Adragna, P.; Bosman, M.; Burckhart, D.; Caprini, M.; Corso-Radu, A.; Costa, M.J.; Della Pietra, M.; Dotti, A.; Eschrich, I.; Ferrari, R.; Ferrer, M.L.; Gaudio, G.; Hadavand, H.; Hauschild, M.; Hillier, S.; Kehoe, B.; Kolos, S.; Kordas, K.; Mcpherson, R.

    2007-01-01

    ATLAS [ATLAS Collaboration, 'ATLAS Technical Proposal', CERN/LHCC/94-43, LHCC/P2, CERN, Geneva, Switzerland, 1994] is one of the four experiments under construction along the Large Hadron Collider (LHC) ring, which will produce interactions at a center-of-mass energy of 14 TeV at a 40 MHz rate. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common scalable distributed monitoring framework, which can be tuned for optimal use by the different ATLAS detectors at the various levels of the ATLAS data flow.

  7. The ATLAS Beam Conditions Monitor

    International Nuclear Information System (INIS)

    Cindro, V; Dolenc, I; Kramberger, G; Macek, B; Mandic, I; Mikuz', M; Zavrtanik, M; Dobos, D; Gorisek, A; Pernegger, H; Weilhammer, P; Frais-Koelbl, H; Griesmayer, E; Niegl, M; Kagan, H; Tardif, D; Trischuk, W

    2008-01-01

    Beam conditions and the potential detector damage resulting from their anomalies have pushed the LHC experiments to build their own beam monitoring devices. The ATLAS Beam Conditions Monitor (BCM) consists of two stations (forward and backward) of detectors, each with four modules. The sensors are required to tolerate doses up to 500 kGy and in excess of 10¹⁵ charged particles per cm² over the lifetime of the experiment. Each module includes two diamond sensors read out in parallel. The stations are located symmetrically around the interaction point, positioning the diamond sensors at z = ±184 cm and r = 55 mm (a pseudorapidity of about 4.2). Equipped with fast electronics (2 ns rise time) these stations measure time-of-flight and pulse height to distinguish events resulting from lost beam particles from those normally occurring in proton-proton interactions. The BCM also provides a measurement of bunch-by-bunch luminosities in ATLAS by counting in-time and out-of-time collisions. Eleven detector modules have been fully assembled and tested. Tests performed range from characterisation of diamond sensors to full module tests with electron sources and in proton testbeams. Testbeam results from the CERN SPS show a module median signal-to-noise ratio of 11:1 for minimum ionising particles incident at a 45-degree angle. The best eight modules were installed on the ATLAS pixel support frame that was inserted into ATLAS in the summer of 2007. This paper describes the full BCM detector system along with simulation studies being used to develop the logic in the back-end FPGA coincidence hardware.
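
    A back-of-envelope check of the time-of-flight separation described above (illustrative numbers, not reconstruction code): products of a collision at the interaction point reach the two stations simultaneously, whereas a background particle travelling along the beam crosses one station roughly 2z/c ≈ 12 ns before the other.

        C = 29.9792458      # speed of light [cm/ns]
        Z_STATION = 184.0   # station position along the beam axis [cm]

        # Collision products from z = 0 arrive at both stations after |z|/c, so dt = 0;
        # a background particle travelling along the beam crosses one station, then the other.
        dt_collision = Z_STATION / C - Z_STATION / C
        dt_background = 2 * Z_STATION / C

        print(f"collision:  dt = {dt_collision:.1f} ns")
        print(f"background: dt = {dt_background:.1f} ns")   # about 12.3 ns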

  8. The ATLAS Beam Conditions Monitor

    Energy Technology Data Exchange (ETDEWEB)

    Cindro, V; Dolenc, I; Kramberger, G; Macek, B; Mandic, I; Mikuz' , M; Zavrtanik, M [Jozef Stefan Institute and Department of Physics, University of Ljubljana, Ljubljana (Slovenia); Dobos, D; Gorisek, A; Pernegger, H; Weilhammer, P [CERN, Geneva (Switzerland); Frais-Koelbl, H; Griesmayer, E; Niegl, M [University of Applied Sciences Wiener Neustadt and Fotec, Wiener Neustadt (Austria); Kagan, H [Ohio State University, Columbus (United States); Tardif, D; Trischuk, W [University of Toronto, Toronto (Canada)], E-mail: william@physics.utoronto.ca

    2008-02-15

    Beam conditions and the potential detector damage resulting from their anomalies have pushed the LHC experiments to build their own beam monitoring devices. The ATLAS Beam Conditions Monitor (BCM) consists of two stations (forward and backward) of detectors, each with four modules. The sensors are required to tolerate doses up to 500 kGy and in excess of 10¹⁵ charged particles per cm² over the lifetime of the experiment. Each module includes two diamond sensors read out in parallel. The stations are located symmetrically around the interaction point, positioning the diamond sensors at z = ±184 cm and r = 55 mm (a pseudorapidity of about 4.2). Equipped with fast electronics (2 ns rise time) these stations measure time-of-flight and pulse height to distinguish events resulting from lost beam particles from those normally occurring in proton-proton interactions. The BCM also provides a measurement of bunch-by-bunch luminosities in ATLAS by counting in-time and out-of-time collisions. Eleven detector modules have been fully assembled and tested. Tests performed range from characterisation of diamond sensors to full module tests with electron sources and in proton testbeams. Testbeam results from the CERN SPS show a module median signal-to-noise ratio of 11:1 for minimum ionising particles incident at a 45-degree angle. The best eight modules were installed on the ATLAS pixel support frame that was inserted into ATLAS in the summer of 2007. This paper describes the full BCM detector system along with simulation studies being used to develop the logic in the back-end FPGA coincidence hardware.

  9. ATLAS BigPanDA Monitoring

    CERN Document Server

    Padolski, Siarhei; The ATLAS collaboration; Klimentov, Alexei; Korchuganova, Tatiana

    2017-01-01

    BigPanDA monitoring is a web-based application which processes and presents the states of Production and Distributed Analysis (PanDA) system objects. Analysing hundreds of millions of computation entities, such as events or jobs, BigPanDA monitoring builds reports at different scales and levels of abstraction in real time. The information provided allows users to drill down into the reason for a particular failure or to observe the bigger picture of the system, such as tracking the performance of the computation nucleus and its satellites or the progress of a whole production campaign. The PanDA system was originally developed for the ATLAS experiment and today effectively manages more than 2 million jobs per day distributed over 170 computing centers worldwide. BigPanDA is its core component, commissioned in the middle of 2014, and is now the primary source of information for ATLAS users about the state of their computations and the source of decision-support information for shifters, operators and managers. In this wor...

  10. ATLAS BigPanDA Monitoring

    CERN Document Server

    Padolski, Siarhei; The ATLAS collaboration

    2017-01-01

    BigPanDA monitoring is a web-based application that processes and presents the states of Production and Distributed Analysis (PanDA) system objects. Analysing hundreds of millions of computation entities, such as events or jobs, BigPanDA monitoring builds reports at different scales and levels of abstraction in real time. The information provided allows users to drill down into the reason for a particular failure or to observe the bigger picture of the system, such as tracking the performance of the computation nucleus and its satellites or the progress of a whole production campaign. The PanDA system was originally developed for the ATLAS experiment and today effectively manages more than 2 million jobs per day distributed over 170 computing centers worldwide. BigPanDA is its core component, commissioned in the middle of 2014, and is now the primary source of information for ATLAS users about the state of their computations and the source of decision-support information for shifters, operators and managers. In this work...

  11. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    Science.gov (United States)

    Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration

    2014-06-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort has been invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind visual identity of the provided graphical elements, and re-usability of the visualization bits across the different tools. A rich family of various filtering and searching options enhancing available user interfaces comes naturally with the data and visualization layer separation. With a variety of reliable monitoring data accessible through standardized interfaces, the possibility of automating actions under well-defined conditions correlating multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.
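
    As a hedged sketch of the automated exclusion mentioned above (thresholds, metric names and site data are invented, not the actual ADC policy): a simple rule correlating two monitoring sources decides whether a site should be excluded or kept.

        # Invented example data and thresholds; not the actual ADC exclusion policy.
        sites = {
            "SITE_A": {"transfer_efficiency": 0.97, "job_failure_rate": 0.03},
            "SITE_B": {"transfer_efficiency": 0.62, "job_failure_rate": 0.41},
        }

        def should_exclude(metrics, min_transfer_eff=0.80, max_failure_rate=0.30):
            """Exclude a site only when both independent sources indicate degradation."""
            return (metrics["transfer_efficiency"] < min_transfer_eff
                    and metrics["job_failure_rate"] > max_failure_rate)

        for name, metrics in sites.items():
            print(f"{name}: {'exclude' if should_exclude(metrics) else 'keep'}")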

  12. Online remote monitoring facilities for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Feng, E; Hauser, R; Yakovlev, A; Zaytsev, A

    2010-01-01

    ATLAS is one of the four LHC experiments, which started operating in collision mode in 2010. The ATLAS apparatus itself, as well as the Trigger and DAQ systems, are extremely complex facilities which have been built up by a collaboration including 144 institutes from 33 countries. The effective running of the experiment is supported by a large number of experts distributed all over the world. This paper describes the online remote monitoring system which has been developed in the ATLAS TDAQ community in order to support efficient participation of experts from remote institutes in the exploitation of the experiment. The facilities provided by the remote monitoring system range from Web-based access to the general status and data quality of the ongoing data-taking session, to a scalable service providing real-time mirroring of the detailed monitoring data from the experimental area to dedicated computers in the CERN public network, where this data is made available to remote users t...

  13. World-wide online monitoring interface of the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Mineev, M; Hauser, R; Salnikov, A

    2014-01-01

    The ATLAS collaboration comprises more than 3000 members located all over the world. The efficiency of the experiment can be improved by allowing system experts not present on site to follow the ATLAS operations in real time, spotting potential problems which otherwise may remain unattended for a non-negligible time. Taking into account the wide geographical spread of the ATLAS collaboration, the solution to this problem is to make all monitoring information available world-wide with minimal access latency. We have implemented a framework which defines a standard approach for retrieving arbitrary monitoring information from the ATLAS private network via HTTP. An information request is made by specifying one of the predefined URLs with some optional parameters refining the data which has to be shipped back in XML format. The framework takes care of receiving, parsing and forwarding such requests to the appropriate plugins. The plugins retrieve the requested data and convert it to XML (or optionally to JSON) format...
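
    A sketch of the access pattern described above; the base URL, request name, parameters and XML layout are invented for illustration and are not the actual ATLAS endpoints: a client builds one of the predefined URLs and parses the XML shipped back.

        import xml.etree.ElementTree as ET

        def monitoring_url(base_url, request, **params):
            """Build one of the predefined request URLs (names are illustrative only)."""
            query = "&".join(f"{key}={value}" for key, value in params.items())
            return f"{base_url}/{request}?{query}" if query else f"{base_url}/{request}"

        def parse_xml_reply(xml_text):
            """Turn an assumed flat <info name=... value=.../> reply into a dictionary."""
            root = ET.fromstring(xml_text)
            return {node.get("name"): node.get("value") for node in root.iter("info")}

        print(monitoring_url("https://example.invalid/tdaq-monitoring", "run_status", partition="ATLAS"))
        print(parse_xml_reply('<reply><info name="RunNumber" value="281234"/>'
                              '<info name="State" value="RUNNING"/></reply>'))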

  14. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    Marjanovic, Marija; The ATLAS collaboration

    2018-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibers to photo-multiplier tubes (PMTs), located in the outer part of the calorimeter. The readout is segmented into about 5000 cells, each one being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of the full readout chain during the data taking, a set of calibration sub-systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser, charge injection elements, and an integrator-based readout system. Combined information from all systems allows monitoring and equalizing the calorimeter response at each stage of the signal evolution, from scintillation light to digitization. Calibration runs are monitored from a data quality perspective and u...

  15. First Results from the Online Radiation Dose Monitoring System in ATLAS experiment

    CERN Document Server

    Mandić, I; The ATLAS collaboration; Deliyergiyev, M; Gorišek, A; Kramberger, G; Mikuž, M; Franz, S; Hartert, J; Dawson, I; Miyagawa, P S; Nicolas, L

    2011-01-01

    High radiation doses which will accumulate in components of the ATLAS experiment during data taking will cause damage to detectors and readout electronics. It is therefore important to continuously monitor the doses to estimate the level of degradation caused by radiation. The online radiation monitoring system measures the ionizing dose in SiO2 and the fluences of 1-MeV(Si) equivalent neutrons and thermal neutrons at several locations in the ATLAS detector. In this paper measurements collected during two years of ATLAS data taking are presented and compared to predictions from radiation background simulations.

  16. Monitoring of Computing Resource Use of Active Software Releases in ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2016-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...

  17. Monitoring of computing resource use of active software releases at ATLAS

    Science.gov (United States)

    Limosani, Antonio; ATLAS Collaboration

    2017-10-01

    The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries and collected into pre-formatted auto-generated Web pages, which allow the ATLAS developer community to track the performance of their algorithms. This information is however preferentially filtered to domain leaders and developers through the use of JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse of the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.
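
    A rough sketch, unrelated to the actual PerfMon or MemoryMonitor implementations, of the kind of per-step bookkeeping such monitoring aggregates: sampling the job's own CPU time and maximum resident memory with the Python standard library (Unix only).

        import resource
        import time

        def snapshot(label):
            """Print the job's own CPU time and maximum resident memory so far."""
            usage = resource.getrusage(resource.RUSAGE_SELF)
            print(f"{label:>12}: cpu = {usage.ru_utime + usage.ru_stime:6.2f} s, "
                  f"max RSS = {usage.ru_maxrss / 1024:8.1f} MB")   # ru_maxrss is in kB on Linux

        snapshot("start")
        data = [i * i for i in range(2_000_000)]    # stand-in for a reconstruction step
        time.sleep(0.1)
        snapshot("after step")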

  18. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    Cortes-Gonzalez, Arely; The ATLAS collaboration

    2017-01-01

    The ATLAS Tile Calorimeter is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes, located in the outer part of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two photomultiplier in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during the data taking, a set of calibration systems is used. The calibration system comprises Cesium radioactive sources, laser, charge injection elements and an integrator based readout system. Combined information from all systems allows to monitor and equalise the calorimeter r...

  19. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00445232; The ATLAS collaboration

    2016-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs), located on the outside of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during the data taking, a set of calibration systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser and charge injection elements and it allows to monitor and equalize the calorimeter response at each stage of the signal production, from scin...

  20. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00445232; The ATLAS collaboration

    2017-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs), located on the outside of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during the data taking, a set of calibration systems is used. The TileCal calibration system comprises cesium radioactive sources, Laser and charge injection elements, and allows for monitoring and equalization of the calorimeter response at each stage of the signal production, ...

  1. ATLAS Tier-2 monitoring system for the German cloud

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Joerg; Quadt, Arnulf; Weber, Pavel [II. Physikalisches Institut, Georg-August-Universitaet, Goettingen (Germany)

    2011-07-01

    The ATLAS tier centers in Germany provide their computing resources for the ATLAS experiment. The stable and sustainable operation of this so-called DE-cloud heavily relies on effective monitoring of the Tier-1 center GridKa and its associated Tier-2 centers. Central and local grid information services constantly collect and publish status information from many computing resources and sites. The cloud monitoring system discussed in this presentation evaluates the information related to different cloud resources and provides a coherent and comprehensive view of the cloud. The main monitoring areas covered by the tool are data transfers, cloud software installation, site batch systems, and Service Availability Monitoring (SAM). The cloud monitoring system consists of an Apache-based Python application, which retrieves the information and publishes it on a generated HTML web page. This results in an easy-to-use web interface for the limited number of sites in the cloud, with fast and efficient access to the required information, ranging from a high-level summary for the whole cloud to detailed diagnostics for single site services. This approach provides efficient identification of correlated site problems and simplifies administration on both the cloud and site level.
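
    An illustrative sketch only, with placeholder site names and statuses: per-site results for a few monitoring areas are collected and rendered as a single HTML summary page, mirroring the cloud-summary-to-site-detail layout described above.

        from html import escape

        # Placeholder per-site results for a few monitoring areas.
        sites = {
            "SITE_1": {"transfers": "OK",   "software": "OK", "SAM": "OK"},
            "SITE_2": {"transfers": "WARN", "software": "OK", "SAM": "OK"},
        }

        def render_summary(sites):
            """Render the cloud overview as a single HTML table, one row per site."""
            header = "<tr><th>site</th><th>transfers</th><th>software</th><th>SAM</th></tr>"
            rows = "".join(
                "<tr><td>" + escape(name) + "</td>"
                + "".join(f"<td>{escape(status)}</td>" for status in checks.values())
                + "</tr>"
                for name, checks in sites.items()
            )
            return f"<html><body><h1>Cloud summary</h1><table>{header}{rows}</table></body></html>"

        print(render_summary(sites))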

  2. Monitoring of computing resource use of active software releases at ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219183; The ATLAS collaboration

    2017-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and dis...

  3. Monitored Drift Chambers in the ATLAS Detector

    CERN Multimedia

    Herten, G

    Monitored Drift Chambers (MDT) are used in the ATLAS Detector to measure the momentum of high energy muons. They consist of drift tubes, which are filled with an Ar-CO2 gas mixture at 3 bar gas pressure. About 1200 drift chambers are required for ATLAS. They are up to 6 m long. Nevertheless the position of every wire needs to be known with a precision of 20 µm within a chamber. In addition, optical alignment sensors are required to measure the relative position of adjacent chambers with a precision of 30 µm. This gigantic task seems impossible at first glance. Indeed it took many years of R&D to invent the right tools and methods before the first chamber could be built according to specifications. Today, at the time when 50% of the chambers have been produced, we are confident that the goal for ATLAS can be reached. The mechanical precision of the chambers could be verified with the X-ray tomograph at CERN. This ingenious device, developed for the MDT system, is able to measure the wire position insid...

  4. Monitoring individual traffic flows within the ATLAS TDAQ network

    International Nuclear Information System (INIS)

    Sjoen, R; Batraneanu, S M; Leahu, L; Martin, B; Al-Shabibi, A; Stancu, S; Ciobotaru, M

    2010-01-01

    The ATLAS data acquisition system consists of four different networks interconnecting up to 2000 processors using up to 200 edge switches and five multi-blade chassis devices. The architecture of the system has been described in [1] and its operational model in [2]. Classical SNMP-based network monitoring provides statistics on aggregate traffic, but for performance monitoring and troubleshooting purposes there was an imperative need to identify and quantify single traffic flows. sFlow [3] is an industry standard based on statistical sampling which attempts to provide a solution to this. Due to the size of the ATLAS network, the collection and analysis of the sFlow data from all devices generates a data handling problem of its own. This paper describes how this problem is addressed by making it possible to collect and store data either centrally or in a distributed manner, according to need. The methods used to present the results in a relevant fashion for system analysts are discussed, and we explore the possibilities and limitations of this diagnostic tool, giving an example of its use in solving system problems that arise during ATLAS data taking.
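
    A toy illustration of the statistical-sampling idea behind sFlow (not the sFlow protocol or the ATLAS tool itself; flow names and sizes are invented): keep roughly one packet in every N and scale the per-flow byte counts back up by N to estimate the size of individual flows.

        import random
        from collections import Counter

        SAMPLING_RATE = 100     # keep roughly 1 packet in 100

        def estimate_flows(packets):
            """packets: iterable of (flow id, bytes); returns estimated bytes per flow."""
            sampled = Counter()
            for flow_id, size in packets:
                if random.randrange(SAMPLING_RATE) == 0:
                    sampled[flow_id] += size
            return {flow: size * SAMPLING_RATE for flow, size in sampled.items()}

        # Synthetic traffic: one heavy flow and one light flow (names are invented).
        traffic = [("readout-to-filter", 1500)] * 50_000 + [("control", 200)] * 5_000
        random.shuffle(traffic)
        print(estimate_flows(traffic))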

  5. ATLAS Fast Physics Monitoring: TADA

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00375930; The ATLAS collaboration; Elsing, Markus

    2017-01-01

    The ATLAS experiment at the LHC has been recording data from proton-proton collisions with 13 TeV center-of-mass energy since spring 2015. The collaboration is using a fast physics monitoring framework (TADA) to automatically perform a broad range of fast searches for early signs of new physics and to monitor the data quality across the year, with the full analysis-level calibrations applied to the rapidly growing data. TADA is designed to provide fast feedback directly after the collected data has been fully calibrated and processed at the Tier-0, the CERN Data Center. The system can monitor a large range of physics channels, offline data quality and physics performance quantities with nearly final analysis-level object calibrations. TADA output is available on a website accessible by the whole collaboration that gets updated twice a day with the data from newly processed runs. Hints of potentially interesting physics signals or performance issues identified in this way are reported to be followed up by physics or combin...

  6. ATLAS Fast Physics Monitoring: TADA

    CERN Document Server

    Elsing, Markus; The ATLAS collaboration; Sabato, Gabriele; Kamioka, Shusei; Nairz, Armin Michael; Moyse, Edward; Gumpert, Christian

    2016-01-01

    The ATLAS Experiment at the LHC has been recording data from proton-proton collisions at 13 TeV center-of-mass energy since spring 2015. The collaboration is using a fast physics monitoring framework (TADA) to automatically perform a broad range of fast searches for early signs of new physics and to monitor the data quality across the year with the full analysis level calibrations applied to the rapidly growing data. TADA is designed to provide fast feedback directly after the collected data has been fully calibrated and processed at the Tier-0. The system can monitor a large range of physics channels, offline data quality and physics performance quantities with nearly final analysis-level object calibrations. TADA output is available on a website accessible by the whole collaboration that gets updated twice a day with the data from newly processed runs. Hints of potentially interesting physics signals or performance issues identified in this way are reported to be followed up by physics or combined performance groups...

  7. ATLAS off-Grid sites (Tier 3) monitoring. From local fabric monitoring to global overview of the VO computing activities

    CERN Document Server

    PETROSYAN, A; The ATLAS collaboration; BELOV, S; ANDREEVA, J; KADOCHNIKOV, I

    2012-01-01

    The ATLAS Distributed Computing activities have so far concentrated on the "central" part of the experiment computing system, namely the first 3 tiers (the CERN Tier-0, 10 Tier-1 centers and over 60 Tier-2 sites). Many ATLAS Institutes and National Communities have deployed (or intend to deploy) Tier-3 facilities. Tier-3 centers consist of non-pledged resources, which are usually dedicated to data analysis tasks by the geographically close or local scientific groups, and which usually comprise a range of architectures without Grid middleware. Therefore a substantial part of the ATLAS monitoring tools, which make use of Grid middleware, cannot be used for a large fraction of Tier-3 sites. The presentation will describe the T3mon project, which aims to develop a software suite for monitoring the Tier-3 sites, both from the perspective of the local site administrator and that of the ATLAS VO, thereby enabling a global view of the contribution from Tier-3 sites to the ATLAS computing activities. Special attention in p...

  8. ATLAS Tile calorimeter calibration and monitoring systems

    Science.gov (United States)

    Chomont, Arthur; ATLAS Collaboration

    2017-11-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs), located on the outside of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during the data taking, a set of calibration systems is used. The TileCal calibration system comprises cesium radioactive sources, Laser and charge injection elements, and allows for monitoring and equalization of the calorimeter response at each stage of the signal production, from scintillation light to digitization. Based on LHC Run 1 experience, several calibration systems were improved for Run 2. The lessons learned, the modifications, and the current LHC Run 2 performance are discussed.

  9. ATLAS Offline Software Performance Monitoring and Optimization

    CERN Document Server

    Chauhan, N; Kittelmann, T; Langenberg, R; Mandrysch , R; Salzburger, A; Seuster, R; Ritsch, E; Stewart, G; van Eldik, N; Vitillo, R

    2014-01-01

    In a complex multi-developer, multi-package software environment, such as the ATLAS offline Athena framework, tracking the performance of the code can be a non-trivial task in itself. In this paper we describe improvements in the instrumentation of ATLAS offline software that have given considerable insight into the performance of the code and helped to guide optimisation. Code can be instrumented firstly using the PAPI tool, which is a programming interface for accessing hardware performance counters. PAPI events can count floating point operations, cycles, instructions and cache accesses. Triggering PAPI to start/stop counting for each algorithm and processed event gives a good understanding of the whole algorithm level performance of ATLAS code. Further data can be obtained using Pin, a dynamic binary instrumentation tool. Pintools can be used to obtain similar statistics as PAPI, but advantageously without requiring recompilation of the code. Fine grained routine and instruction level instrumentation is...
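
    A minimal sketch of the per-algorithm bookkeeping idea, using a wall-clock timer as a stand-in for the PAPI hardware counters described above; the class and hook names are hypothetical and only illustrate starting and stopping a measurement around each algorithm and event.

        # Sketch only: a wall-clock profiler accumulating a cost metric per algorithm.
        import time
        from collections import defaultdict

        class AlgorithmProfiler:
            """Accumulate a per-algorithm cost metric over many processed events."""

            def __init__(self):
                self.totals = defaultdict(float)
                self.calls = defaultdict(int)

            def measure(self, name, func, *args, **kwargs):
                # In the real system a hardware counter set would be started here instead of a timer.
                t0 = time.perf_counter()
                result = func(*args, **kwargs)
                self.totals[name] += time.perf_counter() - t0
                self.calls[name] += 1
                return result

            def report(self):
                for name in sorted(self.totals, key=self.totals.get, reverse=True):
                    print(f'{name}: {self.totals[name]:.3f} s over {self.calls[name]} calls')

        profiler = AlgorithmProfiler()
        profiler.measure('TrackFinder', sum, range(1000000))
        profiler.report()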

  10. ATLAS Offline Software Performance Monitoring and Optimization

    CERN Document Server

    Chauhan, N; The ATLAS collaboration; Kittelmann, T; Langenberg, R; Mandrysch , R; Salzburger, A; Seuster, R; Ritsch, E; Stewart, G; van Eldik, N; Vitillo, R

    2013-01-01

    In a complex multi-developer, multi-package software environment, such as the ATLAS offline Athena framework, tracking the performance of the code can be a non-trivial task in itself. In this paper we describe improvements in the instrumentation of ATLAS offline software that have given considerable insight into the performance of the code and helped to guide optimisation. Code can be instrumented firstly using the PAPI tool, which is a programming interface for accessing hardware performance counters. PAPI events can count floating point operations, cycles, instructions and cache accesses. Triggering PAPI to start/stop counting for each algorithm and processed event gives a good understanding of the whole algorithm level performance of ATLAS code. Further data can be obtained using Pin, a dynamic binary instrumentation tool. Pintools can be used to obtain similar statistics as PAPI, but advantageously without requiring recompilation of the code. Fine grained routine and instruction level instrumentation is...

  11. The next generation of the ATLAS PanDA Monitoring System

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Klimentov, A; Love, P; Potekhin, M; Wenaus, T

    2014-01-01

    For many years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, with up to 1M completed jobs/day in 2013. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. Outside of ATLAS, the PanDA system is also being used in projects like AMS, LSST and a few others. It is currently undergoing a significant redesign, both of the core server components responsible for workload management, brokerage and data access, and of the monitoring part, which is critically important for efficient execution of the workflow in a way that is transparent to the user and also provides an effective set of tools for operational support. The new generation of the PanDA Monitoring Service is designed based on a proven, scalable, industry-standard Web Fr...

  12. Frameworks to monitor and predict resource usage in the ATLAS High Level Trigger

    CERN Document Server

    Martin, Tim; The ATLAS collaboration

    2016-01-01

    The ATLAS High Level Trigger Farm consists of around 30,000 CPU cores which filter events at up to 100 kHz input rate. A costing framework is built into the high level trigger; this enables detailed monitoring of the system and allows for data-driven predictions to be made utilising specialist datasets. This talk will present an overview of how ATLAS collects in-situ monitoring data on both CPU usage and dataflow over the data-acquisition network during the trigger execution, and how these data are processed to yield both low level monitoring of individual selection-algorithms and high level data on the overall performance of the farm. For development and prediction purposes, ATLAS uses a special `Enhanced Bias' event selection. This mechanism will be explained along with how it is used to profile expected resource usage and output event-rate of new physics selections, before they are executed on the actual high level trigger farm.

  13. Intensive irradiation studies, monitoring and commissioning data analysis on the ATLAS MDT chambers

    CERN Document Server

    AUTHOR|(CDS)2071390; Susinno, Giancarlo

    2007-01-01

    The ATLAS MDT chambers have been extensively studied, ranging from irradiation tests to commissioning activities. First, a detailed description of high rate and high background tests is given. These tests have been carried out on a small ATLAS-like MDT chamber, by the Cosenza and Roma TRE groups. The precision tracking chambers of the muon spectrometer, in fact, have to operate for more than 10 years in the harsh LHC background, due mainly to low energy neutrons and photons. Aging effects, such as the deterioration of the tubes themselves, can appear and difficulties in pattern recognition and tracking may occur. Moreover an upgrade to Super-LHC is foreseen. Then, there is an accurate description of the MDTGnam package, the official software for the on-line monitoring of MDT performances. When dealing with a complex apparatus, such as the ATLAS experiment, an on-line monitoring system is a fundamental tool. The GNAM project, developed by the Cosenza, Pavia, Pisa and Napoli groups, is a monitoring framework to be us...

  14. Radiation Damage Monitoring in the ATLAS Pixel Detector

    CERN Document Server

    Seidel, S

    2013-01-01

    We describe the implementation of radiation damage monitoring using measurement of leakage current in the ATLAS silicon pixel sensors. The dependence of the leakage current upon the integrated luminosity is presented. The measurement of the radiation damage corresponding to integrated luminosity 5.6 fb$^{-1}$ is presented along with a comparison to the theoretical model.
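
    The measurement relies on the fact that the increase in bulk leakage current scales linearly with the 1-MeV-neutron-equivalent fluence, Delta_I = alpha * Phi_eq * V. The sketch below only illustrates this relation; the damage constant, fluence-per-luminosity factor and silicon volume are assumed example values, not ATLAS Pixel numbers.

        # Back-of-the-envelope sketch of the leakage-current method; all constants are assumptions.
        ALPHA = 4e-17            # A/cm, current-related damage constant (assumed 20 C reference value)
        FLUENCE_PER_FB = 2e12    # n_eq / cm^2 per fb^-1 at the sensor location (assumed)

        def expected_leakage_current(int_lumi_fb, silicon_volume_cm3):
            """Expected increase in leakage current after int_lumi_fb of integrated luminosity."""
            phi_eq = FLUENCE_PER_FB * int_lumi_fb        # 1-MeV-neutron-equivalent fluence
            return ALPHA * phi_eq * silicon_volume_cm3   # amperes

        # e.g. 5.6 fb^-1 on an assumed 0.03 cm^3 of depleted silicon
        print(expected_leakage_current(5.6, 0.03))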

  15. First Results from the Online Radiation Dose Monitoring System in ATLAS experiment

    CERN Document Server

    Mandić, I; The ATLAS collaboration; Deliyergiyev, M; Gorišek, A; Kramberger, G; Mikuž, M; Franz, S; Hartert, J; Dawson, I; Miyagawa, P; Nicolas, L

    2011-01-01

    The high radiation doses which will accumulate in components of the ATLAS experiment during data taking will cause damage to detectors and readout electronics. It is therefore important to continuously monitor the doses to estimate the level of degradation caused by radiation. The online radiation monitoring system measures the ionizing dose in SiO2, the displacement damage in silicon in terms of 1-MeV(Si) equivalent neutron fluence, and the fluence of thermal neutrons at several locations in the ATLAS detector. In this paper the design of the system, results of measurements, and a comparison of measured integrated doses and fluences with predictions from FLUKA simulation are presented.

  16. ATLAS Tile Calorimeter calibration and monitoring systems

    Science.gov (United States)

    Cortés-González, Arely

    2018-01-01

    The ATLAS Tile Calorimeter is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes, located in the outer part of the calorimeter. Neutral particles may also produce a signal after interacting with the material and producing charged particles. The readout is segmented into about 5000 cells, each of them being read out by two photomultipliers in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during the data taking, a set of calibration systems is used. This comprises Cesium radioactive sources, Laser, charge injection elements and an integrator based readout system. Information from all systems allows the calorimeter response to be monitored and equalised at each stage of the signal production, from scintillation light to digitisation. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. The data quality efficiency achieved during 2016 was 98.9%. The calibration and stability results reported here show that the TileCal performance is within the design requirements and has made an essential contribution to reconstructed objects and physics results.
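
    As a simplified sketch of how per-stage calibration factors combine, the snippet below treats the cell response correction as a product of charge-injection, test-beam, Cesium and Laser factors; the factor names, values and the simple product are assumptions for illustration, not the exact TileCal reconstruction formula.

        # Simplified, illustrative calibration chain: one correction factor per stage.
        def calibrated_energy(raw_adc, adc_to_pc, pc_to_gev, cesium_factor, laser_factor):
            """Apply each calibration stage in turn to one channel amplitude (simplified)."""
            return raw_adc * adc_to_pc * pc_to_gev * cesium_factor * laser_factor

        # Example with made-up factors: charge injection (ADC -> pC), test beam (pC -> GeV),
        # Cesium and Laser corrections for drifts of the optics and PMT gains.
        print(calibrated_energy(raw_adc=850.0, adc_to_pc=1.0 / 1.29, pc_to_gev=1.0 / 1.05,
                                cesium_factor=1.002, laser_factor=0.998))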

  17. The ATLAS PanDA Monitoring System and its Evolution

    CERN Document Server

    Klimentov, A; The ATLAS collaboration; Potekhin, M; Wenaus, T

    2011-01-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on PanDA design in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Important to meeting these and other requirements is a comprehensive monitoring system. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. We decided to migrat...

  18. The ATLAS PanDA Monitoring System and its Evolution

    CERN Document Server

    Klimentov, A; The ATLAS collaboration; Potekhin, M; Wenaus, T

    2010-01-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on PanDA design in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Important to meeting these and other requirements is a comprehensive monitoring system. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. We decided to migrat...

  19. Monitoring individual traffic flows within the ATLAS TDAQ network

    CERN Document Server

    Sjoen, R; Ciobotaru, M; Batraneanu, S M; Leahu, L; Martin, B; Al-Shabibi, A

    2010-01-01

    The ATLAS data acquisition system consists of four different networks interconnecting up to 2000 processors using up to 200 edge switches and five multi-blade chassis devices. The architecture of the system has been described in [1] and its operational model in [2]. Classical, SNMP-based, network monitoring provides statistics on aggregate traffic, but for performance monitoring and troubleshooting purposes there was an imperative need to identify and quantify single traffic flows. sFlow [3] is an industry standard based on statistical sampling which attempts to provide a solution to this. Due to the size of the ATLAS network, the collection and analysis of the sFlow data from all devices generates a data handling problem of its own. This paper describes how this problem is addressed by making it possible to collect and store data either centrally or distributed according to need. The methods used to present the results in a relevant fashion for system analysts are discussed and we explore the possibilities a...

  20. Rate Predictions and Trigger/DAQ Resource Monitoring in ATLAS

    CERN Document Server

    Schaefer, D M; The ATLAS collaboration

    2012-01-01

    Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever increasing rate. The ATLAS experiment successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) consists of a custom-designed hardware trigger which seeds two higher software based trigger levels. Over 300 triggers compose a trigger menu which selects physics signatures such as electrons, muons, particle jets, etc. Each trigger consumes computing resources of the ATLAS trigger system and offline storage. The LHC instantaneous luminosity conditions, desired physics goals of the collaboration, and the limits of the trigger infrastructure determine the composition of the ATLAS trigger menu. We describe a trigger monitoring framework for computing the costs of individual trigger algorithms such as data request rates and CPU consumption. This framework has been used...

  1. ATLAS fast physics monitoring: TADA

    Science.gov (United States)

    Sabato, G.; Elsing, M.; Gumpert, C.; Kamioka, S.; Moyse, E.; Nairz, A.; Eifert, T.; ATLAS Collaboration

    2017-10-01

    The ATLAS experiment at the LHC has been recording data from proton-proton collisions with 13 TeV center-of-mass energy since spring 2015. The collaboration is using a fast physics monitoring framework (TADA) to automatically perform a broad range of fast searches for early signs of new physics and to monitor the data quality across the year with the full analysis level calibrations applied to the rapidly growing data. TADA is designed to provide fast feedback directly after the collected data has been fully calibrated and processed at the Tier-0. The system can monitor a large range of physics channels, offline data quality and physics performance quantities. TADA output is available on a website accessible by the whole collaboration. It gets updated twice a day with the data from newly processed runs. Hints of potentially interesting physics signals or performance issues identified in this way are reported to be followed up by physics or combined performance groups. The note also reports on the technical aspects of TADA: the software structure to obtain the input TAG files, the framework workflow and structure, the webpage and its implementation.

  2. The Education and Outreach Program of ATLAS

    CERN Multimedia

    Barnett, M.

    2006-01-01

    The ATLAS Education and Outreach (E&O) program began in 1997, but the advent of LHC has placed a new urgency in our efforts. Even a year away, we can feel the approaching impact of starting an experiment that could make revolutionary discoveries. The public and teachers are beginning to turn their attention our way, and the news media are showing growing interest in ATLAS. When data taking begins, the interest will peak, and the demands on us are likely to be substantial. The collaboration is responding to this challenge in a number of ways. ATLAS management has begun consultation with experts. The official budget for the E&O group has been growing, as have the contributions of many ATLAS institutions. The number of collaboration members joining these efforts has grown, and their time and effort is increasing. We are in ongoing consultation with the CERN Public Affairs Office, as well as the other LHC experiments and the European Particle Physics Outreach Group. The E&O group has expanded the scope...

  3. Processing and Quality Monitoring for the ATLAS Tile Hadronic Calorimeter Data

    CERN Document Server

    Burghgrave, Blake; The ATLAS collaboration

    2016-01-01

    We present an overview of Data Processing and Data Quality (DQ) Monitoring for the ATLAS Tile Hadronic Calorimeter. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. Data quality in physics runs is monitored extensively and continuously. Any problems are reported and immediately investigated. The DQ efficiency achieved was 99.6% in 2012 and 100% in 2015, after the detector maintenance in 2013-2014. Changes to detector status or calibrations are entered into the conditions database during a brief calibration loop between when a run ends and bulk processing begins. Bulk processed data is reviewed and certified for the ATLAS Good Run List if no problem is detected. Experts maintain the tools used by DQ shifters and the calibration teams during normal operation, and prepare new conditions for data reprocessing and MC production campaigns. Conditions data are stored in 3 databases: Online DB, Offline DB for data and a special DB for Monte Carlo. Database upd...

  4. Processing and Quality Monitoring for the ATLAS Tile Hadronic Calorimeter Data

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00354209; The ATLAS collaboration

    2017-01-01

    An overview is presented of Data Processing and Data Quality (DQ) Monitoring for the ATLAS Tile Hadronic Calorimeter. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. Data quality in physics runs is monitored extensively and continuously. Any problems are reported and immediately investigated. The DQ efficiency achieved was 99.6% in 2012 and 100% in 2015, after the detector maintenance in 2013-2014. Changes to detector status or calibrations are entered into the conditions database (DB) during a brief calibration loop between the end of a run and the beginning of bulk processing of data collected in it. Bulk processed data are reviewed and certified for the ATLAS Good Run List if no problem is detected. Experts maintain the tools used by DQ shifters and the calibration teams during normal operation, and prepare new conditions for data reprocessing and Monte Carlo (MC) production campaigns. Conditions data are stored in 3 databases: Online DB, Offline D...

  5. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    Boumediene, Djamel Eddine; The ATLAS collaboration

    2017-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs). PMT signals are then digitized at 40 MHz and stored on detector and are only transferred off detector once the first level trigger acceptance has been confirmed. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain, a set of calibration systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser, charge injection elements and an integrator b...

  6. Recent results from the ATLAS heavy ion program

    CERN Document Server

    Havener, Laura Brittany; The ATLAS collaboration

    2018-01-01

    The heavy-ion program in the ATLAS experiment at the LHC originated as an extensive program to probe and characterize the hot, dense matter created in relativistic lead-lead collisions. In recent years, the program has also broadened to a detailed study of collective behavior in smaller systems. In particular, the techniques used to study larger systems are also applied to proton-proton and proton-lead collisions over a wide range of particle multiplicities, to try to understand the early-time dynamics which lead to similar flow-like features in all of the systems. Another recent development is a program studying ultra-peripheral collisions, which provide gamma-gamma and photonuclear processes over a wide range of CM energy, to probe the nuclear wavefunction. This talk presents a subset of the most recent results from the ATLAS experiment based on Run 1 and Run 2 data, including measurements of collectivity over a wide range of collision systems, potential nPDF modifications — using electroweak bosons,...

  7. Experience with the custom-developed ATLAS Offline Trigger Monitoring Framework and Reprocessing Infrastructure

    CERN Document Server

    Bartsch, V

    2012-01-01

    After about two years of data taking with the ATLAS detector, considerable experience with the custom-developed trigger monitoring and reprocessing infrastructure has been collected. The trigger monitoring can be roughly divided into online and offline monitoring. The online monitoring calculates and displays all rates at every level of the trigger and evaluates up to 3000 data quality histograms. The data quality information relevant for physics analysis is checked and recorded automatically. The offline trigger monitoring provides information for the different physics-motivated trigger streams after a run has finished. Experts check this information, guided by algorithms that compare the current histograms with a reference, and record their assessment as so-called data quality defects, which are used to select data for physics analysis. In the first half of 2011 about three percent of all data had an intolerable defect resulting from the ATLAS trigger system. T...

  8. The ATLAS DDM Tracer monitoring framework

    International Nuclear Information System (INIS)

    Zang Dongsong; Garonne, Vincent; Barisits, Martin; Lassnig, Mario; Andrew Stewart, Graeme; Molfetas, Angelos; Beermann, Thomas

    2012-01-01

    The DDM Tracer monitoring framework is designed to trace and monitor ATLAS file operations on the Worldwide LHC Computing Grid. The volume of traces has increased significantly since the framework was put in production in 2009. There are now about 5 million trace messages every day, and peaks can approach 250 Hz, with peak rates continuing to climb, which poses a significant challenge to the current structure. Analysis of large datasets based on on-demand queries to the relational database management system (RDBMS), i.e. Oracle, can be problematic and have a significant effect on the database's performance. Consequently, we have investigated new high availability technologies such as messaging infrastructure, specifically ActiveMQ, and key-value stores. The advantages of key-value store technology are that such stores are distributed and highly scalable, and their write performance is usually much better than that of an RDBMS, all of which is very useful for the Tracer monitoring framework. Indexes and distributed counters have also been tested to improve query performance and provide almost real-time results. In this paper, the design principles, architecture and main characteristics of the Tracer monitoring framework are described and examples of its usage are presented.
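
    A hedged sketch of the distributed-counter idea, using Redis as an example key-value store: each incoming trace increments a per-day, per-dataset counter, so aggregate statistics can be read in near real time without querying the RDBMS. The message fields and key layout are assumptions, not the actual Tracer schema.

        # Sketch only: per-day, per-dataset trace counters in a key-value store.
        import json
        import redis   # pip install redis; assumes a local Redis instance for the demo

        r = redis.Redis(host='localhost', port=6379)

        def record_trace(message_json):
            trace = json.loads(message_json)
            day = trace['timestamp'][:10]   # e.g. '2012-05-14'
            # One hash per day, one field per (dataset, operation): O(1) increments,
            # near-real-time reads, and no extra load on the Oracle back-end.
            r.hincrby(f"traces:{day}", f"{trace['dataset']}:{trace['operation']}", 1)

        record_trace('{"timestamp": "2012-05-14T12:00:00", "dataset": "data12_8TeV.X", "operation": "get"}')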

  9. Frameworks to monitor and predict rates and resource usage in the ATLAS High Level Trigger

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219969; The ATLAS collaboration

    2017-01-01

    The ATLAS High Level Trigger Farm consists of around 40,000 CPU cores which filter events at an input rate of up to 100 kHz. A costing framework is built into the high level trigger, thus enabling detailed monitoring of the system and allowing for data-driven predictions to be made utilising specialist datasets. An overview is presented of how ATLAS collects in-situ monitoring data on CPU usage during the trigger execution, and how these data are processed to yield both low level monitoring of individual selection-algorithms and high level data on the overall performance of the farm. For development and prediction purposes, ATLAS uses a special ‘Enhanced Bias’ event selection. This mechanism is explained along with how it is used to profile expected resource usage and output event rate of new physics selections, before they are executed on the actual high level trigger farm.
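
    The core of the Enhanced Bias method is a weighted event count: each recorded event carries a weight that undoes its online prescale, so the predicted rate of a new selection is the weighted number of passing events divided by the live time of the sample. A minimal sketch of this idea, with illustrative variable names, follows.

        # Sketch of weighted rate prediction from an Enhanced-Bias-like sample.
        def predicted_rate(events, selection, live_time_seconds):
            """events: iterable of (weight, event_data); selection: callable returning True/False."""
            weighted_passes = sum(w for w, ev in events if selection(ev))
            return weighted_passes / live_time_seconds   # Hz

        # e.g. rate of a hypothetical 'missing ET > 150 GeV' selection
        sample = [(250.0, {'met': 180.0}), (1.0, {'met': 20.0}), (250.0, {'met': 90.0})]
        print(predicted_rate(sample, lambda ev: ev['met'] > 150.0, live_time_seconds=60.0))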

  10. Monitoring the US ATLAS Network Infrastructure with perfSONAR-PS

    International Nuclear Information System (INIS)

    McKee, Shawn; Lake, Andrew; Laurens, Philippe; Severini, Horst; Wlodek, Tomasz; Wolff, Stephen; Zurawski, Jason

    2012-01-01

    Global scientific collaborations, such as ATLAS, continue to push the network requirements envelope. Data movement in this collaboration will routinely include the regular exchange of petabytes of datasets between the collection and analysis facilities in the coming years. These requirements place a high emphasis on networks functioning at peak efficiency and availability; the lack thereof could mean critical delays in the overall scientific progress of distributed data-intensive experiments like ATLAS. Network operations staff routinely must deal with problems deep in the infrastructure; this may be as benign as replacing a failing piece of equipment, or as complex as dealing with a multi-domain path that is experiencing data loss. In either case, it is crucial that effective monitoring and performance analysis tools are available to ease the burden of management. We will report on our experiences deploying and using the perfSONAR-PS Performance Toolkit at ATLAS sites in the United States. This software creates a dedicated monitoring server, capable of collecting and performing a wide range of passive and active network measurements. Each independent instance is managed locally, but able to federate on a global scale; enabling a full view of the network infrastructure that spans domain boundaries. This information, available through web service interfaces, can easily be retrieved to create customized applications. The US ATLAS collaboration has developed a centralized “dashboard” offering network administrators, users, and decision makers the ability to see the performance of the network at a glance. The dashboard framework includes the ability to notify users (alarm) when problems are found, thus allowing rapid response to potential problems and making perfSONAR-PS crucial to the operation of our distributed computing infrastructure.
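
    A hedged sketch of the dashboard alarm idea: flag source/destination pairs whose measured packet loss or achievable throughput crosses a threshold. The measurement dictionary layout and the thresholds are hypothetical, not the perfSONAR-PS data format.

        # Sketch only: threshold-based alarms over a list of link measurements.
        LOSS_THRESHOLD = 0.01          # fraction of packets lost
        THROUGHPUT_THRESHOLD = 100e6   # bits per second

        def find_alarms(measurements):
            """measurements: list of dicts with 'src', 'dst', 'loss' and 'throughput' fields."""
            alarms = []
            for m in measurements:
                if m['loss'] > LOSS_THRESHOLD or m['throughput'] < THROUGHPUT_THRESHOLD:
                    alarms.append((m['src'], m['dst'], m['loss'], m['throughput']))
            return alarms

        print(find_alarms([{'src': 'AGLT2', 'dst': 'MWT2', 'loss': 0.05, 'throughput': 4e9}]))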

  11. Monitoring and Tracking the LHC Beam Spot within the ATLAS High Level Trigger

    CERN Document Server

    Winklmeier, F; The ATLAS collaboration

    2012-01-01

    The parameters of the beam spot produced by the LHC in the ATLAS interaction region are computed online using the ATLAS High Level Trigger (HLT) system. The high rate of triggered events is exploited to make precise measurements of the position, size and orientation of the luminous region in near real-time, as these parameters change significantly even during a single data-taking run. We present the challenges, solutions and results for the online determination, monitoring and beam spot feedback system in ATLAS. A specially designed algorithm, which uses tracks registered in the silicon detectors to reconstruct event vertices, is executed on the HLT processor farm of several thousand CPU cores. Monitoring histograms from all the cores are sampled and aggregated across the farm every 60 seconds. The reconstructed beam values are corrected for detector resolution effects, measured in situ from the separation of vertices whose tracks have been split into two collections. Furthermore, measurements for individual ...
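
    A simplified sketch of the resolution-correction step described above: the vertex resolution is estimated from the spread of split-vertex position differences and subtracted in quadrature from the observed spread of vertex positions. This illustrates the idea rather than the actual HLT algorithm, and it assumes the two split halves have equal, independent resolutions.

        # Sketch only: beam width from observed vertex spread minus resolution in quadrature.
        import math
        import statistics

        def beam_width(vertex_x, split_pairs):
            """vertex_x: reconstructed vertex x positions;
            split_pairs: (x1, x2) positions of the two halves of split vertices."""
            observed = statistics.pstdev(vertex_x)
            # The difference of two independent halves spreads sqrt(2) wider than one half.
            diffs = [x1 - x2 for x1, x2 in split_pairs]
            resolution = statistics.pstdev(diffs) / math.sqrt(2)
            return math.sqrt(max(observed**2 - resolution**2, 0.0))

        print(beam_width([0.01, 0.03, -0.02, 0.00], [(0.012, 0.008), (-0.021, -0.017)]))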

  12. Monitoring the US ATLAS Network Infrastructure with perfSONAR-PS

    CERN Document Server

    McKee, S; The ATLAS collaboration; Laurens, P; Severini, H; Wlodek, T; Wolff, S; Zurawski, J

    2012-01-01

    Global scientific collaborations, such as ATLAS, continue to push the network requirements envelope. Data movement in this collaboration will routinely include the regular exchange of petabytes of datasets between the collection and analysis facilities in the coming years. These requirements place a high emphasis on networks functioning at peak efficiency and availability; the lack thereof could mean critical delays in the overall scientific progress of distributed data-intensive experiments like ATLAS. Network operations staff routinely must deal with problems deep in the infrastructure; this may be as benign as replacing a failing piece of equipment, or as complex as dealing with a multi-domain path that is experiencing data loss. In either case, it is crucial that effective monitoring and performance analysis tools are available to ease the burden of management. We will report on our experiences deploying and using the perfSONAR-PS Performance Toolkit [8] at ATLAS sites in the United States. This software cr...

  13. Daily dose monitoring with atlas-based auto-segmentation on diagnostic quality CT for prostate cancer

    Energy Technology Data Exchange (ETDEWEB)

    Li, Wen; Vassil, Andrew; Xia, Ping [Department of Radiation Oncology, Cleveland Clinic Foundation, Cleveland, Ohio 44106 (United States); Zhong, Yahua [Department of Radiation Oncology, Zhongnan Hospital, Wuhan 430071 (China)

    2013-11-15

    Purpose: To evaluate the feasibility of daily dose monitoring using a patient specific atlas-based autosegmentation method on diagnostic quality verification images. Methods: Seven patients, who were treated for prostate cancer with intensity modulated radiotherapy under daily imaging guidance of a CT-on-rails system, were selected for this study. The prostate, rectum, and bladder were manually contoured on the first six and last seven sets of daily verification images. For each patient, three patient specific atlases were constructed using manual contours from planning CT alone (1-image atlas), planning CT plus first three verification CTs (4-image atlas), and planning CT plus first six verification CTs (7-image atlas). These atlases were subsequently applied to the last seven verification image sets of the same patient to generate the auto-contours. Daily dose was calculated by applying the original treatment plans to the daily beam isocenters. The autocontours and manual contours were compared geometrically using the dice similarity coefficient (DSC), and dosimetrically using the dose to 99% of the prostate CTV (D99) and the D5 of rectum and bladder. Results: The DSC of the autocontours obtained with the 4-image atlases were 87.0% ± 3.3%, 84.7% ± 8.6%, and 93.6% ± 4.3% for the prostate, rectum, and bladder, respectively. These indices were higher than those from the 1-image atlases (p < 0.01) and comparable to those from the 7-image atlases (p > 0.05). Daily prostate D99 of the autocontours was comparable to that of the manual contours (p = 0.55). For the bladder and rectum, the daily D5 were 95.5% ± 5.9% and 99.1% ± 2.6% of the planned D5 for the autocontours, compared to 95.3% ± 6.7% (p = 0.58) and 99.8% ± 2.3% (p < 0.01) for the manual contours. Conclusions: With patient specific 4-image atlases, atlas-based autosegmentation can adequately facilitate daily dose monitoring for prostate cancer.
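
    The dice similarity coefficient used above is DSC = 2|A ∩ B| / (|A| + |B|) for two segmentation masks A and B. A small NumPy sketch, assuming co-registered binary masks:

        # Dice similarity coefficient for two binary segmentation masks.
        import numpy as np

        def dice(mask_a, mask_b):
            a = mask_a.astype(bool)
            b = mask_b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:3] = True
        manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:4] = True
        print(round(dice(auto, manual), 3))   # 0.8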

  14. The implementation of full ATLAS detector simulation program

    International Nuclear Information System (INIS)

    Rimoldi, A.; Dell'Acqua, A.; Stavrianakou, M.; Amako, K.; Kanzaki, J.; Morita, Y.; Murakami, K.; Sasaki, T.; Saeki, T.; Ueda, I.; Tanaka, S.; Yoshida, H.

    2001-01-01

    The ATLAS detector is one of the largest and most sophisticated detectors ever designed. A detailed, flexible and complete simulation program is needed in order to study the characteristics and possible problems of such a challenging apparatus and to answer all questions arising in terms of physics, design optimization, etc. To cope with these needs the authors are implementing an application based on the simulation framework FADS/Goofy (Framework for ATLAS Detector Simulation / Geant4-based Object-Oriented Folly) in the Geant4 environment. The user's specific code implementation is presented in detail for the different applications implemented until now, from the various components of the ATLAS spectrometer to some particular testbeam facilities. Particular emphasis is put on describing the simulation of the Muon Spectrometer and its subsystems as a test case for the implementation of the whole detector simulation program: the intrinsic complexity in the geometry description of the Muon System is one of the more demanding problems that are faced. The magnetic field handling, the physics impact of the event processing in the presence of backgrounds from different sources, and the implementation of different possible generators (including Pythia) are also discussed.

  15. The LUCID detector ATLAS luminosity monitor and its electronic system

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00378808; The ATLAS collaboration

    2016-01-01

    Since 2015 the LHC has been performing a new run, at higher center-of-mass energy (13 TeV) and with 25 ns bunch spacing. The ATLAS luminosity monitor LUCID has been completely renewed, both in the detector design and in the electronics, in order to cope with the new running conditions. The new detector electronics is presented, featuring a new read-out board (LUCROD) for signal acquisition and digitization, PMT-charge integration and single-side luminosity measurements, and the revisited LUMAT board for side-A/side-C combination. The contribution covers the design of the new boards, the firmware and software developments, the implementation of luminosity algorithms, the optical communication between boards and the integration into the ATLAS TDAQ system.

  16. Daily dose monitoring with atlas-based auto-segmentation on diagnostic quality CT for prostate cancer

    International Nuclear Information System (INIS)

    Li, Wen; Vassil, Andrew; Xia, Ping; Zhong, Yahua

    2013-01-01

    Purpose: To evaluate the feasibility of daily dose monitoring using a patient specific atlas-based autosegmentation method on diagnostic quality verification images. Methods: Seven patients, who were treated for prostate cancer with intensity modulated radiotherapy under daily imaging guidance of a CT-on-rails system, were selected for this study. The prostate, rectum, and bladder were manually contoured on the first six and last seven sets of daily verification images. For each patient, three patient specific atlases were constructed using manual contours from planning CT alone (1-image atlas), planning CT plus first three verification CTs (4-image atlas), and planning CT plus first six verification CTs (7-image atlas). These atlases were subsequently applied to the last seven verification image sets of the same patient to generate the auto-contours. Daily dose was calculated by applying the original treatment plans to the daily beam isocenters. The autocontours and manual contours were compared geometrically using the dice similarity coefficient (DSC), and dosimetrically using the dose to 99% of the prostate CTV (D99) and the D5 of rectum and bladder. Results: The DSC of the autocontours obtained with the 4-image atlases were 87.0% ± 3.3%, 84.7% ± 8.6%, and 93.6% ± 4.3% for the prostate, rectum, and bladder, respectively. These indices were higher than those from the 1-image atlases (p < 0.01) and comparable to those from the 7-image atlases (p > 0.05). Daily prostate D99 of the autocontours was comparable to that of the manual contours (p = 0.55). For the bladder and rectum, the daily D5 were 95.5% ± 5.9% and 99.1% ± 2.6% of the planned D5 for the autocontours, compared to 95.3% ± 6.7% (p = 0.58) and 99.8% ± 2.3% (p < 0.01) for the manual contours. Conclusions: With patient specific 4-image atlases, atlas-based autosegmentation can adequately facilitate daily dose monitoring for prostate cancer.

  17. Calibration and monitoring of the ATLAS Tile calorimeter

    CERN Document Server

    Boumediene, Djamel Eddine; The ATLAS collaboration

    2017-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs). PMT signals are then digitized at 40 MHz and stored on detector and are only transferred off detector once the first level trigger acceptance has been confirmed. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain, a set of calibration systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser, charge injection elements and an integrator b...

  18. Monitoring and controlling ATLAS data management: The Rucio web user interface

    OpenAIRE

    Lassnig, Mario; Beermann, Thomas Alfons; Vigne, Ralph; Barisits, Martin-Stefan; Garonne, Vincent; Serfon, Cedric

    2015-01-01

    The monitoring and controlling interfaces of the previous data management system DQ2 followed the evolutionary requirements and needs of the ATLAS collaboration. The new data management system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2, and the increased volume of managed information. This interface encompasses both a monitoring and controlling component, and allows easy integration for user-generated views. The interface follows three des...

  19. Intelligent monitoring and fault diagnosis for ATLAS TDAQ: a complex event processing solution

    CERN Document Server

    Magnoni, Luca; Luppi, Eleonora

    Effective monitoring and analysis tools are fundamental in modern IT infrastructures to get insights on the overall system behavior and to deal promptly and effectively with failures. In recent years, Complex Event Processing (CEP) technologies have emerged as effective solutions for information processing from the most disparate fields: from wireless sensor networks to financial analysis. This thesis proposes an innovative approach to monitor and operate complex and distributed computing systems, in particular referring to the ATLAS Trigger and Data Acquisition (TDAQ) system currently in use at the European Organization for Nuclear Research (CERN). The result of this research, the AAL project, is currently used to provide ATLAS data acquisition operators with automated error detection and intelligent system analysis. The thesis begins by describing the TDAQ system and the controlling architecture, with a focus on the monitoring infrastructure and the expert system used for error detection and automated reco...

  20. Processing and Quality Monitoring for the ATLAS Tile Hadronic Calorimeter Data

    Science.gov (United States)

    Burghgrave, Blake; ATLAS Collaboration

    2017-10-01

    An overview is presented of Data Processing and Data Quality (DQ) Monitoring for the ATLAS Tile Hadronic Calorimeter. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. Data quality in physics runs is monitored extensively and continuously. Any problems are reported and immediately investigated. The DQ efficiency achieved was 99.6% in 2012 and 100% in 2015, after the detector maintenance in 2013-2014. Changes to detector status or calibrations are entered into the conditions database (DB) during a brief calibration loop between the end of a run and the beginning of bulk processing of data collected in it. Bulk processed data are reviewed and certified for the ATLAS Good Run List if no problem is detected. Experts maintain the tools used by DQ shifters and the calibration teams during normal operation, and prepare new conditions for data reprocessing and Monte Carlo (MC) production campaigns. Conditions data are stored in 3 databases: Online DB, Offline DB for data and a special DB for Monte Carlo. Database updates can be performed through a custom-made web interface.

  1. The ATLAS PanDA Monitoring System and its Evolution

    Science.gov (United States)

    Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.

    2011-12-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.
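
    A minimal sketch of the Django plus JSON/AJAX approach described above: a view that returns a job-state summary as JSON for the browser front end to render. The model and field names are hypothetical, not the actual PanDA schema.

        # Sketch only: a Django view returning a job-state summary for an AJAX front end.
        from django.db.models import Count
        from django.http import JsonResponse

        from .models import Job   # hypothetical model with 'jobstatus' and 'cloud' fields

        def job_summary(request):
            """Return the number of jobs per state, optionally filtered by cloud."""
            jobs = Job.objects.all()
            cloud = request.GET.get('cloud')
            if cloud:
                jobs = jobs.filter(cloud=cloud)
            counts = jobs.values('jobstatus').annotate(n=Count('jobstatus'))
            # The JavaScript front end turns this dictionary into tables and plots.
            return JsonResponse({row['jobstatus']: row['n'] for row in counts})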

  2. The ATLAS PanDA Monitoring System and its Evolution

    International Nuclear Information System (INIS)

    Klimentov, A; Nevski, P; Wenaus, T; Potekhin, M

    2011-01-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.

  3. CAMAC-based intelligent subsystem for ATLAS example application: cryogenic monitoring and control

    International Nuclear Information System (INIS)

    Pardo, R.; Kawarasaki, Y.; Wasniewski, K.

    1985-01-01

    A subunit of the CAMAC accelerator control system of ATLAS for monitoring and, eventually, controlling the cryogenic refrigeration and distribution facility is under development. This development is the first application of a philosophy of distributed intelligence which will be applied throughout the ATLAS control system. The control concept is that of an intelligent subunit of the existing ATLAS CAMAC control highway. A single board computer resides in an auxiliary crate controller which allows access to all devices within the crate. The local SBC can communicate to the host over the CAMAC highway via a protocol involving the use of memory in the SBC which can be accessed from the host in a DMA mode. This provides a mechanism for global communications, such as for alarm conditions, as well as allowing the cryogenic system to respond to the demands of the accelerator system

  4. Monitoring the tracking performance of the ATLAS trigger for electrons in Z->ee decays

    CERN Document Server

    Langford, Jonathon

    2016-01-01

    This project was carried out to develop an algorithm which monitors the performance of the tracking system in the ATLAS trigger. The algorithm uses tag and probe methods to measure the efficiency of the tracking for electrons by looking at Z → ee candidates. Once this method is validated, the ultimate goal is to implement the algorithm into the High-Level-Trigger (HLT) of ATLAS whilst online. The advantage of this technique over traditional offline monitoring is continuous feedback during data taking and higher available statistics. In this report the results of an offline analysis are presented, showing electron tracking efficiencies between 96% and 99% across almost all regions of the inner detector (run 306278).
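
    A minimal sketch of the tag-and-probe efficiency computation: for probe electrons from Z -> ee candidates, the tracking efficiency is the fraction of probes with a matched trigger track, with a simple binomial uncertainty. The probe record fields are illustrative.

        # Sketch only: tag-and-probe style efficiency with a binomial uncertainty.
        import math

        def tracking_efficiency(probes):
            """probes: list of dicts, each with a boolean 'has_trigger_track' field."""
            n_total = len(probes)
            if n_total == 0:
                return None, None
            n_pass = sum(1 for p in probes if p['has_trigger_track'])
            eff = n_pass / n_total
            err = math.sqrt(eff * (1.0 - eff) / n_total)   # simple binomial uncertainty
            return eff, err

        print(tracking_efficiency([{'has_trigger_track': True}] * 97 +
                                  [{'has_trigger_track': False}] * 3))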

  5. Monitoring and data quality assessment of the ATLAS liquid argon calorimeter

    Czech Academy of Sciences Publication Activity Database

    Aad, G.; Abajyan, T.; Abbott, B.; Böhm, Jan; Chudoba, Jiří; Havránek, Miroslav; Hejbal, Jiří; Jakoubek, Tomáš; Kepka, Oldřich; Kupčo, Alexander; Kůs, Vlastimil; Lokajíček, Miloš; Lysák, Roman; Marčišovský, Michal; Mikeštíková, Marcela; Myška, Miroslav; Němeček, Stanislav; Šícho, Petr; Staroba, Pavel; Svatoš, Michal; Taševský, Marek; Vrba, Václav

    2014-01-01

    Vol. 9, Jul (2014), pp. 1-39, ISSN 1748-0221. R&D Projects: GA MŠk(CZ) LG13009. Institutional support: RVO:68378271. Keywords: missing-energy * data acquisition * ATLAS * CERN LHC Coll * monitoring performance. Subject RIV: BF - Elementary Particles and High Energy Physics. Impact factor: 1.399, year: 2014

  6. Application of the ATLAS DAQ and Monitoring System for MDT and RPC Commissioning

    CERN Document Server

    Pasqualucci, E

    2007-01-01

    The ATLAS DAQ and monitoring software are currently commonly used to test detectors during the commissioning phase. In this paper, their usage in MDT and RPC commissioning is described, both at the surface pre-commissioning and commissioning stations and in the ATLAS pit. Two main components are heavily used for detector tests. The ROD Crate DAQ software is based on the ATLAS Readout application. Based on the plug-in mechanism, it provides a complete environment to interface any kind of detector or trigger electronics to the ATLAS DAQ system. All the possible flavours of this application are used to test and run the MDT and RPC detectors at the pre-commissioning and commissioning sites. Ad-hoc plug-ins have been developed to implement data readout via VME, both with ROD prototypes and emulating final electronics to read out data with temporary solutions, and to provide trigger distribution and busy management in a multi-crate environment. Data driven event building functionality is also used to combine data f...

  7. Integrated monitoring of the ATLAS online computing farm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00389536; The ATLAS collaboration; Brasolin, Franco; Fazio, Daniel; Gament, Costin-Eugen; Lee, Christopher; Scannicchio, Diana; Twomey, Matthew Shaun

    2017-01-01

    The online farm of the ATLAS experiment at the LHC, consisting of nearly 4100 PCs with various characteristics, provides configuration and control of the detector and performs the collection, processing, selection and conveyance of event data from the front-end electronics to mass storage. The status and health of every host must be constantly monitored to ensure the correct and reliable operation of the whole online system. This is the first line of defense, which should not only promptly provide alerts in case of failure but, whenever possible, warn of impending issues. The monitoring system should be able to check up to 100000 health parameters and provide alerts on a selected subset. In this paper we present the implementation and validation of our new monitoring and alerting system based on Icinga 2 and Ganglia. We describe how the load distribution and high availability features of Icinga 2 allowed us to have a centralised but scalable system, with a configuration model that allows full flexibility whil...
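
    Custom health checks in Icinga 2 follow the standard Nagios-style plugin convention (exit code 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN, plus one line of status text). The sketch below is a hypothetical load check that such a system could schedule on a farm node; the thresholds are assumptions.

        #!/usr/bin/env python3
        # Sketch only: a Nagios/Icinga-style check plugin reporting the 1-minute load average.
        import sys

        WARN, CRIT = 75.0, 90.0   # assumed thresholds for a many-core farm node

        def main():
            try:
                with open('/proc/loadavg') as f:
                    load1 = float(f.read().split()[0])
            except (OSError, ValueError):
                print('UNKNOWN - cannot read /proc/loadavg')
                sys.exit(3)
            if load1 >= CRIT:
                status, code = 'CRITICAL', 2
            elif load1 >= WARN:
                status, code = 'WARNING', 1
            else:
                status, code = 'OK', 0
            # One line of text plus optional performance data, as the plugin convention expects.
            print(f'{status} - 1-minute load average is {load1} | load1={load1}')
            sys.exit(code)

        if __name__ == '__main__':
            main()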

  8. Integrated monitoring of the ATLAS online computing farm

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration; Fazio, Daniel; Gament, Costin-Eugen; Lee, Christopher; Scannicchio, Diana; Twomey, Matthew Shaun

    2016-01-01

    The online farm of the ATLAS experiment at the LHC, consisting of nearly 4000 PCs with various characteristics, provides configuration and control of the detector and performs the collection, processing, selection and conveyance of event data from the front-end electronics to mass storage. The status and health of every host must be constantly monitored to ensure the correct and reliable operation of the whole online system. This is the first line of defense, which should not only promptly provide alerts in case of failure but, whenever possible, warn of impending issues. The monitoring system should be able to check up to 100000 health parameters and provide alerts on a selected subset. In this paper we present the implementation and validation of our new monitoring and alerting system based on Icinga 2 and Ganglia. We describe how the load distribution and high availability features of Icinga 2 allowed us to have a centralised but scalable system, with a configuration model that allows full flexibility whil...

  9. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by the HammerCloud, to automatic exclusion from production or analysis activities.

  10. Improved ATLAS HammerCloud Monitoring for local Site Administration

    CERN Document Server

    Boehler, Michael; The ATLAS collaboration; Hoenig, Friedrich; Legger, Federica; Sciacca, Francesco Giovanni; Mancinelli, Valentina

    2015-01-01

    Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS and CMS experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are auto-excluded from the ATLAS computing grid, therefore it is essential to provide a detailed and well organized web interface for the local site administrators such that they can easily spot and promptly solve site issues. Additional functionality has been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting as well as possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site, but can still cause undesired effects such as a non-negligible job failure rate. This paper summarizes the different developments and optimiz...
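
    A sketch of the auto-exclusion logic described above: a site is blacklisted when its recent functional tests fall below a success-rate threshold and put back online when they recover. The thresholds and the test-record format are assumptions, not HammerCloud settings.

        # Sketch only: classify sites from their recent functional-test results.
        def classify_sites(test_results, min_success_rate=0.8, window=10):
            """test_results: mapping of site name -> list of booleans (True = test passed)."""
            decisions = {}
            for site, results in test_results.items():
                recent = results[-window:]
                if len(recent) < window:
                    decisions[site] = 'no decision'   # not enough statistics yet
                    continue
                rate = sum(recent) / len(recent)
                decisions[site] = 'online' if rate >= min_success_rate else 'blacklisted'
            return decisions

        print(classify_sites({'SITE_A': [True] * 10, 'SITE_B': [True, False] * 5}))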

  11. Improved ATLAS HammerCloud Monitoring for local Site Administration

    CERN Document Server

    Boehler, Michael; The ATLAS collaboration; Hoenig, Friedrich; Legger, Federica

    2015-01-01

    Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS, CMS, and LHCb experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are auto-excluded from the ATLAS computing grid; it is therefore essential to provide a detailed and well-organized web interface for the local site administrators so that they can easily spot and promptly solve site issues. Additional functionalities have been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting, as well as possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site but can still cause undesired effects such as a non-negligible job failure rate. This contribution summarizes the different developm...

  12. Monitoring and data quality assessment of the ATLAS liquid argon calorimeter

    CERN Document Server

    Aad, Georges; Abbott, Brad; Abdallah, Jalal; Abdel Khalek, Samah; Abdinov, Ovsat; Aben, Rosemarie; Abi, Babak; Abolins, Maris; AbouZeid, Ossama; Abramowicz, Halina; Abreu, Henso; Abulaiti, Yiming; Acharya, Bobby Samir; Adamczyk, Leszek; Adams, David; Addy, Tetteh; Adelman, Jahred; Adomeit, Stefanie; Adye, Tim; Agatonovic-Jovin, Tatjana; Aguilar-Saavedra, Juan Antonio; Agustoni, Marco; Ahlen, Steven; Ahmadov, Faig; Aielli, Giulio; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov, Andrei; Albert, Justin; Albrand, Solveig; Alconada Verzini, Maria Josefina; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexopoulos, Theodoros; Alhroob, Muhammad; Alimonti, Gianluca; Alio, Lion; Alison, John; Allbrooke, Benedict; Allison, Lee John; Allport, Phillip; Allwood-Spiers, Sarah; Almond, John; Aloisio, Alberto; Alon, Raz; Alonso, Alejandro; Alonso, Francisco; Alpigiani, Cristiano; Altheimer, Andrew David; Alvarez Gonzalez, Barbara; Alviggi, Mariagrazia; Amako, Katsuya; Amaral Coutinho, Yara; Amelung, Christoph; Ammosov, Vladimir; Amor Dos Santos, Susana Patricia; Amorim, Antonio; Amoroso, Simone; Amram, Nir; Amundsen, Glenn; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Anduaga, Xabier; Angelidakis, Stylianos; Anger, Philipp; Angerami, Aaron; Anghinolfi, Francis; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonov, Alexey; Antos, Jaroslav; Anulli, Fabio; Aoki, Masato; Aperio Bella, Ludovica; Apolle, Rudi; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Araque, Juan Pedro; Arce, Ayana; Arguin, Jean-Francois; Argyropoulos, Spyridon; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnal, Vanessa; Arslan, Ozan; Artamonov, Andrei; Artoni, Giacomo; Asai, Shoji; Asbah, Nedaa; Ashkenazi, Adi; Ask, Stefan; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astalos, Robert; Atkinson, Markus; Atlay, Naim Bora; Auerbach, Benjamin; Auge, Etienne; Augsten, Kamil; Aurousseau, Mathieu; Avolio, Giuseppe; Azuelos, Georges; Azuma, Yuya; Baak, Max; Bacci, Cesare; Bach, Andre; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Backus Mayes, John; Badescu, Elisabeta; Bagiacchi, Paolo; Bagnaia, Paolo; Bai, Yu; Bailey, David; Bain, Travis; Baines, John; Baker, Oliver Keith; Baker, Sarah; Balek, Petr; Balli, Fabrice; Banas, Elzbieta; Banerjee, Swagato; Bangert, Andrea Michelle; Bannoura, Arwa A E; Bansal, Vikas; Bansil, Hardeep Singh; Barak, Liron; Baranov, Sergei; Barber, Tom; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Barnovska, Zuzana; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Bartoldus, Rainer; Barton, Adam Edward; Bartos, Pavol; Bartsch, Valeria; Bassalat, Ahmed; Basye, Austin; Bates, Richard; Batkova, Lucia; Batley, Richard; Battistin, Michele; Bauer, Florian; Bawa, Harinder Singh; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Bechtle, Philip; Beck, Hans Peter; Becker, Anne Kathrin; Becker, Sebastian; Beckingham, Matthew; Becot, Cyril; Beddall, Andrew; Beddall, Ayda; Bedikian, Sourpouhi; Bednyakov, Vadim; Bee, Christopher; Beemster, Lars; Beermann, Thomas; Begel, Michael; Behr, Katharina; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, 
Lorenzo; Bellerive, Alain; Bellomo, Massimiliano; Belloni, Alberto; Belotskiy, Konstantin; Beltramello, Olga; Benary, Odette; Benchekroun, Driss; Bendtz, Katarina; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez Garcia, Jorge-Armando; Benjamin, Douglas; Bensinger, James; Benslama, Kamal; Bentvelsen, Stan; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Berglund, Elina; Beringer, Jürg; Bernard, Clare; Bernat, Pauline; Bernius, Catrin; Bernlochner, Florian Urs; Berry, Tracey; Berta, Peter; Bertella, Claudia; Bertolucci, Federico; Besana, Maria Ilaria; Besjes, Geert-Jan; Bessidskaia, Olga; Besson, Nathalie; Betancourt, Christopher; Bethke, Siegfried; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianchini, Louis; Bianco, Michele; Biebel, Otmar; Bieniek, Stephen Paul; Bierwagen, Katharina; Biesiada, Jed; Biglietti, Michela; Bilbao De Mendizabal, Javier; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Black, Curtis; Black, James; Black, Kevin; Blackburn, Daniel; Blair, Robert; Blanchard, Jean-Baptiste; Blazek, Tomas; Bloch, Ingo; Blocker, Craig; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Boddy, Christopher Richard; Boehler, Michael; Boek, Jennifer; Boek, Thorsten Tobias; Bogaerts, Joannes Andreas; Bogdanchikov, Alexander; Bogouch, Andrei; Bohm, Christian; Bohm, Jan; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Boldyrev, Alexey; Bolnet, Nayanka Myriam; Bomben, Marco; Bona, Marcella; Boonekamp, Maarten; Borisov, Anatoly; Borissov, Guennadi; Borri, Marcello; Borroni, Sara; Bortfeldt, Jonathan; Bortolotto, Valerio; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boterenbrood, Hendrik; Boudreau, Joseph; Bouffard, Julian; Bouhova-Thacker, Evelina Vassileva; Boumediene, Djamel Eddine; Bourdarios, Claire; Bousson, Nicolas; Boutouil, Sara; Boveia, Antonio; Boyd, James; Boyko, Igor; Bozovic-Jelisavcic, Ivanka; Bracinik, Juraj; Branchini, Paolo; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Brazzale, Simone Federico; Brelier, Bertrand; Brendlinger, Kurt; Brennan, Amelia Jean; Brenner, Richard; Bressler, Shikma; Bristow, Kieran; Bristow, Timothy Michael; Britton, Dave; Brochu, Frederic; Brock, Ian; Brock, Raymond; Bromberg, Carl; Bronner, Johanna; Brooijmans, Gustaaf; Brooks, Timothy; Brooks, William; Brosamer, Jacquelyn; Brost, Elizabeth; Brown, Gareth; Brown, Jonathan; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Brunet, Sylvie; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Bryngemark, Lene; Buanes, Trygve; Buat, Quentin; Bucci, Francesca; Buchholz, Peter; Buckingham, Ryan; Buckley, Andrew; Buda, Stelian Ioan; Budagov, Ioulian; Buehrer, Felix; Bugge, Lars; Bugge, Magnar Kopangen; Bulekov, Oleg; Bundock, Aaron Colin; Burckhart, Helfried; Burdin, Sergey; Burghgrave, Blake; Burke, Stephen; Burmeister, Ingo; Busato, Emmanuel; Büscher, Volker; Bussey, Peter; Buszello, Claus-Peter; Butler, Bart; Butler, John; Butt, Aatif Imtiaz; Buttar, Craig; Butterworth, Jonathan; Butti, Pierfrancesco; Buttinger, William; Buzatu, Adrian; Byszewski, Marcin; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calderini, Giovanni; Calfayan, Philippe; Calkins, Robert; Caloba, Luiz; Calvet, David; Calvet, Samuel; Camacho Toro, Reina; Cameron, David; Caminada, Lea Michaela; Caminal Armadans, Roger; Campana, Simone; Campanelli, Mario; Campoverde, Angel; Canale, Vincenzo; Canepa, 
Anadi; Cantero, Josu; Cantrill, Robert; Cao, Tingting; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Caputo, Regina; Cardarelli, Roberto; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Sascha; Carquin, Edson; Carrillo-Montoya, German D; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Castaneda-Miranda, Elizabeth; Castelli, Angelantonio; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Catastini, Pierluigi; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Cattani, Giordano; Caughron, Seth; Cavaliere, Viviana; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerio, Benjamin; Cerny, Karel; Santiago Cerqueira, Augusto; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cerv, Matevz; Cervelli, Alberto; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chalupkova, Ina; Chan, Kevin; Chang, Philip; Chapleau, Bertrand; Chapman, John Derek; Charfeddine, Driss; Charlton, Dave; Chau, Chav Chhiv; Chavez Barajas, Carlos Alberto; Cheatham, Susan; Chegwidden, Andrew; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Karen; Chen, Liming; Chen, Shenjian; Chen, Xin; Chen, Yujiao; Cheng, Hok Chuen; Cheng, Yangyang; Cheplakov, Alexander; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Chevalier, Laurent; Chiarella, Vitaliano; Chiefari, Giovanni; Childers, John Taylor; Chilingarov, Alexandre; Chiodini, Gabriele; Chisholm, Andrew; Chislett, Rebecca Thalatta; Chitan, Adrian; Chizhov, Mihail; Chouridou, Sofia; Chow, Bonnie Kar Bo; Christidi, Ilektra-Athanasia; Chromek-Burckhart, Doris; Chu, Ming-Lee; Chudoba, Jiri; Chytka, Ladislav; Ciapetti, Guido; Ciftci, Abbas Kenan; Ciftci, Rena; Cinca, Diane; Cindro, Vladimir; Ciocio, Alessandra; Cirkovic, Predrag; Citron, Zvi Hirsh; Citterio, Mauro; Ciubancan, Mihai; Clark, Allan G; Clark, Philip James; Clarke, Robert; Cleland, Bill; Clemens, Jean-Claude; Clement, Benoit; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coffey, Laurel; Cogan, Joshua Godfrey; Coggeshall, James; Cole, Brian; Cole, Stephen; Colijn, Auke-Pieter; Collins-Tooth, Christopher; Collot, Johann; Colombo, Tommaso; Colon, German; Compostella, Gabriele; Conde Muiño, Patricia; Coniavitis, Elias; Conidi, Maria Chiara; Connell, Simon Henry; Connelly, Ian; Consonni, Sofia Maria; Consorti, Valerio; Constantinescu, Serban; Conta, Claudio; Conti, Geraldine; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cooper-Smith, Neil; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Corso-Radu, Alina; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Côté, David; Cottin, Giovanna; Cowan, Glen; Cox, Brian; Cranmer, Kyle; Cree, Graham; Crépé-Renaudin, Sabine; Crescioli, Francesco; Crispin Ortuzar, Mireia; Cristinziani, Markus; Crosetti, Giovanni; Cuciuc, Constantin-Mihai; Cuhadar Donszelmann, Tulay; Cummings, Jane; Curatolo, Maria; Cuthbert, Cameron; Czirr, Hendrik; Czodrowski, Patrick; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; Da Cunha Sargedas De Sousa, Mario Jose; Da Via, Cinzia; Dabrowski, Wladyslaw; Dafinca, Alexandru; Dai, Tiesheng; Dale, Orjan; Dallaire, Frederick; Dallapiccola, Carlo; Dam, Mogens; Daniells, Andrew Christopher; Dano Hoffmann, Maria; Dao, Valerio; Darbo, Giovanni; Darlea, Georgiana Lavinia; Darmora, Smita; Dassoulas, James; Davey, Will; David, Claire; Davidek, 
Tomas; Davies, Eleanor; Davies, Merlin; Davignon, Olivier; Davison, Adam; Davison, Peter; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; Daya-Ishmukhametova, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Cecco, Sandro; de Graat, Julien; De Groot, Nicolo; de Jong, Paul; De La Taille, Christophe; De la Torre, Hector; De Lorenzi, Francesco; De Nooij, Lucie; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; De Zorzi, Guido; Dearnaley, William James; Debbe, Ramiro; Debenedetti, Chiara; Dechenaux, Benjamin; Dedovich, Dmitri; Degenhardt, James; Deigaard, Ingrid; Del Peso, Jose; Del Prete, Tarcisio; Deliot, Frederic; Deliyergiyev, Maksym; Dell'Acqua, Andrea; Dell'Asta, Lidia; Dell'Orso, Mauro; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demilly, Aurelien; Denisov, Sergey; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deterre, Cecile; Deviveiros, Pier-Olivier; Dewhurst, Alastair; Dhaliwal, Saminder; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Domenico, Antonio; Di Donato, Camilla; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Mattia, Alessandro; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Di Valentino, David; Diaz, Marco Aurelio; Diehl, Edward; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dimitrievska, Aleksandra; Dingfelder, Jochen; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; Barros do Vale, Maria Aline; Do Valle Wemans, André; Doan, Thi Kieu Oanh; Dobos, Daniel; Dobson, Ellie; Doglioni, Caterina; Doherty, Tom; Dohmae, Takeshi; Dolejsi, Jiri; Dolezal, Zdenek; Dolgoshein, Boris; Donadelli, Marisilvia; Donati, Simone; Dondero, Paolo; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dova, Maria-Teresa; Doyle, Tony; Dris, Manolis; Dubbert, Jörg; Dube, Sourabh; Dubreuil, Emmanuelle; Duchovni, Ehud; Duckeck, Guenter; Ducu, Otilia Anamaria; Duda, Dominik; Dudarev, Alexey; Dudziak, Fanny; Duflot, Laurent; Duguid, Liam; Dührssen, Michael; Dunford, Monica; Duran Yildiz, Hatice; Düren, Michael; Durglishvili, Archil; Dwuznik, Michal; Dyndal, Mateusz; Ebke, Johannes; Edson, William; Edwards, Nicholas Charles; Ehrenfeld, Wolfgang; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Enari, Yuji; Endner, Oliver Chris; Endo, Masaki; Engelmann, Roderich; Erdmann, Johannes; Ereditato, Antonio; Eriksson, Daniel; Ernis, Gunar; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Esch, Hendrik; Escobar, Carlos; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evans, Hal; Fabbri, Laura; Facini, Gabriel; Fakhrutdinov, Rinat; Falciano, Speranza; Faltova, Jana; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farooque, Trisha; Farrell, Steven; Farrington, Sinead; Farthouat, Philippe; Fassi, Farida; Fassnacht, Patrick; Fassouliotis, Dimitrios; Favareto, Andrea; Fayard, Louis; Federic, Pavol; Fedin, Oleg; Fedorko, Wojciech; Fehling-Kaschek, Mirjam; Feigl, Simon; Feligioni, Lorenzo; Feng, Cunfeng; Feng, Eric; Feng, Haolu; Fenyuk, Alexander; Fernandez Perez, Sonia; Fernando, Waruna; Ferrag, Samir; Ferrando, James; Ferrara, Valentina; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, 
Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, Andrea; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filipuzzi, Marco; Filthaut, Frank; Fincke-Keeler, Margret; Finelli, Kevin Daniel; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Julia; Fisher, Matthew; Fisher, Wade Cameron; Fitzgerald, Eric Andrew; Flechl, Martin; Fleck, Ivor; Fleischmann, Philipp; Fleischmann, Sebastian; Fletcher, Gareth Thomas; Fletcher, Gregory; Flick, Tobias; Floderus, Anders; Flores Castillo, Luis; Florez Bustos, Andres Carlos; Flowerdew, Michael; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Fournier, Daniel; Fox, Harald; Fracchia, Silvia; Francavilla, Paolo; Franchini, Matteo; Franchino, Silvia; Francis, David; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; French, Sky; Friedrich, Conrad; Friedrich, Felix; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fulsom, Bryan Gregory; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gabrielli, Alessandro; Gabrielli, Andrea; Gadatsch, Stefan; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Galhardo, Bruno; Gallas, Elizabeth; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Galster, Gorm Aske Gram Krohn; Gan, KK; Gandrajula, Reddy Pratap; Gao, Jun; Gao, Yongsheng; Garay Walls, Francisca; Garberson, Ford; García, Carmen; García Navarro, José Enrique; Garcia-Sciveres, Maurice; Gardner, Robert; Garelli, Nicoletta; Garonne, Vincent; Gatti, Claudio; Gaudio, Gabriella; Gaur, Bakul; Gauthier, Lea; Gauzzi, Paolo; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Ge, Peng; Gecse, Zoltan; Gee, Norman; Geerts, Daniël Alphonsus Adrianus; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Gemmell, Alistair; Genest, Marie-Hélène; Gentile, Simonetta; George, Matthias; George, Simon; Gerbaudo, Davide; Gershon, Avi; Ghazlane, Hamid; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giangiobbe, Vincent; Giannetti, Paola; Gianotti, Fabiola; Gibbard, Bruce; Gibson, Stephen; Gilchriese, Murdock; Gillam, Thomas; Gillberg, Dag; Gingrich, Douglas; Giokaris, Nikos; Giordani, MarioPaolo; Giordano, Raffaele; Giorgi, Francesco Michelangelo; Giraud, Pierre-Francois; Giugni, Danilo; Giuliani, Claudia; Giulini, Maddalena; Giunta, Michele; Gjelsten, Børge Kile; Gkialas, Ioannis; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glaysher, Paul; Glazov, Alexandre; Glonti, George; Goblirsch-Kolb, Maximilian; Goddard, Jack Robert; Godfrey, Jennifer; Godlewski, Jan; Goeringer, Christian; Goldfarb, Steven; Golling, Tobias; Golubkov, Dmitry; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; González de la Hoz, Santiago; Gonzalez Parra, Garoe; Gonzalez Silva, Laura; Gonzalez-Sevilla, Sergio; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Goshaw, Alfred; Gössling, Claus; Gostkin, Mikhail Ivanovitch; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Gozpinar, Serdar; Grabas, Herve Marie Xavier; Graber, Lars; Grabowska-Bold, Iwona; Grafström, Per; Grahn, Karl-Johan; Gramling, Johanna; Gramstad, Eirik; Grancagnolo, Francesco; Grancagnolo, Sergio; Grassi, Valerio; Gratchev, Vadim; Gray, Heather; Graziani, Enrico; Grebenyuk, Oleg; Greenwood, Zeno Dixon; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Griffiths, 
Justin; Grillo, Alexander; Grimm, Kathryn; Grinstein, Sebastian; Gris, Philippe Luc Yves; Grishkevich, Yaroslav; Grivaz, Jean-Francois; Grohs, Johannes Philipp; Grohsjean, Alexander; Gross, Eilam; Grosse-Knetter, Joern; Grossi, Giulio Cornelio; Groth-Jensen, Jacob; Grout, Zara Jane; Grybel, Kai; Guan, Liang; Guescini, Francesco; Guest, Daniel; Gueta, Orel; Guicheney, Christophe; Guido, Elisa; Guillemin, Thibault; Guindon, Stefan; Gul, Umar; Gumpert, Christian; Gunther, Jaroslav; Guo, Jun; Gupta, Shaun; Gutierrez, Phillip; Gutierrez Ortiz, Nicolas Gilberto; Gutschow, Christian; Guttman, Nir; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haber, Carl; Hadavand, Haleh Khani; Haddad, Nacim; Haefner, Petra; Hageboeck, Stephan; Hajduk, Zbigniew; Hakobyan, Hrachya; Haleem, Mahsana; Hall, David; Halladjian, Garabed; Hamacher, Klaus; Hamal, Petr; Hamano, Kenji; Hamer, Matthias; Hamilton, Andrew; Hamilton, Samuel; Hamnett, Phillip George; Han, Liang; Hanagaki, Kazunori; Hanawa, Keita; Hance, Michael; Hanke, Paul; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Peter Henrik; Hara, Kazuhiko; Hard, Andrew; Harenberg, Torsten; Harkusha, Siarhei; Harper, Devin; Harrington, Robert; Harris, Orin; Harrison, Paul Fraser; Hartjes, Fred; Harvey, Alex; Hasegawa, Satoshi; Hasegawa, Yoji; Hasib, A; Hassani, Samira; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Anthony David; Hayashi, Takayasu; Hayden, Daniel; Hays, Chris; Hayward, Helen; Haywood, Stephen; Head, Simon; Heck, Tobias; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heim, Timon; Heinemann, Beate; Heinrich, Lukas; Heisterkamp, Simon; Hejbal, Jiri; Helary, Louis; Heller, Claudio; Heller, Matthieu; Hellman, Sten; Hellmich, Dennis; Helsens, Clement; Henderson, James; Henderson, Robert; Hengler, Christopher; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Hensel, Carsten; Herbert, Geoffrey Henry; Hernández Jiménez, Yesenia; Herrberg-Schubert, Ruth; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hesketh, Gavin Grant; Hessey, Nigel; Hickling, Robert; Higón-Rodriguez, Emilio; Hill, John; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hofmann, Julia Isabell; Hohlfeld, Marc; Holmes, Tova Ray; Hong, Tae Min; Hooft van Huysduynen, Loek; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howard, Jacob; Howarth, James; Hrabovsky, Miroslav; Hristova, Ivana; Hrivnac, Julius; Hryn'ova, Tetiana; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Hu, Diedi; Hu, Xueye; Huang, Yanping; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Huhtinen, Mika; Hülsing, Tobias Alexander; Hurwitz, Martina; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Ideal, Emma; Iengo, Paolo; Igonkina, Olga; Iizawa, Tomoya; Ikegami, Yoichi; Ikematsu, Katsumasa; Ikeno, Masahiro; Iliadis, Dimitrios; Ilic, Nikolina; Inamaru, Yuki; Ince, Tayfun; Ioannou, Pavlos; Iodice, Mauro; Iordanidou, Kalliopi; Ippolito, Valerio; Irles Quiles, Adrian; Isaksson, Charlie; Ishino, Masaya; Ishitsuka, Masaki; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Iturbe Ponce, Julia Mariana; Ivashin, Anton; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; 
Jackson, Brett; Jackson, John; Jackson, Matthew; Jackson, Paul; Jaekel, Martin; Jain, Vivek; Jakobs, Karl; Jakobsen, Sune; Jakoubek, Tomas; Jakubek, Jan; Jamin, David Olivier; Jana, Dilip; Jansen, Eric; Jansen, Hendrik; Janssen, Jens; Janus, Michel; Jarlskog, Göran; Javůrek, Tomáš; Jeanty, Laura; Jeng, Geng-yuan; Jen-La Plante, Imai; Jennens, David; Jenni, Peter; Jentzsch, Jennifer; Jeske, Carl; Jézéquel, Stéphane; Ji, Haoshuang; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Shan; Jinaru, Adam; Jinnouchi, Osamu; Joergensen, Morten Dam; Johansson, Erik; Johansson, Per; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tim; Jongmanns, Jan; Jorge, Pedro; Joshi, Kiran Daniel; Jovicevic, Jelena; Ju, Xiangyang; Jung, Christian; Jungst, Ralph Markus; Jussel, Patrick; Juste Rozas, Aurelio; Kaci, Mohammed; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kajomovitz, Enrique; Kama, Sami; Kanaya, Naoko; Kaneda, Michiru; Kaneti, Steven; Kanno, Takayuki; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kar, Deepak; Karakostas, Konstantinos; Karastathis, Nikolaos; Karnevskiy, Mikhail; Karpov, Sergey; Karthik, Krishnaiyengar; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasieczka, Gregor; Kass, Richard; Kastanas, Alex; Kataoka, Yousuke; Katre, Akshay; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kazama, Shingo; Kazanin, Vassili; Kazarinov, Makhail; Keeler, Richard; Kehoe, Robert; Keil, Markus; Keller, John; Keoshkerian, Houry; Kepka, Oldrich; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Keung, Justin; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Khodinov, Alexander; Khomich, Andrei; Khoo, Teng Jian; Khoriauli, Gia; Khoroshilov, Andrey; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kim, Hee Yeun; Kim, Hyeon Jin; Kim, Shinhong; Kimura, Naoki; Kind, Oliver; King, Barry; King, Matthew; King, Robert Steven Beaufoy; King, Samuel Burton; Kirk, Julie; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kiss, Florian; Kitamura, Takumi; Kittelmann, Thomas; Kiuchi, Kenji; Kladiva, Eduard; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klimek, Pawel; Klimentov, Alexei; Klingenberg, Reiner; Klinger, Joel Alexander; Klinkby, Esben; Klioutchnikova, Tatiana; Klok, Peter; Kluge, Eike-Erik; Kluit, Peter; Kluth, Stefan; Kneringer, Emmerich; Knoops, Edith; Knue, Andrea; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kogan, Lucy Anne; Kohlmann, Simon; Kohout, Zdenek; Kohriki, Takashi; Koi, Tatsumi; Kolanoski, Hermann; Koletsou, Iro; Koll, James; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Köneke, Karsten; König, Adriaan; König, Sebastian; Kono, Takanori; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kopeliansky, Revital; Koperny, Stefan; Köpke, Lutz; Kopp, Anna Katharina; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostyukhin, Vadim; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasnopevtsev, Dimitriy; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, Jana; Kravchenko, Anton; Kreiss, Sven; Kretz, Moritz; Kretzschmar, Jan; Kreutzfeldt, Kristof; Krieger, Peter; Kroeninger, 
Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Kruker, Tobias; Krumnack, Nils; Krumshteyn, Zinovii; Kruse, Amanda; Kruse, Mark; Kruskal, Michael; Kubota, Takashi; Kuday, Sinan; Kuehn, Susanne; Kugel, Andreas; Kuhl, Andrew; Kuhl, Thorsten; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kuna, Marine; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurochkin, Yurii; Kurumida, Rie; Kus, Vlastimil; Kuwertz, Emma Sian; Kuze, Masahiro; Kvita, Jiri; La Rosa, Alessandro; La Rotonda, Laura; Labarga, Luis; Lacasta, Carlos; Lacava, Francesco; Lacey, James; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Laier, Heiko; Lambourne, Luke; Lammers, Sabine; Lampen, Caleb; Lampl, Walter; Lançon, Eric; Landgraf, Ulrich; Landon, Murrough; Lang, Valerie Susanne; Lange, Clemens; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Lassnig, Mario; Laurelli, Paolo; Lavorini, Vincenzo; Lavrijsen, Wim; Law, Alexander; Laycock, Paul; Le, Bao Tran; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Menedeu, Eve; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Claire Alexandra; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Guillaume; Lefebvre, Michel; Legger, Federica; Leggett, Charles; Lehan, Allan; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leister, Andrew Gerard; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Lemmer, Boris; Leney, Katharine; Lenz, Tatjana; Lenzen, Georg; Lenzi, Bruno; Leone, Robert; Leonhardt, Kathrin; Leontsinis, Stefanos; Leroy, Claude; Lester, Christopher; Lester, Christopher Michael; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Levy, Mark; Lewis, Adrian; Lewis, George; Leyko, Agnieszka; Leyton, Michael; Li, Bing; Li, Bo; Li, Haifeng; Li, Ho Ling; Li, Shu; Li, Xuefei; Liang, Zhijun; Liao, Hongbo; Liberti, Barbara; Lichard, Peter; Lie, Ki; Liebal, Jessica; Liebig, Wolfgang; Limbach, Christian; Limosani, Antonio; Limper, Maaike; Lin, Simon; Linde, Frank; Lindquist, Brian Edward; Linnemann, James; Lipeles, Elliot; Lipniacka, Anna; Lisovyi, Mykhailo; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Bo; Liu, Dong; Liu, Jianbei; Liu, Kun; Liu, Lulu; Liu, Miaoyuan; Liu, Minghui; Liu, Yanwen; Livan, Michele; Livermore, Sarah; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lo Sterzo, Francesco; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Loddenkoetter, Thomas; Loebinger, Fred; Loevschall-Jensen, Ask Emil; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Lombardo, Vincenzo Paolo; Long, Jonathan; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Lopez Paredes, Brais; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; Loscutoff, Peter; Losty, Michael; Lou, XinChou; Lounis, Abdenour; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Luehring, Frederick; Lukas, Wolfgang; Luminari, Lamberto; Lundberg, Olof; Lund-Jensen, Bengt; Lungwitz, Matthias; Lynn, David; Lysak, Roman; Lytken, Else; Ma, Hong; Ma, Lian Liang; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Machado Miguens, Joana; Macina, Daniela; Madaffari, Daniele; Madar, Romain; Maddocks, Harvey Jonathan; Mader, Wolfgang; Madsen, Alexander; Maeno, Mayuko; Maeno, Tadashi; Magradze, Erekle; Mahboubi, Kambiz; Mahlstedt, 
Joern; Mahmoud, Sara; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Mal, Prolay; Malaescu, Bogdan; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mamuzic, Judita; Mandelli, Beatrice; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Manfredini, Alessandro; Manhaes de Andrade Filho, Luciano; Manjarres Ramos, Joany Andreina; Mann, Alexander; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Mantifel, Rodger; Mapelli, Livio; March, Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marino, Christopher; Marques, Carlos; Marroquim, Fernando; Marsden, Stephen Philip; Marshall, Zach; Marti, Lukas Fritz; Marti-Garcia, Salvador; Martin, Brian; Martin, Brian Thomas; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martinez, Homero; Martinez, Mario; Martin-Haugh, Stewart; Martyniuk, Alex; Marx, Marilyn; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massol, Nicolas; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Matsunaga, Hiroyuki; Matsushita, Takashi; Mättig, Peter; Mättig, Stefan; Mattmann, Johannes; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; Mazini, Rachid; Mazzaferro, Luca; Mc Goldrick, Garrin; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McCubbin, Norman; McFarlane, Kenneth; Mcfayden, Josh; Mchedlidze, Gvantsa; Mclaughlan, Tom; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Medinnis, Michael; Meehan, Samuel; Meera-Lebbai, Razzak; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meineck, Christian; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Meloni, Federico; Mendoza Navas, Luis; Mengarelli, Alberto; Menke, Sven; Meoni, Evelin; Mercurio, Kevin Michael; Mergelmeyer, Sebastian; Meric, Nicolas; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Merritt, Hayes; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Carsten; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Middleton, Robin; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Mills, Corrinne; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano Moya, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Mitani, Takashi; Mitrevski, Jovan; Mitsou, Vasiliki A; Mitsui, Shingo; Miucci, Antonio; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Mochizuki, Kazuya; Moeller, Victoria; Mohapatra, Soumya; Mohr, Wolfgang; Molander, Simon; Moles-Valls, Regina; Mönig, Klaus; Monini, Caterina; Monk, James; Monnier, Emmanuel; Montejo Berlingen, Javier; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Mora Herrera, Clemencia; Moraes, Arthur; Morange, Nicolas; Morel, Julien; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morgenstern, Marcus; Morii, Masahiro; Moritz, Sebastian; Morley, Anthony Keith; Mornacchi, Giuseppe; Morris, John; Morvaj, Ljiljana; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Muanza, Steve; Mudd, Richard; Mueller, Felix; Mueller, James; Mueller, Klemens; Mueller, Thibaut; Mueller, Timo; Muenstermann, Daniel; Munwes, 
Yonathan; Murillo Quijada, Javier Alberto; Murray, Bill; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nackenhorst, Olaf; Nadal, Jordi; Nagai, Koichi; Nagai, Ryo; Nagai, Yoshikazu; Nagano, Kunihiro; Nagarkar, Advait; Nagasaka, Yasushi; Nagel, Martin; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Namasivayam, Harisankar; Nanava, Gizo; Narayan, Rohin; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Nayyar, Ruchika; Neal, Homer; Nechaeva, Polina; Neep, Thomas James; Negri, Andrea; Negri, Guido; Negrini, Matteo; Nektarijevic, Snezana; Nelson, Andrew; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neumann, Manuel; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen, Duong Hai; Nickerson, Richard; Nicolaidou, Rosy; Nicquevert, Bertrand; Nielsen, Jason; Nikiforou, Nikiforos; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolic-Audit, Irena; Nikolics, Katalin; Nikolopoulos, Konstantinos; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nisius, Richard; Nobe, Takuya; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Norberg, Scarlet; Nordberg, Markus; Nowak, Sebastian; Nozaki, Mitsuaki; Nozka, Libor; Ntekas, Konstantinos; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; Nuti, Francesco; O'Brien, Brendan Joseph; O'grady, Fionnbarr; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Obermann, Theresa; Ocariz, Jose; Ochi, Atsuhiko; Ochoa, Ines; Oda, Susumu; Odaka, Shigeru; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohman, Henrik; Ohshima, Takayoshi; Okamura, Wataru; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Olchevski, Alexander; Olivares Pino, Sebastian Andres; Oliveira Damazio, Denis; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Onofre, António; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlando, Nicola; Oropeza Barrera, Cristina; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Otero y Garzon, Gustavo; Otono, Hidetoshi; Ouchrif, Mohamed; Ouellette, Eric; Ould-Saada, Farid; Ouraou, Ahmimed; Oussoren, Koen Pieter; Ouyang, Qun; Ovcharova, Ana; Owen, Mark; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pachal, Katherine; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Pagáčová, Martina; Pagan Griso, Simone; Paganis, Efstathios; Pahl, Christoph; Paige, Frank; Pais, Preema; Pajchel, Katarina; Palacino, Gabriel; Palestini, Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panduro Vazquez, William; Pani, Priscilla; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Papadopoulou, Theodora; Papageorgiou, Konstantinos; Paramonov, Alexander; Paredes Hernandez, Daniela; Parker, Michael Andrew; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passaggio, Stefano; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Patricelli, Sergio; Pauly, Thilo; Pearce, James; Pedersen, Maiken; Pedraza Lopez, Sebastian; Pedro, Rute; Peleganchuk, Sergey; Pelikan, Daniel; Peng, Haiping; Penning, Bjoern; Penwell, John; Perepelitsa, Dennis; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perez Reale, Valeria; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Peschke, Richard; Peshekhonov, Vladimir; Peters, Krisztian; Peters, Yvonne; Petersen, Brian; Petersen, Jorgen; Petersen, Troels; Petit, Elisabeth; Petridis, 
Andreas; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petteni, Michele; Pettersson, Nora Emilia; Pezoa, Raquel; Phillips, Peter William; Piacquadio, Giacinto; Pianori, Elisabetta; Picazio, Attilio; Piccaro, Elisa; Piccinini, Maurizio; Piec, Sebastian Marcin; Piegaia, Ricardo; Pignotti, David; Pilcher, James; Pilkington, Andrew; Pina, João Antonio; Pinamonti, Michele; Pinder, Alex; Pinfold, James; Pingel, Almut; Pinto, Belmiro; Pires, Sylvestre; Pizio, Caterina; Pleier, Marc-Andre; Pleskot, Vojtech; Plotnikova, Elena; Plucinski, Pawel; Poddar, Sahill; Podlyski, Fabrice; Poettgen, Ruth; Poggioli, Luc; Pohl, David-leon; Pohl, Martin; Polesello, Giacomo; Policicchio, Antonio; Polifka, Richard; Polini, Alessandro; Pollard, Christopher Samuel; Polychronakos, Venetios; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Portell Bueso, Xavier; Pospelov, Guennady; Pospisil, Stanislav; Potamianos, Karolos; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Pozdnyakov, Valery; Prabhu, Robindra; Pralavorio, Pascal; Pranko, Aliaksandr; Prasad, Srivas; Pravahan, Rishiraj; Prell, Soeren; Price, Darren; Price, Joe; Price, Lawrence; Prieur, Damien; Primavera, Margherita; Proissl, Manuel; Prokofiev, Kirill; Prokoshin, Fedor; Protopapadaki, Eftychia-sofia; Protopopescu, Serban; Proudfoot, James; Przybycien, Mariusz; Przysiezniak, Helenka; Ptacek, Elizabeth; Pueschel, Elisa; Puldon, David; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qian, Jianming; Qin, Gang; Quadt, Arnulf; Quarrie, David; Quayle, William; Quilty, Donnchadha; Qureshi, Anum; Radeka, Veljko; Radescu, Voica; Radhakrishnan, Sooraj Krishnan; Radloff, Peter; Ragusa, Francesco; Rahal, Ghita; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Randle-Conde, Aidan Sean; Rangel-Smith, Camila; Rao, Kanury; Rauscher, Felix; Rave, Tobias Christian; Ravenscroft, Thomas; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Rehnisch, Laura; Reinsch, Andreas; Reisin, Hernan; Relich, Matthew; Rembser, Christoph; Ren, Zhongliang; Renaud, Adrien; Rescigno, Marco; Resconi, Silvia; Rezanova, Olga; Reznicek, Pavel; Rezvani, Reyhaneh; Richter, Robert; Ridel, Melissa; Rieck, Patrick; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Ritsch, Elmar; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robson, Aidan; Roda, Chiara; Rodrigues, Luis; Roe, Shaun; Røhne, Ole; Rolli, Simona; Romaniouk, Anatoli; Romano, Marino; Romeo, Gaston; Romero Adam, Elena; Rompotis, Nikolaos; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Anthony; Rose, Matthew; Rosendahl, Peter Lundgaard; Rosenthal, Oliver; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rosten, Rachel; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexandre; Rozen, Yoram; Ruan, Xifeng; Rubbo, Francesco; Rubinskiy, Igor; Rud, Viacheslav; Rudolph, Christian; Rudolph, Matthew Scott; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rurikova, Zuzana; Rusakovich, Nikolai; Ruschke, Alexander; Rutherfoord, John; Ruthmann, Nils; Ruzicka, Pavel; Ryabov, Yury; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Saavedra, Aldo; Sacerdoti, Sabrina; Saddique, Asif; Sadeh, Iftach; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Sakamoto, Hiroshi; Sakurai, Yuki; Salamanna, 
Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salek, David; Sales De Bruin, Pedro Henrique; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvachua Ferrando, Belén; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Sanchez, Arturo; Sánchez, Javier; Sanchez Martinez, Victoria; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Tanya; Sandoval, Carlos; Sandstroem, Rikard; Sankey, Dave; Sansoni, Andrea; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Santoyo Castillo, Itzebelt; Sapp, Kevin; Sapronov, Andrey; Saraiva, João; Sarrazin, Bjorn; Sartisohn, Georg; Sasaki, Osamu; Sasaki, Yuichi; Sauvage, Gilles; Sauvan, Emmanuel; Savard, Pierre; Savu, Dan Octavian; Sawyer, Craig; Sawyer, Lee; Saxon, David; Saxon, James; Sbarra, Carla; Sbrizzi, Antonio; Scanlon, Tim; Scannicchio, Diana; Scarcella, Mark; Schaarschmidt, Jana; Schacht, Peter; Schaefer, Douglas; Schaefer, Ralph; Schaelicke, Andreas; Schaepe, Steffen; Schaetzel, Sebastian; Schäfer, Uli; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R. Dean; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schillo, Christian; Schioppa, Marco; Schlenker, Stefan; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitt, Christopher; Schmitt, Sebastian; Schneider, Basil; Schnellbach, Yan Jie; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schoenrock, Bradley Daniel; Schorlemmer, Andre Lukas; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schramm, Steven; Schreyer, Manuel; Schroeder, Christian; Schuh, Natascha; Schultens, Martin Johannes; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwartzman, Ariel; Schwegler, Philipp; Schwemling, Philippe; Schwienhorst, Reinhard; Schwindling, Jerome; Schwindt, Thomas; Schwoerer, Maud; Sciacca, Gianfranco; Scifo, Estelle; Sciolla, Gabriella; Scott, Bill; Scuri, Fabrizio; Scutti, Federico; Searcy, Jacob; Sedov, George; Sedykh, Evgeny; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Sekula, Stephen; Selbach, Karoline Elfriede; Seliverstov, Dmitry; Sellers, Graham; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Serre, Thomas; Seuster, Rolf; Severini, Horst; Sforza, Federico; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Sherwood, Peter; Shimizu, Shima; Shimmin, Chase Owen; Shimojima, Makoto; Shiyakova, Mariya; Shmeleva, Alevtina; Shochet, Mel; Short, Daniel; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Shushkevich, Stanislav; Sicho, Petr; Sidorov, Dmitri; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silbert, Ohad; Silva, José; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simioni, Eduard; Simmons, Brinick; Simoniello, Rosa; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sircar, Anirvan; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skinnari, Louise Anastasia; Skottowe, Hugh Philip; Skovpen, Kirill; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Smakhtin, Vladimir; Smart, Ben; Smestad, Lillian; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Kenway; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snidero, Giacomo; 
Snyder, Scott; Sobie, Randall; Socher, Felix; Soffer, Abner; Soh, Dart-yin; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Soldatov, Evgeny; Soldevila, Urmila; Solfaroli Camillocci, Elena; Solodkov, Alexander; Solovyanov, Oleg; Solovyev, Victor; Sommer, Philip; Song, Hong Ye; Soni, Nitesh; Sood, Alexander; Sopko, Vit; Sopko, Bruno; Sosebee, Mark; Soualah, Rachik; Soueid, Paul; Soukharev, Andrey; South, David; Spagnolo, Stefania; Spanò, Francesco; Spearman, William Robert; Spighi, Roberto; Spigo, Giancarlo; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St Denis, Richard Dante; Stahlman, Jonathan; Stamen, Rainer; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stanescu-Bellu, Madalina; Stanitzki, Marcel Michael; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Staszewski, Rafal; Stavina, Pavel; Steele, Genevieve; Steinberg, Peter; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stern, Sebastian; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoebe, Michael; Stoerig, Kathrin; Stoicea, Gabriel; Stolte, Philipp; Stonjek, Stefan; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strauss, Emanuel; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Stucci, Stefania Antonia; Stugu, Bjarne; Styles, Nicholas Adam; Su, Dong; Su, Jun; Subramania, Halasya Siva; Subramaniam, Rajivalochan; Succurro, Antonella; Sugaya, Yorihito; Suhr, Chad; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Susinno, Giancarlo; Sutton, Mark; Suzuki, Yu; Svatos, Michal; Swedish, Stephen; Swiatlowski, Maximilian; Sykora, Ivan; Sykora, Tomas; Ta, Duc; Tackmann, Kerstin; Taenzer, Joe; Taffard, Anyes; Tafirout, Reda; Taiblum, Nimrod; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tam, Jason; Tamsett, Matthew; Tan, Kong Guan; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tanasijczuk, Andres Jorge; Tani, Kazutoshi; Tannoury, Nancy; Tapprogge, Stefan; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tashiro, Takuya; Tassi, Enrico; Tavares Delgado, Ademar; Tayalati, Yahya; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teischinger, Florian Alfred; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Terada, Susumu; Terashi, Koji; Terron, Juan; Terzo, Stefano; Testa, Marianna; Teuscher, Richard; Therhaag, Jan; Theveneaux-Pelzer, Timothée; Thoma, Sascha; Thomas, Juergen; Thomas-Wilsker, Joshuha; Thompson, Emily; Thompson, Paul; Thompson, Peter; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Thomson, Mark; Thong, Wai Meng; Thun, Rudolf; Tian, Feng; Tibbetts, Mark James; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tiouchichine, Elodie; Tipton, Paul; Tisserant, Sylvain; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tollefson, Kirsten; Tomlinson, Lee; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Topilin, Nikolai; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Tran, Huong Lan; Trefzger, Thomas; Tremblet, Louis; Tricoli, Alessandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Tripiana, Martin; Triplett, Nathan; Trischuk, 
William; Trocmé, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trovatelli, Monica; True, Patrick; Trzebinski, Maciej; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsirintanis, Nikolaos; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsuno, Soshi; Tsybychev, Dmitri; Tua, Alan; Tudorache, Alexandra; Tudorache, Valentina; Tuna, Alexander Naip; Tupputi, Salvatore; Turchikhin, Semen; Turecek, Daniel; Turk Cakir, Ilkay; Turra, Ruggero; Tuts, Michael; Tykhonov, Andrii; Tylmad, Maja; Tyndel, Mike; Uchida, Kirika; Ueda, Ikuo; Ueno, Ryuichi; Ughetto, Michael; Ugland, Maren; Uhlenbrock, Mathias; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Ungaro, Francesca; Unno, Yoshinobu; Urbaniec, Dustin; Urquijo, Phillip; Usai, Giulio; Usanova, Anna; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Valencic, Nika; Valentinetti, Sara; Valero, Alberto; Valery, Loic; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; Van Der Deijl, Pieter; van der Geer, Rogier; van der Graaf, Harry; Van Der Leeuw, Robin; van der Ster, Daniel; van Eldik, Niels; van Gemmeren, Peter; Van Nieuwkoop, Jacobus; van Vulpen, Ivo; van Woerden, Marius Cornelis; Vanadia, Marco; Vandelli, Wainer; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Vardanyan, Gagik; Vari, Riccardo; Varnes, Erich; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vazeille, Francois; Vazquez Schroeder, Tamara; Veatch, Jason; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Venturini, Alessio; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Viazlo, Oleksandr; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Vigne, Ralph; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinogradov, Vladimir; Virzi, Joseph; Vitells, Ofer; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vladoiu, Dan; Vlasak, Michal; Vogel, Adrian; Vokac, Petr; Volpi, Guido; Volpi, Matteo; von der Schmitt, Hans; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vos, Marcel; Voss, Rudiger; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vuillermet, Raphael; Vukotic, Ilija; Vykydal, Zdenek; Wagner, Wolfgang; Wagner, Peter; Wahrmund, Sebastian; Wakabayashi, Jun; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Waller, Peter; Walsh, Brian; Wang, Chao; Wang, Chiho; Wang, Fuquan; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Kuhan; Wang, Rui; Wang, Song-Ming; Wang, Tan; Wang, Xiaoxiao; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Warsinsky, Markus; Washbrook, Andrew; Wasicki, Christoph; Watanabe, Ippei; Watkins, Peter; Watson, Alan; Watson, Ian; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Ben; Webb, Samuel; Weber, Michele; Weber, Stefan Wolf; Webster, Jordan S; Weidberg, Anthony; Weigell, Philipp; Weinert, Benjamin; Weingarten, Jens; Weiser, Christian; Weits, Hartger; Wells, Phillippa; Wenaus, Torre; Wendland, Dennis; Weng, Zhili; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Wessels, Martin; Wetter, Jeffrey; Whalen, Kathleen; White, Andrew; White, Martin; White, Ryan; White, Sebastian; Whiteson, Daniel; Wicke, Daniel; Wickens, Fred; 
Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wijeratne, Peter Alexander; Wildauer, Andreas; Wildt, Martin Andre; Wilkens, Henric George; Will, Jonas Zacharias; Williams, Hugh; Williams, Sarah; Willis, Christopher; Willocq, Stephane; Wilson, John; Wilson, Alan; Wingerter-Seez, Isabelle; Winkelmann, Stefan; Winklmeier, Frank; Wittgen, Matthias; Wittig, Tobias; Wittkowski, Josephine; Wollstadt, Simon Jakob; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wright, Michael; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wulf, Evan; Wyatt, Terry Richard; Wynne, Benjamin; Xella, Stefania; Xiao, Meng; Xu, Da; Xu, Lailin; Yabsley, Bruce; Yacoob, Sahal; Yamada, Miho; Yamaguchi, Hiroshi; Yamaguchi, Yohei; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamanaka, Takashi; Yamauchi, Katsuya; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Un-Ki; Yang, Yi; Yanush, Serguei; Yao, Liwen; Yao, Weiming; Yasu, Yoshiji; Yatsenko, Elena; Yau Wong, Kaven Henry; Ye, Jingbo; Ye, Shuwei; Yen, Andy L; Yildirim, Eda; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Rikutaro; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Youssef, Saul; Yu, David Ren-Hwa; Yu, Jaehoon; Yu, Jiaming; Yu, Jie; Yuan, Li; Yurkewicz, Adam; Zabinski, Bartlomiej; Zaidan, Remi; Zaitsev, Alexander; Zaman, Aungshuman; Zambito, Stefano; Zanello, Lucia; Zanzi, Daniele; Zaytsev, Alexander; Zeitnitz, Christian; Zeman, Martin; Zemla, Andrzej; Zengel, Keith; Zenin, Oleg; Ženiš, Tibor; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhang, Dongliang; Zhang, Fangzhou; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Lei; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Lei; Zhou, Ning; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zibell, Andre; Zieminska, Daria; Zimine, Nikolai; Zimmermann, Christoph; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Zinonos, Zinonas; Ziolkowski, Michael; Zitoun, Robert; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zurzolo, Giovanni; Zutshi, Vishnu; Zwalinski, Lukasz

    2014-01-01

    The liquid argon calorimeter is a key component of the ATLAS detector installed at the CERN Large Hadron Collider. The primary purpose of this calorimeter is the measurement of electrons and photons. It also provides a crucial input for measuring jets and missing transverse momentum. An advanced data monitoring procedure was designed to quickly identify issues that would affect detector performance and ensure that only the best quality data are used for physics analysis. This article presents the validation procedure developed during the 2011 and 2012 LHC data-taking periods, in which more than 98% of the proton–proton luminosity recorded by ATLAS at a centre-of-mass energy of 7–8 TeV had calorimeter data quality suitable for physics analysis.
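
    The quoted fraction of luminosity suitable for physics comes from per-luminosity-block quality assessments. A minimal sketch of that bookkeeping, with hypothetical field names and not the ATLAS data-quality framework itself, is:

"""Illustrative sketch (not the ATLAS data-quality framework): compute the
fraction of recorded luminosity whose calorimeter data-quality flag is 'good',
from per-luminosity-block records."""

def good_luminosity_fraction(lumi_blocks: list) -> float:
    """lumi_blocks: list of dicts like {"lumi": 0.8, "flag": "good"} (fields hypothetical)."""
    total = sum(lb["lumi"] for lb in lumi_blocks)
    good = sum(lb["lumi"] for lb in lumi_blocks if lb["flag"] == "good")
    return good / total if total else 0.0

if __name__ == "__main__":
    blocks = [{"lumi": 1.0, "flag": "good"},
              {"lumi": 0.5, "flag": "good"},
              {"lumi": 0.03, "flag": "noise_burst"}]  # bad block rejected for physics
    print(f"good fraction: {good_luminosity_fraction(blocks):.3f}")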

  13. Commissioning and first operation of the pCVD diamond ATLAS Beam Conditions Monitor

    CERN Document Server

    Dobos, D

    2009-01-01

    The main aim of the ATLAS Beam Conditions Monitor is to protect the ATLAS Inner Detector silicon trackers from high radiation doses caused by LHC beam incidents, e.g. magnet failures. The BCM uses a total of 16 polycrystalline chemical vapor deposition (pCVD) diamond sensors, each 1×1 cm² in area and 500 μm thick. They are arranged in 8 positions around the ATLAS LHC interaction point. Time-difference measurements with sub-nanosecond resolution are performed to distinguish between particles from a collision and spray particles from a beam incident; an abundance of the latter leads the BCM to provoke an abort of the LHC beam. An FPGA-based readout system with a sampling rate of 2.56 GHz performs the online data analysis and interfaces the results to ATLAS and the beam abort system. The BCM diamond sensors, the detector modules and their readout system are described. Results of the operation with the first LHC beams are reported, as are results of commissioning and timing measurements (e.g. with cosmic muons) in preparation for first ...
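
    The time-difference measurement mentioned above exploits the fact that collision products reach the stations on either side of the interaction point almost simultaneously, whereas beam-background particles travelling along the beam line cross one station before the other. A toy Python sketch of this classification follows; the station distance and timing window are assumed values for illustration, not the detector's calibration.

"""Illustrative sketch (not the BCM firmware): classify a hit pair by the time
difference between the two stations on either side of the interaction point.
Station position and timing window are illustrative values only."""

C_LIGHT_M_PER_NS = 0.2998
STATION_Z_M = 1.84   # assumed distance of each station from the interaction point
WINDOW_NS = 1.0      # assumed coincidence window (the record quotes sub-nanosecond timing)

def classify(t_side_a_ns: float, t_side_c_ns: float) -> str:
    """Return 'collision-like' or 'background-like' for one pair of hit times."""
    dt = t_side_a_ns - t_side_c_ns
    # Collision products reach both stations at (nearly) the same time.
    if abs(dt) < WINDOW_NS:
        return "collision-like"
    # Background travelling along the beam crosses one station ~2*z/c before the other.
    expected = 2.0 * STATION_Z_M / C_LIGHT_M_PER_NS
    if abs(abs(dt) - expected) < WINDOW_NS:
        return "background-like"
    return "unclassified"

if __name__ == "__main__":
    print(classify(10.0, 10.2))   # collision-like
    print(classify(10.0, 22.3))   # background-like (|dt| of roughly 12 ns)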

  14. Monitoring the Radiation Damage of the ATLAS Pixel Detector

    CERN Document Server

    Cooke, M; The ATLAS collaboration

    2012-01-01

    The Pixel Detector is the innermost charged particle tracking component employed by the ATLAS experiment at the CERN Large Hadron Collider (LHC). The instantaneous luminosity delivered by the LHC, now routinely in excess of 5x10^{33} cm^{-2} s^{-1}, results in a rapidly increasing accumulated radiation dose to the detector. Methods based on the sensor depletion properties and leakage current are used to monitor the evolution of the radiation damage, and results from the 2011 run are presented.
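
    Leakage-current based monitoring commonly relies on the approximately linear dependence of the radiation-induced current on the 1 MeV neutron-equivalent fluence, Delta_I = alpha * Phi_eq * V. The sketch below inverts this relation to estimate the fluence from a measured current increase; the damage constant is a typical textbook-scale value and not the calibration used in the ATLAS analysis.

"""Illustrative sketch (not the ATLAS analysis): estimate the equivalent fluence
from the radiation-induced leakage-current increase via Delta_I = alpha * Phi_eq * V."""

ALPHA_A_PER_CM = 4.0e-17   # assumed current-related damage constant [A/cm] at a reference temperature

def fluence_from_leakage(delta_i_amps: float, depleted_volume_cm3: float) -> float:
    """Estimate the 1 MeV neutron-equivalent fluence [cm^-2] from the current increase."""
    return delta_i_amps / (ALPHA_A_PER_CM * depleted_volume_cm3)

if __name__ == "__main__":
    # Hypothetical numbers: a 10 uA increase over a 0.01 cm^3 depleted volume.
    print(f"{fluence_from_leakage(10e-6, 0.01):.3e} n_eq/cm^2")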

  15. Monitoring the radiation damage of the ATLAS pixel detector

    International Nuclear Information System (INIS)

    Cooke, M.

    2013-01-01

    The pixel detector is the innermost charged particle tracking component employed by the ATLAS experiment at the CERN Large Hadron Collider (LHC). The instantaneous luminosity delivered by the LHC, now routinely in excess of 5×10^{33} cm^{-2} s^{-1}, results in a rapidly increasing accumulated radiation dose to the detector. Methods based on the sensor depletion properties and leakage current are used to monitor the evolution of the radiation damage, and results from the 2011 run are presented.

  16. ATLAS Offline Data Quality Monitoring

    CERN Document Server

    Adelman, J; Boelaert, N; D'Onofrio, M; Frost, J A; Guyot, C; Hauschild, M; Hoecker, A; Leney, K J C; Lytken, E; Martinez-Perez, M; Masik, J; Nairz, A M; Onyisi, P U E; Roe, S; Schatzel, S; Schaetzel, S; Wilson, M G

    2010-01-01

    The ATLAS experiment at the Large Hadron Collider reads out 100 million electronic channels at a rate of 200 Hz. Before the data are shipped to storage and analysis centres across the world, they must be checked for irregularities which would render them scientifically useless. Data quality offline monitoring provides prompt feedback from full first-pass event reconstruction at the Tier-0 computing centre and can reveal problems in the detector hardware and in the data processing chain. Detector information and reconstructed proton-proton collision event characteristics are distilled into a few key histograms and numbers which are automatically compared with a reference. The results of the comparisons are saved as status flags in a database and are published together with the histograms on a web server. They are inspected by a 24/7 shift crew who can notify on-call experts in case of problems and, in extreme cases, signal a data-taking abort.
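
    The automatic comparison of monitored histograms with a reference can be illustrated, without claiming to reproduce the ATLAS data-quality algorithms, by a simple per-bin chi-square test mapped onto a traffic-light flag; the thresholds are hypothetical.

"""Illustrative sketch (not the ATLAS DQ framework): compare a monitored
histogram with a reference bin by bin and turn the result into a status flag.
Thresholds are hypothetical."""

def chi2_per_bin(observed: list, reference: list) -> float:
    chi2, nbins = 0.0, 0
    for obs, ref in zip(observed, reference):
        if ref > 0:
            chi2 += (obs - ref) ** 2 / ref
            nbins += 1
    return chi2 / nbins if nbins else 0.0

def status_flag(observed: list, reference: list,
                yellow: float = 2.0, red: float = 5.0) -> str:
    """Map the per-bin chi-square to GREEN / YELLOW / RED."""
    value = chi2_per_bin(observed, reference)
    if value > red:
        return "RED"
    if value > yellow:
        return "YELLOW"
    return "GREEN"

if __name__ == "__main__":
    reference = [100, 200, 300, 200, 100]
    print(status_flag([105, 190, 310, 195, 98], reference))   # GREEN
    print(status_flag([10, 400, 30, 500, 10], reference))     # RED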

  17. A camac-based intelligent subsystem for ATLAS example application: cryogenic monitoring and control

    International Nuclear Information System (INIS)

    Pardo, R.; Kawarasaki, Y.; Wasniewski, K.

    1985-01-01

    A subunit of the CAMAC accelerator control system of ATLAS for monitoring and, eventually, controlling the cryogenic refrigeration and distribution facility is under development. This development is the first application of a philosophy of distributed intelligence which will be applied throughout the ATLAS control system. The control concept is that of an intelligent subunit of the existing ATLAS CAMAC control highway. A single board computer resides in an auxiliary crate controller which allows access to all devices within the crate. The local SBC can communicate with the host over the CAMAC highway via a protocol involving the use of memory in the SBC which can be accessed from the host in DMA mode. This provides a mechanism for global communications, such as for alarm conditions, as well as allowing the cryogenic system to respond to the demands of the accelerator system.

  18. Control, Test and Monitoring Software Framework for the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Achenbach, R; Aharrouche, M; Andrei, V; Åsman, B; Barnett, B M; Bauss, B; Bendel, M; Bohm, C; Booth, J R A; Bracinik, J; Brawn, I P; Charlton, D G; Childers, J T; Collins, N J; Curtis, C J; Davis, A O; Eckweiler, S; Eisenhandler, E F; Faulkner, P J W; Fleckner, J; Föhlisch, F; Gee, C N P; Gillman, A R; Goringer, C; Groll, M; Hadley, D R; Hanke, P; Hellman, S; Hidvegi, A; Hillier, S J; Johansen, M; Kluge, E E; Kühl, T; Landon, M; Lendermann, V; Lilley, J N; Mahboubi, K; Mahout, G; Meier, K; Middleton, R P; Moa, T; Morris, J D; Müller, F; Neusiedl, A; Ohm, C; Oltmann, B; Perera, V J O; Prieur, D P F; Qian, W; Rieke, S; Rühr, F; Sankey, D P C; Schäfer, U; Schmitt, K; Schultz-Coulon, H C; Silverstein, S; Sjölin, J; Staley, R J; Stamen, R; Stockton, M C; Tan, C L A; Tapprogge, S; Thomas, J P; Thompson, P D; Watkins, P M; Watson, A; Weber, P; Wessels, M; Wildt, M

    2008-01-01

    The ATLAS first-level calorimeter trigger is a hardware-based system designed to identify high-pT jets, electron/photon and tau candidates and to measure total and missing ET in the ATLAS calorimeters. The complete trigger system consists of over 300 custom-designed VME modules of varying complexity. These modules are based around FPGAs or ASICs with many configurable parameters, both to initialize the system with correct calibrations and timings and to allow flexibility in the trigger algorithms. The control, testing and monitoring of these modules requires a comprehensive, but well-designed and modular, software framework, which we will describe in this paper.

  19. Upgrades for Offline Data Quality Monitoring at ATLAS

    CERN Document Server

    Joergensen, M D; The ATLAS collaboration; Frost, J

    2013-01-01

    The ATLAS offline data quality monitoring infrastructure functioned successfully during the 2010-2012 run of the LHC. During the 2013-14 long shutdown, a large number of upgrades will be made in response to user needs and to take advantage of new technologies - for example, deploying richer web applications, improving dynamic visualization of data, streamlining configuration, and moving applications to a common messaging bus. Additionally, consolidation and integration activities will occur. We will discuss lessons learned so far and the progress of the upgrade project, as well as associated improvements to the data reconstruction and processing chain.

  20. A System for Monitoring and Tracking the LHC Beam Spot within the ATLAS High Level Trigger

    CERN Document Server

    Bartoldus, R; The ATLAS collaboration; Cogan, J; Salnikov, A; Strauss, E; Winklmeier, F

    2012-01-01

    The parameters of the beam spot produced by the LHC in the ATLAS interaction region are computed online using the ATLAS High Level Trigger (HLT) system. The high rate of triggered events is exploited to make precise measurements of the position, size and orientation of the luminous region in near real-time, as these parameters change significantly even during a single data-taking run. We present the challenges, solutions and results for the online determination, monitoring and beam spot feedback system in ATLAS. A specially designed algorithm, which uses tracks registered in the silicon detectors to reconstruct event vertices, is executed on the HLT processor farm of several thousand CPU cores. Monitoring histograms from all the cores are sampled and aggregated across the farm every 60 seconds. The reconstructed beam values are corrected for detector resolution effects, measured in situ from the separation of vertices whose tracks have been split into two collections. Furthermore, measurements for individual ...
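    The resolution correction mentioned above can be illustrated with a short, hedged sketch (not the HLT algorithm; the numbers are invented): the raw luminous-region width is inflated by the vertex resolution, estimated in situ from split vertices, and the correction is a subtraction in quadrature.

      import math

      def corrected_beam_width(sigma_raw_mm, sigma_res_mm):
          """Subtract the per-vertex resolution in quadrature from the raw width."""
          return math.sqrt(max(sigma_raw_mm ** 2 - sigma_res_mm ** 2, 0.0))

      # Assumed numbers for illustration only: a 20 micron raw transverse width
      # and a 7 micron vertex resolution estimated from split vertices.
      print(round(corrected_beam_width(0.020, 0.007), 4))   # -> 0.0187 mm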

  1. Dashboard Task Monitor for Managing ATLAS User Analysis on the Grid

    Science.gov (United States)

    Sargsyan, L.; Andreeva, J.; Jha, M.; Karavakis, E.; Kokoszkiewicz, L.; Saiz, P.; Schovancova, J.; Tuckett, D.; Atlas Collaboration

    2014-06-01

    The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides analysis users with a tool that is independent of the operating system and Grid environment. This contribution describes the functionality of the application and its implementation details, in particular authentication, authorization and audit of the management operations.

  2. Dashboard task monitor for managing ATLAS user analysis on the grid

    International Nuclear Information System (INIS)

    Sargsyan, L; Andreeva, J; Karavakis, E; Saiz, P; Tuckett, D; Jha, M; Kokoszkiewicz, L; Schovancova, J

    2014-01-01

    The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides analysis users with a tool that is independent of the operating system and Grid environment. This contribution describes the functionality of the application and its implementation details, in particular authentication, authorization and audit of the management operations.

  3. Upgrade and integration of the configuration and monitoring tools for the ATLAS Online farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Darlea, G L; Dumitru, I; Scannicchio, DA; Twomey, M S; Valsan, M L; Zaytsev, A

    2012-01-01

    The ATLAS Online farm is a non-homogeneous cluster of nearly 3000 PCs which run the data acquisition, trigger and control of the ATLAS detector. The systems are configured and monitored by a combination of open-source tools, such as Quattor and Nagios, and tools developed in-house, such as ConfDB. We report on the ongoing introduction of new provisioning and configuration tools, Puppet and ConfDB v2, which are more flexible and allow automation for previously uncovered needs, and on the upgrade and integration of the monitoring and alerting tools, including the interfacing of these with the TDAQ Shifter Assistant software and their integration with configuration tools. We discuss the selection of the tools and the assessment of their functionality and performance, and how they enabled the introduction of virtualization for selected services.

  4. Upgrade and integration of the configuration and monitoring tools for the ATLAS Online farm

    International Nuclear Information System (INIS)

    Ballestrero, S; Darlea, G–L; Twomey, M S; Brasolin, F; Dumitru, I; Valsan, M L; Scannicchio, D A; Zaytsev, A

    2012-01-01

    The ATLAS Online farm is a non-homogeneous cluster of nearly 3000 systems which run the data acquisition, trigger and control of the ATLAS detector. The systems are configured and monitored by a combination of open-source tools, such as Quattor and Nagios, and tools developed in-house, such as ConfDB. We report on the ongoing introduction of new provisioning and configuration tools, Puppet and ConfDB v2, which are more flexible and allow automation for previously uncovered needs, and on the upgrade and integration of the monitoring and alerting tools, including the interfacing of these with the TDAQ Shifter Assistant software and their integration with configuration tools. We discuss the selection of the tools and the assessment of their functionality and performance, and how they enabled the introduction of virtualization for selected services.

  5. Software Validation in ATLAS

    International Nuclear Information System (INIS)

    Hodgkinson, Mark; Seuster, Rolf; Simmons, Brinick; Sherwood, Peter; Rousseau, David

    2012-01-01

    The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We will discuss a number of different strategies used to validate the ATLAS offline software: running the ATLAS framework software, Athena, in a variety of configurations daily on each nightly build via the ATLAS Nightly System (ATN) and Run Time Tester (RTT) systems; the monitoring of these tests and checking of the compilation of the software by distributed teams of rotating shifters; monitoring of and follow-up on bug reports by the shifter teams; and periodic software cleaning weeks to improve the quality of the offline software further.

  6. ATLAS Future Framework Requirements Group Report

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    The Future Frameworks Requirements Group was constituted in Summer 2013 to consider and summarise the framework requirements from trigger and offline for configuring, scheduling and monitoring the data processing software needed by the ATLAS experiment. The principal motivation for such a re-examination arises from the current and anticipated evolution of CPUs, where multiple cores, hyper-threading and wide vector registers require a shift to a concurrent programming model. Such a model requires extensive changes in the current Gaudi/Athena frameworks and offers the opportunity to consider how HLT and offline processing can be better accommodated within the ATLAS framework. This note contains the report of the Future Frameworks Requirements Group.

  7. The AAL project: automated monitoring and intelligent analysis for the ATLAS data taking infrastructure

    International Nuclear Information System (INIS)

    Kazarov, A; Miotto, G Lehmann; Magnoni, L

    2012-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is the infrastructure responsible for collecting and transferring ATLAS experimental data from detectors to the mass storage system. It relies on a large, distributed computing environment, including thousands of computing nodes with thousands of applications running concurrently. In such a complex environment, information analysis is fundamental for controlling application behavior, error reporting and operational monitoring. During data taking runs, streams of messages sent by applications via the message reporting system, together with data published by applications via information services, are the main sources of knowledge about the correctness of running operations. The flow of data produced (with an average rate of O(1-10 kHz)) is constantly monitored by experts to detect problems or misbehavior. This requires strong competence and experience in understanding and discovering problems and root causes, and often the meaningful information is not in a single message or update but in the aggregated behavior over a certain time-line. The AAL project aims to reduce the manpower needed and to assure a constantly high quality of problem detection by automating most of the monitoring tasks and providing real-time correlation of data-taking and system metrics. This project combines technologies coming from different disciplines; in particular it leverages an Event Driven Architecture to unify the flow of data from the ATLAS infrastructure, a Complex Event Processing (CEP) engine for correlation of events, and a message-oriented architecture for component integration. The project is composed of two main components: a core processing engine, responsible for correlation of events through expert-defined queries, and a web-based front-end to present real-time information and interact with the system. All components work in a loosely coupled, event-based architecture, with a message broker
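    As a simplified stand-in for the kind of time-window correlation rule such an engine evaluates (not the actual AAL/CEP queries; the application name, window and threshold are invented), the sketch below raises an alert when one application emits too many error messages within a sliding window:

      from collections import defaultdict, deque

      WINDOW_S, THRESHOLD = 60.0, 5          # assumed values for illustration
      history = defaultdict(deque)           # application name -> recent error timestamps

      def on_error_message(app, timestamp_s):
          """Feed one error message into the sliding-window rule."""
          q = history[app]
          q.append(timestamp_s)
          while q and timestamp_s - q[0] > WINDOW_S:
              q.popleft()
          if len(q) >= THRESHOLD:
              print(f"ALERT: {app} produced {len(q)} errors in the last {WINDOW_S:.0f} s")

      for t in range(0, 50, 10):
          on_error_message("SomeROSApplication", float(t))   # hypothetical application name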

  8. The AAL project: automated monitoring and intelligent analysis for the ATLAS data taking infrastructure

    Science.gov (United States)

    Kazarov, A.; Lehmann Miotto, G.; Magnoni, L.

    2012-06-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is the infrastructure responsible for collecting and transferring ATLAS experimental data from detectors to the mass storage system. It relies on a large, distributed computing environment, including thousands of computing nodes with thousands of applications running concurrently. In such a complex environment, information analysis is fundamental for controlling application behavior, error reporting and operational monitoring. During data taking runs, streams of messages sent by applications via the message reporting system, together with data published by applications via information services, are the main sources of knowledge about the correctness of running operations. The flow of data produced (with an average rate of O(1-10 kHz)) is constantly monitored by experts to detect problems or misbehavior. This requires strong competence and experience in understanding and discovering problems and root causes, and often the meaningful information is not in a single message or update but in the aggregated behavior over a certain time-line. The AAL project aims to reduce the manpower needed and to assure a constantly high quality of problem detection by automating most of the monitoring tasks and providing real-time correlation of data-taking and system metrics. This project combines technologies coming from different disciplines; in particular it leverages an Event Driven Architecture to unify the flow of data from the ATLAS infrastructure, a Complex Event Processing (CEP) engine for correlation of events, and a message-oriented architecture for component integration. The project is composed of two main components: a core processing engine, responsible for correlation of events through expert-defined queries, and a web-based front-end to present real-time information and interact with the system. All components work in a loosely coupled, event-based architecture, with a message broker

  9. The ATLAS beam conditions monitor

    CERN Document Server

    Mikuz, M; Dolenc, I; Kagan, H; Kramberger, G; Frais-Kölbl, H; Gorisek, A; Griesmayer, E; Mandic, I; Pernegger, H; Trischuk, W; Weilhammer, P; Zavrtanik, M

    2006-01-01

    The ATLAS beam conditions monitor is being developed as a stand-alone device allowing LHC collisions to be separated from background events induced either by beam-gas interactions or by beam accidents, for example scraping at the collimators upstream of the spectrometer. This separation can be achieved by timing coincidences between two stations placed symmetrically around the interaction point. The 25 ns repetition of collisions poses very stringent requirements on the timing resolution. The optimum separation between collision and background events is just 12.5 ns, implying a distance of 3.8 m between the two stations. 3 ns wide pulses are required, with 1 ns rise time and baseline restoration in 10 ns. Combined with the radiation field of 10^15 cm^-2 in 10 years of LHC operation, only diamond detectors are considered suitable for this task. pCVD diamond pad detectors of 1 cm² area and around 500 μm thickness were assembled with a two-stage RF current amplifier and tested in a proton beam at MGH, Boston and an SPS pion beam at...
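    A quick back-of-the-envelope check of the numbers quoted above: with 25 ns between bunch crossings the best collision/background separation is half a period, 12.5 ns, and the corresponding station separation follows from the speed of light.

      C_M_PER_NS = 0.2998                    # speed of light in metres per nanosecond
      print(round(12.5 * C_M_PER_NS, 2))     # -> 3.75 m, consistent with the quoted 3.8 m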

  10. The AAL project: automated monitoring and intelligent analysis for the ATLAS data taking infrastructure

    CERN Document Server

    Kazarov, A; The ATLAS collaboration; Magnoni, L

    2011-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is the infrastructure responsible for filtering and transferring ATLAS experimental data from detectors to the mass storage system. It relies on a large, distributed computing environment, including thousands of computing nodes with thousands of applications running concurrently. In such a complex environment, information analysis is fundamental for controlling application behavior, error reporting and operational monitoring. During data taking runs, streams of messages sent by applications via the message reporting system, together with data published by applications via information services, are the main sources of knowledge about the correctness of running operations. The huge flow of data produced (with an average rate of O(1-10 kHz)) is constantly monitored by experts to detect problems or misbehavior. This requires strong competence and experience in understanding and discovering problems and root causes, and often the meaningful in...

  11. ATLAS DBM Module Qualification

    Energy Technology Data Exchange (ETDEWEB)

    Soha, Aria [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gorisek, Andrej [J. Stefan Inst., Ljubljana (Slovenia); Zavrtanik, Marko [J. Stefan Inst., Ljubljana (Slovenia); Sokhranyi, Grygorii [J. Stefan Inst., Ljubljana (Slovenia); McGoldrick, Garrin [Univ. of Toronto, ON (Canada); Cerv, Matevz [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2014-06-18

    This is a technical scope of work (TSW) between the Fermi National Accelerator Laboratory (Fermilab) and the experimenters of Jozef Stefan Institute, CERN, and University of Toronto who have committed to participate in beam tests to be carried out during the 2014 Fermilab Test Beam Facility program. Chemical Vapour Deposition (CVD) diamond has a number of properties that make it attractive for high energy physics detector applications. Its large band-gap (5.5 eV) and large displacement energy (42 eV/atom) make it a material that is inherently radiation tolerant, with very low leakage currents and high thermal conductivity. CVD diamond is being investigated by the RD42 Collaboration for use very close to LHC interaction regions, where the most extreme radiation conditions are found. This document builds on that work and proposes a highly spatially segmented diamond-based luminosity monitor to complement the time-segmented ATLAS Beam Conditions Monitor (BCM) so that, when the Minimum Bias Trigger Scintillators (MBTS) and LUCID (LUminosity measurement using a Cherenkov Integrating Detector) have difficulty functioning, the ATLAS luminosity measurement is not compromised.

  12. ATLAS Pixel Detector Operational Experience

    CERN Document Server

    Di Girolamo, B; The ATLAS collaboration

    2011-01-01

    The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN, providing high-resolution measurements of charged particle tracks in the high radiation environment close to the collision region. This capability is vital for the identification and measurement of proper decay times of long-lived particles such as b-hadrons, and thus vital for the ATLAS physics program. The detector provides hermetic coverage with three cylindrical layers and three layers of forward and backward pixel detectors. It consists of approximately 80 million pixels that are individually read out via chips bump-bonded to 1744 n-in-n silicon substrates. In this talk, results from the successful operation of the Pixel Detector at the LHC will be presented, including monitoring, calibration procedures, timing optimization and detector performance. The detector performance is excellent: 96.9% of the pixels are operational, noise occupancy and hit efficiency exceed the design specification, an...

  13. The ATLAS DDM Tracer monitoring framework

    CERN Document Server

    ZANG, D; The ATLAS collaboration; BARISITS, M; LASSNIG, M; Andrew STEWART, G; MOLFETAS, A; BEERMANN, T

    2012-01-01

    The DDM Tracer Service aims to trace and monitor ATLAS file operations on the Worldwide LHC Computing Grid. The volume of traces has increased significantly since the service started in 2009: there are now about 5 million trace messages every day, with peaks of greater than 250 Hz and peak rates continuing to climb, which poses a significant challenge for the current service structure. Analysis of large datasets based on on-demand queries to the relational database management system (RDBMS), i.e. Oracle, can be problematic and can have a significant effect on the database's performance. Consequently, we have investigated new high-availability technologies such as messaging infrastructure, specifically ActiveMQ, and key-value stores. The advantages of key-value store technology are that it is distributed and highly scalable, and its write performance is usually much better than that of an RDBMS, all of which is very useful for the Tracer service. Indexes and distributed counters have also been tested to improve...
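    The pre-aggregation idea behind the key-value approach can be sketched as follows (illustrative only, not the DDM Tracer implementation; the trace fields are hypothetical): counters keyed by site and dataset are updated as trace messages arrive, so monitoring queries read cheap counters instead of scanning the RDBMS on demand.

      from collections import Counter

      counters = Counter()    # stand-in for a distributed key-value store

      def on_trace(trace):
          """Update per-site and per-dataset counters for one file-access trace."""
          counters[("site", trace["site"], trace["status"])] += 1
          counters[("dataset", trace["dataset"], trace["status"])] += 1

      on_trace({"site": "CERN-PROD", "dataset": "data12_8TeV.periodA", "status": "DONE"})
      on_trace({"site": "CERN-PROD", "dataset": "data12_8TeV.periodA", "status": "FAILED"})
      print(counters[("site", "CERN-PROD", "DONE")])    # -> 1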

  14. Integrated System for Performance Monitoring of ATLAS TDAQ Network

    CERN Document Server

    Savu, D; The ATLAS collaboration; Martin, B; Sjoen, R; Batraneanu, S; Stancu, S

    2010-01-01

    The ATLAS TDAQ Network consists of three separate networks spanning four levels of the experimental building. Over 200 edge switches and 5 multi-blade chassis routers are used to interconnect 2000 processors, adding up to more than 7000 high speed interfaces. In order to substantially speed up ad-hoc and post-mortem analysis, a scalable, yet flexible, integrated system for monitoring both network statistics and environmental conditions, processor parameters and data taking characteristics was required. For successful up-to-the-minute monitoring, information from many SNMP-compliant devices, independent databases and custom APIs was gathered, stored and displayed in an optimal way. Easy navigation and compact aggregation of multiple data sources were the main requirements; characteristics not found in any of the tested products, either open-source or commercial. This paper describes how performance, scalability and display issues were addressed and what challenges the project faced during development and deplo...

  15. Predictive analytics tools to adjust and monitor performance metrics for the ATLAS Production System

    CERN Document Server

    Titov, Mikhail; The ATLAS collaboration

    2017-01-01

    Every scientific workflow involves an organizational part whose purpose is to plan the analysis process thoroughly according to a defined schedule and thus keep the work progressing efficiently. Information such as an estimate of the processing time or the possibility of a system outage (abnormal behaviour) improves the planning process, assists in monitoring system performance and helps predict its next state. The ATLAS Production System is an automated scheduling system that is responsible for central production of Monte-Carlo data, highly specialized production for physics groups, as well as data pre-processing and analysis using such facilities as grid infrastructures, clouds and supercomputers. With its next generation (ProdSys2) the processing rate is around 2M tasks per year, which is more than 365M jobs per year. ProdSys2 evolves to accommodate a growing number of users and new requirements from the ATLAS Collaboration, physics groups and individual users. ATLAS Distributed Computing in its current stat...

  16. The AAL project: Automated monitoring and intelligent AnaLysis for the ATLAS data taking infrastructure

    CERN Document Server

    Magnoni, L; The ATLAS collaboration; Kazarov, A

    2011-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is the infrastructure responsible for filtering and transferring ATLAS experimental data from detectors to the mass storage system. It relies on a large, distributed computing environment, including thousands of computing nodes with thousands of applications running concurrently. In such a complex environment, information analysis is fundamental for controlling application behavior, error reporting and operational monitoring. During data taking runs, streams of messages sent by applications via the message reporting system, together with data published by applications via information services, are the main sources of knowledge about the correctness of running operations. The huge flow of data produced (with an average rate of O(1-10 kHz)) is constantly monitored by experts to detect problems or misbehavior. This requires strong competence and experience in understanding and discovering problems and root causes, and often the meaningful in...

  17. Diamond pad detector telescope for beam conditions and luminosity monitoring in ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Mikuz, M. [Jozef Stefan Institute and Department of Physics, University of Ljubljana, Ljubljana (Slovenia)], E-mail: Marko.Mikuz@ijs.si; Cindro, V.; Dolenc, I. [Jozef Stefan Institute and Department of Physics, University of Ljubljana, Ljubljana (Slovenia); Frais-Koelbl, H. [University of Applied Sciences Wiener Neustadt and Fotec, Wiener Neustadt (Austria); Gorisek, A. [CERN, Geneva (Switzerland); Griesmayer, E. [University of Applied Sciences Wiener Neustadt and Fotec, Wiener Neustadt (Austria); Kagan, H. [Ohio State University, Columbus (United States); Kramberger, G.; Mandic, I. [Jozef Stefan Institute and Department of Physics, University of Ljubljana, Ljubljana (Slovenia); Niegl, M. [University of Applied Sciences Wiener Neustadt and Fotec, Wiener Neustadt (Austria); Pernegger, H. [CERN, Geneva (Switzerland); Trischuk, W. [University of Toronto, Toronto (Canada); Weilhammer, P. [CERN, Geneva (Switzerland); Zavrtanik, M. [Jozef Stefan Institute and Department of Physics, University of Ljubljana, Ljubljana (Slovenia)

    2007-09-01

    Beam conditions and the potential detector damage resulting from their anomalies have pushed the LHC experiments to plan their own monitoring devices in addition to those provided by the machine. ATLAS decided to build a telescope composed of two stations with four diamond pad detector modules each, placed symmetrically around the interaction point at z=±183.8 cm and r≈55 mm (η≈4.2). Equipped with fast electronics, it allows time-of-flight separation of events resulting from beam anomalies from normally occurring p-p interactions. In addition it will provide a coarse measurement of the LHC luminosity in ATLAS. Ten detector modules have been assembled and subjected to tests, from characterization of bare diamonds to source and beam tests. Preliminary results of a beam test in the CERN PS indicate a signal-to-noise ratio of 14±2.

  18. Diamond pad detector telescope for beam conditions and luminosity monitoring in ATLAS

    International Nuclear Information System (INIS)

    Mikuz, M.; Cindro, V.; Dolenc, I.; Frais-Koelbl, H.; Gorisek, A.; Griesmayer, E.; Kagan, H.; Kramberger, G.; Mandic, I.; Niegl, M.; Pernegger, H.; Trischuk, W.; Weilhammer, P.; Zavrtanik, M.

    2007-01-01

    Beam conditions and the potential detector damage resulting from their anomalies have pushed the LHC experiments to plan their own monitoring devices in addition to those provided by the machine. ATLAS decided to build a telescope composed of two stations with four diamond pad detector modules each, placed symmetrically around the interaction point at z=±183.8 cm and r∼55 mm (η∼4.2). Equipped with fast electronics, it allows time-of-flight separation of events resulting from beam anomalies from normally occurring p-p interactions. In addition it will provide a coarse measurement of the LHC luminosity in ATLAS. Ten detector modules have been assembled and subjected to tests, from characterization of bare diamonds to source and beam tests. Preliminary results of a beam test in the CERN PS indicate a signal-to-noise ratio of 14±2.

  19. Report to users of ATLAS

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1995-05-01

    This report contains discussions in the following areas: status of the ATLAS accelerator; highlights of recent research at ATLAS; a concept for an advanced exotic beam facility based on ATLAS; the Program Advisory Committee; the ATLAS Executive Committee; and ATLAS and the ANL Physics Division on the World Wide Web

  20. ATLAS fast physics monitoring

    Indian Academy of Sciences (India)

    The ATLAS Collaboration has set up a framework to automatically process the rapidly growing dataset and produce performance and physics plots for the most interesting analyses. The system is designed to give fast feedback. The histograms are produced within hours of data reconstruction (2–3 days after data taking).

  1. Construction of monitored drift tube chambers for ATLAS end-cap muon spectrometer at IHEP (Protvino)

    CERN Document Server

    Bensinger, J; Borisov, A; Fakhrutdinov, R M; Goryatchev, S; Goryachev, V N; Gushchin, V; Hashemi, K S; Kojine, A; Kononov, A I; Larionov, A; Paramoshkina, E; Pilaev, A; Skvorodnev, N; Tchougouev, A; Wellenstein, H

    2002-01-01

    Trapezoidal-shaped Monitored Drift Tube (MDT) chambers will be used in the end-caps of the ATLAS muon spectrometer. The design and construction technology of such chambers at IHEP (Protvino) are presented. X-ray tomography results confirm the desired 20 μm precision of wire location in the chamber.

  2. Using Micromegas in ATLAS to Monitor the Luminosity

    CERN Document Server

    The ATLAS collaboration

    2013-01-01

    Five small prototype micromegas detectors were positioned in the ATLAS detector during LHC running at $\sqrt{s} = 8\, \mathrm{TeV}$. A $9\times 4.5\, \mathrm{cm^2}$ two-gap detector was placed in front of the electromagnetic calorimeter and four $9\times 10\, \mathrm{cm^2}$ detectors on the ATLAS Small Wheels, the first station of the forward muon spectrometer. The one attached to the calorimeter was exposed to interaction rates of about $70\,\mathrm{kHz/cm^2}$ at an ATLAS luminosity of $\mathcal{L}=5\times 10^{33}\,\mathrm{cm^{-2}s^{-1}}$, two orders of magnitude higher than the rates in the Small Wheel. We compare the currents drawn by the detector installed in front of the electromagnetic calorimeter with the luminosity measurement in the ATLAS experiment.
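    The current-to-luminosity comparison lends itself to a simple linear calibration; the sketch below (assumed numbers, not results from the note) fits a slope and pedestal to reference points and then reads a current back as a luminosity estimate.

      # Assumed reference points: luminosity in 1e33 cm^-2 s^-1, current in uA.
      lumis = [1.0, 2.0, 3.0, 4.0, 5.0]
      currents = [0.21, 0.40, 0.61, 0.79, 1.00]

      n = len(lumis)
      mean_l, mean_i = sum(lumis) / n, sum(currents) / n
      slope = sum((l - mean_l) * (c - mean_i) for l, c in zip(lumis, currents)) \
              / sum((l - mean_l) ** 2 for l in lumis)
      pedestal = mean_i - slope * mean_l

      def lumi_from_current(current_ua):
          """Invert the linear calibration to estimate instantaneous luminosity."""
          return (current_ua - pedestal) / slope

      print(round(lumi_from_current(0.50), 2))    # luminosity estimate for a 0.50 uA reading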

  3. ATLAS Virtual Visits bringing the world into the ATLAS control room

    CERN Document Server

    AUTHOR|(CDS)2051192; The ATLAS collaboration; Yacoob, Sahal

    2016-01-01

    ATLAS Virtual Visits is a project initiated in 2011 for the Education & Outreach program of the ATLAS Experiment at CERN. Its goal is to promote public appreciation of the LHC physics program and particle physics, in general, through direct dialogue between ATLAS physicists and remote audiences. A Virtual Visit is an IP-based videoconference, coupled with a public webcast and video recording, between ATLAS physicists and remote locations around the world, that typically include high school or university classrooms, Masterclasses, science fairs, or other special events, usually hosted by collaboration members. Over the past two years, more than 10,000 people, from all of the world’s continents, have actively participated in ATLAS Virtual Visits, with many more enjoying the experience from the publicly available webcasts and recordings. We present an overview of our experience and discuss potential development for the future.

  4. Upgrade of the ATLAS Monitored Drift Tube Frontend Electronics for the HL-LHC

    CERN Document Server

    Zhu, Junjie; The ATLAS collaboration

    2017-01-01

    The ATLAS monitored drift tube (MDT) chambers are the main component of the precision tracking system in the ATLAS muon spectrometer. The MDT system is capable of measuring the sagitta of muon tracks to an accuracy of 60 μm, which corresponds to a momentum accuracy of about 10% at pT=1 TeV. To cope with the large amount of data and the high event rate expected from the High-Luminosity LHC (HL-LHC) upgrade, ATLAS plans to use the MDT detector at the first trigger level to improve the muon transverse momentum resolution and reduce the trigger rate. The new MDT trigger and readout system will have an output event rate of 1 MHz and a latency of 6 μs at the first-level trigger. The signals from the MDT tubes are first processed by an Amplifier/Shaper/Discriminator (ASD) ASIC, and the binary differential signals output by the ASDs are then routed to the Time-to-Digital Converter (TDC) ASIC, where the arrival times of leading and trailing edges are digitized in a time bin of 0.78 ns, which leads to an RMS timing error of 0.25 n...
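    As a quick consistency check of the digitization figure (a textbook estimate, not a statement about the full ASD/TDC error budget), the RMS error of an ideal uniform quantizer is the bin width divided by the square root of twelve:

      import math
      print(round(0.78 / math.sqrt(12), 3))    # -> 0.225 ns, the same order as the quoted RMS error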

  5. ATLAS Tile Calorimeter time calibration, monitoring and performance

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00075913; The ATLAS collaboration

    2016-01-01

    The Tile Calorimeter (TileCal) is the hadronic calorimeter covering the central region of the ATLAS experiment at the LHC. This sampling device is made of plastic scintillating tiles alternated with iron plates, and its response is calibrated to the electromagnetic scale by means of several dedicated calibration systems. Accurate time calibration is important for the energy reconstruction and non-collision background removal, as well as for specific physics analyses. The initial time calibration with so-called splash events and the subsequent fine-tuning with collision data are presented. The monitoring of the time calibration with the laser system and physics collision data is discussed, as well as the corrections for sudden changes performed before the recorded data are processed for physics analyses. Finally, the time resolution as measured with jets and isolated muons is presented.

  6. Future of the ATLAS heavy ion program

    CERN Document Server

    ATLAS-Collaboration, The; The ATLAS collaboration

    2012-01-01

    The primary goal of the heavy ion program at the LHC is to study the properties of deconfined strongly interacting matter, often referred to as "quark-gluon plasma" (QGP), created in ultra-relativistic nuclear collisions. That matter is found to be strongly coupled, with a viscosity to entropy ratio near a conjectured quantum lower bound. ATLAS foresees a rich program of studies using jets, Upsilons, measurements of global event properties and measurements in proton-nucleus collisions that will measure fundamental transport properties of the QGP, probe the nature of the interactions between constituents of the QGP, elucidate the origin of the strong coupling, and provide insight on the initial state of nuclear collisions. The heavy ion program through the third long shutdown should provide one inverse nb of 5.5 TeV Pb+Pb data. That data will provide more than an order of magnitude increase in statistics over currently available data for high-pT observables such as gamma-jet and Z-jet pairs. However, potentia...

  7. The monitoring system of the ATLAS muon spectrometer read out driver

    CERN Document Server

    Capasso, Luciano

    My PhD work focuses on the Read Out Driver (ROD) of the ATLAS Muon Spectrometer. The ROD is a VME64x board, designed around two Xilinx Virtex-II FPGAs and an ARM7 microcontroller, and is located off-detector in a counting room of the ATLAS cavern at CERN. The readout data of the ATLAS RPC muon spectrometer are collected by the front-end electronics and transferred via optical fibres to the ROD boards in the counting room. The ROD arranges all the data fragments of a sector of the spectrometer into a single event. This is done by the Event Builder Logic, a cluster of Finite State Machines that parses the fragments, checks their syntax and builds an event containing all the sector data. In this presentation I describe the Builder Monitor, developed to analyze the Event Builder timing performance. It is designed around a 32-bit soft-core microprocessor embedded in the same FPGA hosting the Builder logic. This approach makes it possible to track the algorithm execution in the field. ...

  8. Operational experience with the ATLAS Pixel Detector

    CERN Document Server

    Ince, T; The ATLAS collaboration

    2012-01-01

    The ATLAS Pixel Detector is the innermost element of the ATLAS experiment at the Large Hadron Collider at CERN, providing high-resolution measurements of charged particle tracks in the high radiation environment close to the collision region. This capability is vital for the identification and measurement of proper decay times of long-lived particles such as b-hadrons, and thus vital for the ATLAS physics program. The detector provides hermetic coverage with three cylindrical layers and three layers of forward and backward pixel detectors. It consists of approximately 80 million pixels that are individually read out via chips bump-bonded to 1744 n-in-n silicon substrates. In this paper, results from the successful operation of the Pixel Detector at the LHC will be presented, including monitoring, calibration procedures, timing optimization and detector performance. The detector performance is excellent: 96.2% of the pixels are operational, noise occupancy and hit efficiency exceed the design specification, an...

  9. Operational experience of the ATLAS Pixel detector

    CERN Document Server

    Hirschbuehl, D; The ATLAS collaboration

    2011-01-01

    The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN, providing high-resolution measurements of charged particle tracks in the high radiation environment close to the collision region. This capability is vital for the identification and measurement of proper decay times of long-lived particles such as b-hadrons, and thus vital for the ATLAS physics program. The detector provides hermetic coverage with three cylindrical layers and three layers of forward and backward pixel detectors. It consists of approximately 80 million pixels that are individually read out via chips bump-bonded to 1744 n-in-n silicon substrates. In this talk, results from the successful operation of the Pixel Detector at the LHC will be presented, including monitoring, calibration procedures, timing optimization and detector performance. The detector performance is excellent: 97.5% of the pixels are operational, noise occupancy and hit efficiency exceed the design specification, an...

  10. Operational experience of the ATLAS Pixel Detector

    CERN Document Server

    Marcisovsky, M; The ATLAS collaboration

    2011-01-01

    The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN, providing high-resolution measurements of charged particle tracks in the high radiation environment close to the collision region. This capability is vital for the identification and measurement of proper decay times of long-lived particles such as b-hadrons, and thus vital for the ATLAS physics program. The detector provides hermetic coverage with three cylindrical layers and three layers of forward and backward pixel detectors. It consists of approximately 80 million pixels that are individually read out via chips bump-bonded to 1744 n-in-n silicon substrates. In this talk, results from the successful operation of the Pixel Detector at the LHC will be presented, including monitoring, calibration procedures, timing optimization and detector performance. The detector performance is excellent: 97.5% of the pixels are operational, noise occupancy and hit efficiency exceed the design specification, an...

  11. Report to users of Atlas

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1996-06-01

    This report contains the following topics: Status of the ATLAS Accelerator; Highlights of Recent Research at ATLAS; Program Advisory Committee; ATLAS User Group Executive Committee; FMA Information Available On The World Wide Web; Conference on Nuclear Structure at the Limits; and Workshop on Experiments with Gammasphere at ATLAS

  12. Data Quality Monitoring Display for ATLAS experiment

    CERN Document Server

    Ilchenko, Y; The ATLAS collaboration; Corso-Radu, A; Hadavand, H; Kolos, S; Slagle, K; Taffard, A

    2009-01-01

    The start of collisions at the LHC brings with it much excitement and many unknowns. It’s essential at this point in the experiment to be prepared with user-friendly tools to quickly and efficiently determine the quality of the data. Easy visualization of data for the shift crew and experts is one of the key factors in the data quality assessment process. The Data Quality Monitoring Display (DQMD) is a visualization tool for the automatic data quality assessment of the ATLAS experiment. It is the interface through which the shift crew and experts can validate the quality of the data being recorded or processed, be warned of problems related to data quality, and identify the origin of such problems. This tool allows great flexibility for visualization of results from automatic histogram checking through custom algorithms, the configuration used to run the algorithms, and histograms used for the check, with an overlay of reference histograms when applicable. The display also supports visualization of the resu...

  13. The ATLAS Wide-Range Database & Application Monitoring

    CERN Document Server

    Vasileva, Petya Tsvetanova; The ATLAS collaboration

    2018-01-01

    In HEP experiments at the LHC, database applications often become complex, reflecting the ever more demanding requirements of the researchers. The ATLAS experiment has several Oracle DB clusters with over 216 database schemas, each with its own set of database objects. To effectively monitor them, we designed a modern and portable application with exceptionally good characteristics. Some of these include: a concise view of the most important DB metrics; top SQL statements based on CPU, executions, block reads, etc.; volume growth plots per schema and DB object type; a database jobs section with signaling for problematic ones; and in-depth analysis in case of contention on data or processes. This contribution also describes the technical aspects of the implementation. The project can be separated into three independent layers. The first layer consists of highly optimized database objects hiding all complicated calculations. The second layer is a server providing REST access to the underlying database backend. The th...

  14. The ATLAS beam pick-up based timing system

    International Nuclear Information System (INIS)

    Ohm, C.; Pauly, T.

    2010-01-01

    The ATLAS BPTX stations are composed of electrostatic button pick-up detectors, located 175 m away along the beam pipe on both sides of ATLAS. The pick-ups are installed as a part of the LHC beam instrumentation and used by ATLAS for timing purposes. The usage of the BPTX signals in ATLAS is twofold: they are used both in the trigger system and for LHC beam monitoring. The BPTX signals are discriminated with a constant-fraction discriminator to provide a Level-1 trigger when a bunch passes through ATLAS. Furthermore, the BPTX detectors are used by a stand-alone monitoring system for the LHC bunches and timing signals. The BPTX monitoring system measures the phase between collisions and clock with a precision better than 100 ps in order to guarantee a stable phase relationship for optimal signal sampling in the sub-detector front-end electronics. In addition to monitoring this phase, the properties of the individual bunches are measured and the structure of the beams is determined. On September 10, 2008, the first LHC beams reached the ATLAS experiment. During this period with beam, the ATLAS BPTX system was used extensively to time in the read-out of the sub-detectors. In this paper, we present the performance of the BPTX system and its measurements of the first LHC beams.
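    The phase measurement described above can be illustrated with a minimal sketch (not the BPTX monitoring code; the arrival times are invented and wrap-around near a clock edge is ignored): each bunch arrival time is folded into the 25 ns clock period and the phases are averaged.

      BUNCH_PERIOD_NS = 25.0

      def mean_phase(arrival_times_ns):
          """Average phase of bunch arrivals with respect to the bunch clock."""
          phases = [t % BUNCH_PERIOD_NS for t in arrival_times_ns]
          return sum(phases) / len(phases)

      # Assumed arrival times, all sitting roughly 3.1 ns after a clock edge.
      arrivals = [3.12, 28.09, 53.11, 78.10, 103.08]
      print(round(mean_phase(arrivals), 2))    # -> 3.1 ns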

  15. Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools

    CERN Document Server

    Sanchez, Arturo; The ATLAS collaboration

    2015-01-01

    We explore the potential of current web applications to create online interfaces that allow the visualization, interaction and real physics cut-based analysis and monitoring of processes through a web browser. The project consists in the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analysis on a browser, reading and using real data and official Monte-Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our study case is the current cut-based $H \rightarrow ZZ \rightarrow llqq$ analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online.

  16. Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools

    CERN Document Server

    Pineda, A S

    2015-01-01

    We explore the potential of current web applications to create online interfaces that allow the visualization, interaction and real cut-based physics analysis and monitoring of processes through a web browser. The project consists in the initial development of web- based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analysis on a browser, reading and using real data and official Monte- Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our study case is the current cut-based H → ZZ → llqq analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online.

  17. Networks in ATLAS

    Science.gov (United States)

    McKee, Shawn; ATLAS Collaboration

    2017-10-01

    Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure, many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network-related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp up their need for similar networking, it becomes less clear that this situation will continue unchanged. What this means for ATLAS in particular needs to be understood. ATLAS has evolved its computing model since the LHC started, based upon its experience with using globally distributed resources. The most significant theme of those changes has been increased reliance upon, and use of, its networks. We will report on a number of networking initiatives in ATLAS, including participation in the global perfSONAR network monitoring and measuring efforts of WLCG and OSG, the collaboration with the LHCOPN/LHCONE effort, the integration of network awareness into PanDA, the use of the evolving ATLAS analytics framework to better understand our networks and the changes in our DDM system to allow remote access to data. We will also discuss new efforts underway that are exploring the inclusion and use of software defined networks (SDN) and how ATLAS might benefit from: • Orchestration and optimization of distributed data access and data movement. • Better control of workflows, end to end. • Enabling prioritization of time-critical vs normal tasks. • Improvements in the efficiency of resource usage.

  18. Occupational monitoring program

    International Nuclear Information System (INIS)

    Sordi, G.-M.A.A.

    1988-10-01

    After presenting the principal aim of a monitoring program, it describes the philosophy currently in force in our country and the new international one. It presents the different types of monitoring and their classification according to function. The functions are dealt with separately for workplace and individual monitoring. It also shows that individual monitoring can be used to assess workplace conditions. It discusses the models that can be introduced to derive the quantities used in the interpretation of results from the quantities used in the measurements, and gives an example. Finally, it discusses the supplementary functions of monitoring, such as the reassessment of monitoring programs, the selection of controlled areas and the extent and form of medical supervision. (author) [pt

  19. ATLAS trigger operations: Monitoring with “Xmon” rate prediction system

    CERN Document Server

    Aukerman, Andrew Todd; The ATLAS collaboration

    2017-01-01

    We present the operations and online monitoring with the “Xmon” rate prediction system for the trigger system at the ATLAS Experiment. A two-level trigger system reduces the LHC’s bunch-crossing rate, 40 MHz at design capacity, to an average recording rate of about 1 kHz, while maintaining a high efficiency of selecting events of interest. The Xmon system uses the luminosity value to predict trigger rates that are, in turn, compared with incoming rates. The predictions rely on past runs to parameterize the luminosity dependency of the event rate for a trigger algorithm. Some examples are given to illustrate the performance of the tool during recent operations.
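    A hedged sketch of the rate-prediction idea (not the Xmon implementation; the reference points and tolerance are invented): a trigger's rate is parameterized as a function of luminosity from past runs, and incoming rates are compared against the prediction.

      def fit_linear(lumis, rates):
          """Least-squares fit rate = a * luminosity + b from reference runs."""
          n = len(lumis)
          ml, mr = sum(lumis) / n, sum(rates) / n
          a = sum((l - ml) * (r - mr) for l, r in zip(lumis, rates)) \
              / sum((l - ml) ** 2 for l in lumis)
          return a, mr - a * ml

      # Assumed reference points: luminosity in 1e33 cm^-2 s^-1, rate in Hz.
      a, b = fit_linear([4.0, 6.0, 8.0, 10.0], [210.0, 300.0, 395.0, 500.0])

      def check(lumi, observed_rate, tolerance=0.10):
          """Return the predicted rate and whether the observation is within tolerance."""
          predicted = a * lumi + b
          return predicted, abs(observed_rate - predicted) <= tolerance * predicted

      print(check(7.0, 352.0))    # close to the prediction
      print(check(7.0, 500.0))    # flagged as anomalous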

  20. Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools

    CERN Document Server

    Sanchez, Arturo; The ATLAS collaboration

    2015-01-01

    We explore the potential of current web applications to create online interfaces that allow the visualization, interaction and real physics cut-based analysis and monitoring of processes through a web browser. The project consists in the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analysis on a browser, reading and using real data and official Monte-Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our study case is the current cut-based H->ZZ->llqq analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online; this presentation describes the tests, plans and future upgrades.

  1. Data Federation Strategies for ATLAS using XRootD

    CERN Document Server

    Gardner, R; The ATLAS collaboration; Duckeck, G; Elmsheuser, J; Hanushevski, A; Hönig, F; Iven, J; Legger, F; Vukotic, I; Yang, W

    2014-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the w...

  2. Data Federation Strategies for ATLAS using XRootD

    CERN Document Server

    Gardner, R; The ATLAS collaboration; Duckeck, G; Elmsheuser, J; Hanushevski, A; Hönig, F; Iven, J; Legger, F; Vukotic, I; Yang, W

    2013-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the w...

  3. Meteorological Monitoring Program

    International Nuclear Information System (INIS)

    Hancock, H.A. Jr.; Parker, M.J.; Addis, R.P.

    1994-01-01

    The purpose of this technical report is to provide a comprehensive, detailed overview of the meteorological monitoring program at the Savannah River Site (SRS) near Aiken, South Carolina. The principal function of the program is to provide current, accurate meteorological data as input for calculating the transport and diffusion of any unplanned release of an atmospheric pollutant. The report is recommended for meteorologists, technicians, or any personnel who require an in-depth understanding of the meteorological monitoring program

  4. Meteorological Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    Hancock, H.A. Jr. [ed.; Parker, M.J.; Addis, R.P.

    1994-09-01

    The purpose of this technical report is to provide a comprehensive, detailed overview of the meteorological monitoring program at the Savannah River Site (SRS) near Aiken, South Carolina. The principal function of the program is to provide current, accurate meteorological data as input for calculating the transport and diffusion of any unplanned release of an atmospheric pollutant. The report is recommended for meteorologists, technicians, or any personnel who require an in-depth understanding of the meteorological monitoring program.

  5. Wind Energy Resource Atlas of Mongolia

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, D; Schwartz, M; Scott, G.; Haymes, S.; Heimiller, D.; George, R.

    2001-08-27

    The United States Department of Energy (DOE) and the United States Agency for International Development (USAID) sponsored a project to help accelerate the large-scale use of wind energy technologies in Mongolia through the development of a wind energy resource atlas of Mongolia. DOE's National Renewable Energy Laboratory (NREL) administered and conducted this project in collaboration with USAID and Mongolia. The Mongolian organizations participating in this project were the Scientific, Production, and Trade Corporation for Renewable Energy (REC) and the Institute of Meteorology and Hydrology (IMH). The primary goals of the project were to develop detailed wind resource maps for all regions of Mongolia for a comprehensive wind resource atlas, and to establish a wind-monitoring program to identify prospective sites for wind energy projects and help validate some of the wind resource estimates.

  6. Large-Scale Production of Monitored Drift Tube Chambers for the ATLAS Muon Spectrometer

    CERN Document Server

    Bauer, F.; Kortner, O; Kroha, H; Manz, A; Mohrdieck, S; Richter, R; Zhuravlov, V

    2016-01-01

    Precision drift tube chambers with a sense wire positioning accuracy of better than 20 microns are under construction for the ATLAS muon spectrometer. 70% of the 88 large chambers for the outermost layer of the central part of the spectrometer have been assembled. Measurements during chamber construction of the positions of the sense wires and of the sensors for the optical alignment monitoring system demonstrate that the requirements for the mechanical precision of the chambers are fulfilled.

  7. Evolution of the ATLAS Distributed Computing during the LHC long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2013-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  8. LUCID A Cherenkov Tube Based Detector for Monitoring the ATLAS Experiment Luminosity

    CERN Document Server

    Sbrizzi, A

    2007-01-01

    The LUCID (LUminosity Cherenkov Integrating Detector) apparatus is composed of two symmetric arms deployed at about 17 m from the ATLAS interaction point. The purpose of this detector, which will be installed in January 2008, is to monitor the luminosity delivered by the LHC machine to the ATLAS experiment. An absolute luminosity calibration is needed and it will be provided by a Roman Pot type detector with the two arms placed at about 240 m from the interaction point. Each arm of the LUCID detector is based on an aluminum vessel containing 20 Cherenkov tubes, 15 mm in diameter and 1500 mm in length, filled with C4F10 radiator gas at 1.5 bar. The Cherenkov light generated by charged particles above threshold is collected by photomultiplier tubes (PMTs) placed directly at the tube ends. The challenging aspect of this detector is its readout in an environment characterized by the high dose of radiation (about 0.7 Mrad/year at 10^33 cm^-2 s^-1) it must withstand. In order to fulfill these radiation hardness requirem...

  9. Monitoring the injured brain: registered, patient specific atlas models to improve accuracy of recovered brain saturation values

    Science.gov (United States)

    Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J. E.; Su, Zhangjie; Dehghani, Hamid

    2015-07-01

    The subject of superficial contamination and signal origins remains a widely debated topic in the field of Near Infrared Spectroscopy (NIRS), yet the concept of using the technology to monitor an injured brain, in a clinical setting, poses additional challenges concerning the quantitative accuracy of recovered parameters. Using high density diffuse optical tomography probes, quantitatively accurate parameters from different layers (skin, bone and brain) can be recovered from subject specific reconstruction models. This study assesses the use of registered atlas models for situations where subject specific models are not available. Data simulated from subject specific models were reconstructed using the 8 registered atlas models implementing a regional (layered) parameter recovery in NIRFAST. A 3-region recovery based on the atlas model yielded recovered brain saturation values which were accurate to within 4.6% (percentage error) of the simulated values, validating the technique. The recovered saturations in the superficial regions were not quantitatively accurate. These findings highlight differences in superficial (skin and bone) layer thickness between the subject and atlas models. This layer thickness mismatch was propagated through the reconstruction process decreasing the parameter accuracy.

  10. Integrating Networking into ATLAS

    CERN Document Server

    Mc Kee, Shawn Patrick; The ATLAS collaboration

    2018-01-01

    Networking is foundational to the ATLAS distributed infrastructure and there are many ongoing activities related to networking both within and outside of ATLAS. We will report on the progress in a number of areas exploring ATLAS's use of networking and our ability to monitor the network, analyze metrics from the network, and tune and optimize application and end-host parameters to make the most effective use of the network. Specific topics will include work on Open vSwitch for production systems, network analytics, FTS testing and tuning, and network problem alerting and alarming.

  11. Remote Monitoring Transparency Program

    International Nuclear Information System (INIS)

    Sukhoruchkin, V.K.; Shmelev, V.M.; Roumiantsev, A.N.

    1996-01-01

    The objective of the Remote Monitoring Transparency Program is to evaluate and demonstrate the use of remote monitoring technologies to advance nonproliferation and transparency efforts that are currently being developed by Russia and the United States without compromising the national security of the participating parties. Under a lab-to-lab transparency contract between Sandia National Laboratories (SNL) and the Kurchatov Institute (KI RRC), the Kurchatov Institute will analyze technical and procedural aspects of the application of remote monitoring as a transparency measure to monitor inventories of direct-use HEU and plutonium (e.g., material recovered from dismantled nuclear weapons). A goal of this program is to assist a broad range of political and technical experts in learning more about remote monitoring technologies that could be used to implement nonproliferation, arms control, and other security and confidence building measures. Specifically, this program will: (1) begin integrating Russian technologies into remote monitoring systems; (2) develop remote monitoring procedures that will assist in the application of remote monitoring techniques to monitor inventories of HEU and Pu from dismantled nuclear weapons; and (3) conduct a workshop to review remote monitoring fundamentals, demonstrate an integrated US/Russian remote monitoring system, and discuss the impacts that remote monitoring will have on the national security of participating countries.

  12. Automating ATLAS Computing Operations using the Site Status Board

    CERN Document Server

    Andreeva, J; The ATLAS collaboration; Campana, S; Di Girolamo, A; Espinal Curull, X; Gayazov, S; Magradze, E; Nowotka, MM; Rinaldi, L; Saiz, P; Schovancova, J; Stewart, GA; Wright, M

    2012-01-01

    The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment intensively uses SSB for the distributed computing shifts, for estimating data processing and data transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities, in case of potential problems. ATLAS SSB provides a real-time aggregated monitoring view and keeps the history of the monitoring metrics. Based on this history, usability of a site from the perspective of ATLAS is calculated. The presentation will describe how SSB is integrated in the ATLAS operations and computing infrastructure and will cover implementation details of the ATLAS SSB sensors and alarm system, based on the information in SSB. It will demonstrate the positive impact of the use of SS...
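
The abstract above does not spell out how site usability is derived from the metric history; the following Python sketch illustrates one plausible history-based calculation. The metric statuses, the 24-hour window and the 80% exclusion threshold are assumptions made for illustration, not the actual ATLAS SSB configuration.

```python
# Hypothetical sketch of a history-based site usability calculation,
# loosely following the idea described above; statuses, the window and
# the 80% exclusion threshold are illustrative assumptions, not ATLAS values.
from datetime import datetime, timedelta

# (timestamp, status) samples for one site, e.g. collected by SSB sensors
history = [
    (datetime(2012, 5, 1, h), "OK" if h not in (6, 7) else "ERROR")
    for h in range(24)
]

def usability(samples, window=timedelta(hours=24)):
    """Fraction of samples within the window that report an OK status."""
    if not samples:
        return 0.0
    latest = max(t for t, _ in samples)
    recent = [s for t, s in samples if latest - t <= window]
    return sum(1 for s in recent if s == "OK") / len(recent)

u = usability(history)
print(f"site usability over last 24h: {u:.0%}")
# A site could then be auto-excluded when usability drops below a threshold:
print("exclude site" if u < 0.80 else "keep site in production")
```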

  13. Development of a time-to-digital converter ASIC for the upgrade of the ATLAS Monitored Drift Tube detector

    Science.gov (United States)

    Wang, Jinhong; Liang, Yu; Xiao, Xiong; An, Qi; Chapman, John W.; Dai, Tiesheng; Zhou, Bing; Zhu, Junjie; Zhao, Lei

    2018-02-01

    The upgrade of the ATLAS muon spectrometer for the high-luminosity LHC requires new trigger and readout electronics for various elements of the detector. We present the design of a time-to-digital converter (TDC) ASIC prototype for the ATLAS Monitored Drift Tube (MDT) detector. The chip was fabricated in a GlobalFoundries 130 nm CMOS technology. Studies indicate that its timing and power dissipation characteristics meet the design specifications, with a timing bin variation of ±40 ps for all 48 TDC slices and a power dissipation of about 6.5 mW per slice.

  14. Puna Geothermal Venture Hydrologic Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    None

    1990-04-01

    This document provides the basis for the Hydrologic Monitoring Program (HMP) for the Puna Geothermal Venture. The HMP is complementary to two additional environmental compliance monitoring programs also being submitted by Puna Geothermal Venture (PGV) for their proposed activities at the site. The other two programs are the Meteorology and Air Quality Monitoring Program (MAQMP) and the Noise Monitoring Program (NMP), being submitted concurrently.

  15. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    International Nuclear Information System (INIS)

    Potekhin, M

    2012-01-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R and D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic loads.
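
The record above does not give the actual schema; as a hedged illustration of the kind of time-bucketed indexing such a Cassandra-based store might use, the sketch below relies on the DataStax cassandra-driver with an invented keyspace and table layout (not the real PanDA monitoring schema).

```python
# Illustrative sketch only: keyspace, table and column names are assumptions,
# not the actual PanDA monitoring schema.  Requires a reachable Cassandra node
# and the DataStax driver (pip install cassandra-driver).
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS panda_mon_demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("panda_mon_demo")

# Partition by (site, hour bucket) so that queries for a site and time window
# hit a bounded number of partitions -- a common indexing pattern for
# time-series monitoring data.
session.execute("""
    CREATE TABLE IF NOT EXISTS jobs_by_site_hour (
        site text, hour_bucket text, job_id bigint,
        status text, walltime int,
        PRIMARY KEY ((site, hour_bucket), job_id)
    )
""")

session.execute(
    "INSERT INTO jobs_by_site_hour (site, hour_bucket, job_id, status, walltime) "
    "VALUES (%s, %s, %s, %s, %s)",
    ("CERN-PROD", "2011-06-15T14", 123456789, "finished", 5400),
)

rows = session.execute(
    "SELECT job_id, status FROM jobs_by_site_hour "
    "WHERE site=%s AND hour_bucket=%s",
    ("CERN-PROD", "2011-06-15T14"),
)
for row in rows:
    print(row.job_id, row.status)
cluster.shutdown()
```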

  16. Evolution of the ATLAS Distributed Computing system during the LHC Long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  17. Atlas C++ Coding Standard Specification

    CERN Document Server

    Albrand, S; Barberis, D; Bosman, M; Jones, B; Stavrianakou, M; Arnault, C; Candlin, D; Candlin, R; Franck, E; Hansl-Kozanecka, Traudl; Malon, D; Qian, S; Quarrie, D; Schaffer, R D

    2001-01-01

    This document defines the ATLAS C++ coding standard, that should be adhered to when writing C++ code. It has been adapted from the original "PST Coding Standard" document (http://pst.cern.ch/HandBookWorkBook/Handbook/Programming/programming.html) CERN-UCO/1999/207. The "ATLAS standard" comprises modifications, further justification and examples for some of the rules in the original PST document. All changes were discussed in the ATLAS Offline Software Quality Control Group and feedback from the collaboration was taken into account in the "current" version.

  18. High-Performance Scalable Information Service for the ATLAS Experiment

    International Nuclear Information System (INIS)

    Kolos, S; Boutsioukis, G; Hauser, R

    2012-01-01

    The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance, scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information which is constantly updated, with update intervals varying from a second to a few tens of seconds. The IS provides access to any information item on request as well as distributing notifications to all the information subscribers. In the latter case, IS subscribers receive information within a few milliseconds after it was updated. The IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring, analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate a subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information
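
The IS API itself is not reproduced here; the minimal in-process sketch below only illustrates the pattern the abstract describes (access on request plus notification of subscribers on update). Class and method names are invented and are not the TDAQ IS interface.

```python
# Minimal in-process sketch of the publish/subscribe pattern described above.
# This is NOT the TDAQ IS API; names are invented to illustrate
# "get on request" plus "notify subscribers on update".
from collections import defaultdict

class InfoService:
    def __init__(self):
        self._items = {}                       # name -> latest value
        self._subscribers = defaultdict(list)  # name -> callbacks

    def publish(self, name, value):
        """Update an information item and notify all its subscribers."""
        self._items[name] = value
        for callback in self._subscribers[name]:
            callback(name, value)

    def get(self, name):
        """Access any information item on request."""
        return self._items[name]

    def subscribe(self, name, callback):
        self._subscribers[name].append(callback)

svc = InfoService()
svc.subscribe("HLT.farm.histogram_count",
              lambda n, v: print(f"update: {n} = {v}"))
svc.publish("HLT.farm.histogram_count", 123000)   # triggers the callback
print(svc.get("HLT.farm.histogram_count"))        # access on request
```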

  19. ATLAS-AWS

    International Nuclear Information System (INIS)

    Gehrcke, Jan-Philip; Stonjek, Stefan; Kluth, Stefan

    2010-01-01

    We show how the ATLAS offline software is ported to the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform Scientific Linux 4 (SL4). Then an instance of the SLC4 AMI is started on EC2 and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring and stopping jobs and retrieving job output from S3 is controlled from a client machine using python scripts implementing the Amazon EC2/S3 API via the boto library working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
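
The original scripts used the classic boto library; as a hedged modern equivalent of the launch / run / export-to-S3 / terminate cycle described above, the sketch below uses boto3 with placeholder AMI, key, bucket and file names.

```python
# Hedged sketch of the launch / upload-output / terminate cycle described
# above, written with the modern boto3 library rather than the classic boto
# used at the time.  AMI id, key name, bucket and file names are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="eu-west-1")
s3 = boto3.client("s3", region_name="eu-west-1")

# Start one instance of a (hypothetical) SL4-based ATLAS AMI.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI id
    InstanceType="m1.large",
    MinCount=1, MaxCount=1,
    KeyName="atlas-demo-key",
)
instance = instances[0]
instance.wait_until_running()
print("running:", instance.id)

# ... the job would run inside the instance here; afterwards its output
# (placeholder file name) is exported to S3 before the instance is terminated.
s3.upload_file("job_output.root", "atlas-demo-bucket", "jobs/job_output.root")

instance.terminate()
instance.wait_until_terminated()
```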

  20. Monitoring and controlling ATLAS data management: The Rucio web user interface

    CERN Document Server

    Lassnig, Mario; The ATLAS collaboration; Barisits, Martin-Stefan; Serfon, Cedric; Vigne, Ralph; Garonne, Vincent

    2015-01-01

    The monitoring and controlling interfaces of the previous data management system DQ2 followed the evolutionary requirements and needs of the ATLAS collaboration. The new system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2, and the increased volume of managed information. This interface encompasses both a monitoring and controlling component, and allows easy integration for user-generated views. The interface follows three design principles. First, the collection and storage of data from internal and external systems is asynchronous to reduce latency. This includes the use of technologies like ActiveMQ or Nagios. Second, the analysis of the data into information is done in a massively parallel fashion due to its volume, using a combined approach with an Oracle database and Hadoop MapReduce. Third, the sharing of the information does not distinguish between human and programmatic access, making it easy to access selected parts of the information both in constrained frontends like ...
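
As a hedged illustration of asynchronous collection from an ActiveMQ broker, the sketch below uses the stomp.py client with an invented broker address, credentials, queue name and message format; it is not the actual Rucio configuration.

```python
# Illustrative sketch of asynchronously collecting messages from an ActiveMQ
# broker over STOMP, as mentioned above.  Broker address, credentials, queue
# name and the JSON payload format are placeholders, not Rucio's setup.
# Requires: pip install stomp.py
import json
import time

import stomp

class MonitoringListener(stomp.ConnectionListener):
    # stomp.py >= 5 signature: on_message receives a single frame object.
    def on_message(self, frame):
        # Store the event for later (massively parallel) analysis;
        # here we just print one assumed field.
        event = json.loads(frame.body)
        print("received event:", event.get("event_type"))

conn = stomp.Connection([("activemq.example.org", 61613)])
conn.set_listener("rucio-demo", MonitoringListener())
conn.connect("demo_user", "demo_password", wait=True)
conn.subscribe(destination="/queue/rucio.events", id="1", ack="auto")

time.sleep(60)   # consume asynchronously for a while
conn.disconnect()
```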

  1. Monitoring and controlling ATLAS data management: The Rucio web user interface

    CERN Document Server

    Lassnig, Mario; The ATLAS collaboration; Vigne, Ralph; Barisits, Martin-Stefan; Garonne, Vincent; Serfon, Cedric

    2015-01-01

    The monitoring and controlling interfaces of the previous data management system DQ2 followed the evolutionary requirements and needs of the ATLAS collaboration. The new data management system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2, and the increased volume of managed information. This interface encompasses both a monitoring and controlling component, and allows easy integration for user-generated views. The interface follows three design principles. First, the collection and storage of data from internal and external systems is asynchronous to reduce latency. This includes the use of technologies like ActiveMQ or Nagios. Second, the analysis of the data into information is done in a massively parallel fashion due to its volume, using a combined approach with an Oracle database and Hadoop MapReduce. Third, the sharing of the information does not distinguish between human and programmatic access, making it easy to access selected parts of the information both in constrained...

  2. 1988 Monitoring Activities Review (MAR) of the environmental monitoring program

    International Nuclear Information System (INIS)

    1989-03-01

    The EG&G Idaho Environmental Monitoring (EM) Unit is responsible for coordinating and conducting environmental measurements of radioactive and hazardous contaminants around facilities operated by EG&G Idaho. The EM Unit has several broad program objectives, which include complying with regulatory standards and developing a basis for estimating future impacts of operations at EG&G Idaho facilities. To improve program planning and to provide bases for technical improvement of the monitoring program, the EG&G Environmental Monitoring organization has regularly used the Monitoring Activities Review (MAR) process since 1982. Each MAR is conducted by a committee of individuals selected for their experience in the various types of monitoring performed by the EM organization. Previous MAR studies have focused on procedures for all currently monitored media except biota. Biotic monitoring was initiated following the last MAR. This report focuses on all currently monitored media, and includes the first review of biotic monitoring. The review of biotic monitoring has been conducted at a level of detail consistent with initial MAR reports for other parts of the Waste Management Program Facilities Environmental Monitoring Program. The review of the biotic monitoring activities is presented in Section 5.5 of this report. 21 refs., 7 figs., 4 tabs

  3. ATLAS Tile Calorimeter Readout Electronics Upgrade Program for the High Luminosity LHC

    CERN Document Server

    Cerqueira, A S

    2013-01-01

    The Tile Calorimeter (TileCal) is the hadronic calorimeter covering the most central region of the ATLAS experiment at the LHC. The TileCal readout consists of about 10000 channels. The ATLAS upgrade program is divided into three phases: Phase~0 occurs during 2013-2014, Phase~1 during 2018-2019 and finally Phase~2, which is foreseen for 2022-2023, after which the peak luminosity will reach 5-7 x 10$^{34}$ cm$^{-2}$s$^{-1}$ (HL-LHC). The main TileCal upgrade is focused on the Phase~2 period. The upgrade aims at replacing the majority of the on- and off-detector electronics so that all calorimeter signals are directly digitized and sent to the off-detector electronics in the counting room. All new electronics must be able to cope with the increased radiation levels. An ambitious upgrade development program is pursued to study different electronics options. Three options are presently being investigated for the front-end electronic upgrade. The first option is an improved version of the present system built using comm...

  4. The monitoring and calibration Web system of the ATLAS hadronic calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Maidantchik, Carmen; Gomes, Andressa Andrea Sivollela; Marroquim, Fernando [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil)

    2011-07-01

    Full text: The scintillating tile hadronic calorimeter (TileCal) of the ATLAS detector measures the energy of the particles produced in a collision. The calorimetry system was designed to absorb the energy of the particles that cross the detector and is composed of three barrels, each equally divided into 64 modules. The ionizing particles that cross the tiles induce the production of light, whose intensity is proportional to the energy deposited by the fragment. The produced light propagates through the tiles towards the edges, where it is absorbed and carried to the photomultiplier tubes (PMTs), also known as electronic readout channels. Each module combines up to 45 PMTs. For each run, the reconstruction process starts with a data analysis that can comprise different levels of information granularity, down to the level of individual PMTs. Following this phase, the Data Quality Monitoring Framework (DQMF) system automatically generates quality indicators associated with the channels. Depending on the configuration registered in the DQMF, the channel status can be automatically defined as good, affected or bad. The status of each module is defined by the percentage of good, affected or bad channels. At this point, the analysis of modules allows the identification of problematic ones through the examination of plots that are automatically generated during the data reconstruction stage. Then, the performance of a module is analysed over a time period that encompasses different types of runs. In this last step, the list of problematic channels can be modified through the insertion or exclusion of PMTs, as in the case when a channel is substituted. Additionally, during the whole calorimeter operation, it is fundamental to identify the electronic channels that are active, dead (not working), noisy, or saturating in the signal digitisation process. The Monitoring and Calibration Web System (MCWS) was
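
The DQMF criteria themselves are configuration-dependent and not given in the record above; the sketch below only illustrates the described roll-up of channel statuses into a module status, with invented thresholds.

```python
# Sketch of the status roll-up described above: channel statuses are combined
# into a module status based on the fraction of good/affected/bad channels.
# The 10% / 1% thresholds are invented for illustration; the real DQMF
# configuration defines its own criteria.
from collections import Counter

def module_status(channel_statuses, bad_limit=0.10, affected_limit=0.01):
    counts = Counter(channel_statuses)
    total = sum(counts.values())
    bad_frac = counts["bad"] / total
    affected_frac = counts["affected"] / total
    if bad_frac > bad_limit:
        return "bad"
    if bad_frac > 0 or affected_frac > affected_limit:
        return "affected"
    return "good"

# A TileCal module combines up to 45 PMTs (channels); mark two as problematic.
channels = ["good"] * 43 + ["affected", "bad"]
print(module_status(channels))   # -> "affected"
```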

  5. The monitoring and calibration Web system of the ATLAS hadronic calorimeter

    International Nuclear Information System (INIS)

    Maidantchik, Carmen; Gomes, Andressa Andrea Sivollela; Marroquim, Fernando

    2011-01-01

    Full text: The scintillating tile hadronic calorimeter (TileCal) of the ATLAS detector measures the energy of the particles produced in a collision. The calorimetry system was designed to absorb the energy of the particles that cross the detector and is composed of three barrels, each equally divided into 64 modules. The ionizing particles that cross the tiles induce the production of light, whose intensity is proportional to the energy deposited by the fragment. The produced light propagates through the tiles towards the edges, where it is absorbed and carried to the photomultiplier tubes (PMTs), also known as electronic readout channels. Each module combines up to 45 PMTs. For each run, the reconstruction process starts with a data analysis that can comprise different levels of information granularity, down to the level of individual PMTs. Following this phase, the Data Quality Monitoring Framework (DQMF) system automatically generates quality indicators associated with the channels. Depending on the configuration registered in the DQMF, the channel status can be automatically defined as good, affected or bad. The status of each module is defined by the percentage of good, affected or bad channels. At this point, the analysis of modules allows the identification of problematic ones through the examination of plots that are automatically generated during the data reconstruction stage. Then, the performance of a module is analysed over a time period that encompasses different types of runs. In this last step, the list of problematic channels can be modified through the insertion or exclusion of PMTs, as in the case when a channel is substituted. Additionally, during the whole calorimeter operation, it is fundamental to identify the electronic channels that are active, dead (not working), noisy, or saturating in the signal digitisation process. The Monitoring and Calibration Web System (MCWS) was

  6. Report to users of ATLAS

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1997-03-01

    This report covers the following topics: (1) status of the ATLAS accelerator; (2) progress in R and D towards a proposal for a National ISOL Facility; (3) highlights of recent research at ATLAS; (4) the move of gammasphere from LBNL to ANL; (5) Accelerator Target Development laboratory; (6) Program Advisory Committee; (7) ATLAS User Group Executive Committee; and (8) ATLAS user handbook available on the World Wide Web. A brief summary is given for each topic.

  7. The Monitoring and Calibration Web Systems for the ATLAS Tile Calorimeter Data Quality Analysis

    CERN Document Server

    Sivolella, A; The ATLAS collaboration; Ferreira, F

    2012-01-01

    The Tile Calorimeter (TileCal), one of the ATLAS sub-detectors, has four partitions, each containing 64 modules, and each module has up to 48 PhotoMultipliers (PMTs), totalling more than 10,000 electronic channels. The Monitoring and Calibration Web System (MCWS) supports data quality analyses at the channel level. This application was developed to assess the detector status and verify its performance, presenting the list of known problematic channels from the official database that stores the detector conditions data (COOL). The bad-channels list guides the data quality validator during analyses in order to identify new problematic channels. Through the system, it is also possible to update the channel list directly in the COOL database. MCWS generates results, such as eta-phi plots and comparative tables with the percentage of masked channels, which reflect the TileCal status, and it is accessible to the whole ATLAS collaboration. Annually, there is an intervention on the LHC (Large Hadron Collider) when the detector equipments (P...

  8. ATLAS Muon Drift Tube Electronics

    CERN Document Server

    Arai, Y; Beretta, M; Boterenbrood, H; Brandenburg, G W; Ceradini, F; Chapman, J W; Dai, T; Ferretti, C; Fries, T; Gregory, J; Guimarães da Costa, J; Harder, S; Hazen, E; Huth, J; Jansweijer, P P M; Kirsch, L E; König, A C; Lanza, A; Mikenberg, G; Oliver, J; Posch, C; Richter, R; Riegler, W; Spiriti, E; Taylor, F E; Vermeulen, J; Wadsworth, B; Wijnen, T A M

    2008-01-01

    This paper describes the electronics used for the ATLAS monitored drift tube (MDT) chambers. These chambers are the main component of the precision tracking system in the ATLAS muon spectrometer. The MDT detector system consists of 1,150 chambers containing a total of 354,000 drift tubes. It is capable of measuring the sagitta of muon tracks to an accuracy of 60 microns, which corresponds to a momentum accuracy of about 10% at pT = 1 TeV. The design and performance of the MDT readout electronics as well as the electronics for controlling, monitoring and powering the detector will be discussed. These electronics have been extensively tested under simulated running conditions and have undergone radiation testing certifying them for more than 10 years of LHC operation. They are now installed on the ATLAS detector and are operating during cosmic ray commissioning runs.

  9. Data federation strategies for ATLAS using XRootD

    Science.gov (United States)

    Gardner, Robert; Campana, Simone; Duckeck, Guenter; Elmsheuser, Johannes; Hanushevsky, Andrew; Hönig, Friedrich G.; Iven, Jan; Legger, Federica; Vukotic, Ilija; Yang, Wei; Atlas Collaboration

    2014-06-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the wide area network and staging of remote data files to local disk. To support job-brokering decisions, a time-dependent cost-of-data-access matrix is constructed, taking into account network performance and key site performance factors. The system's response to production-scale physics analysis workloads, either from individual end-users or ATLAS analysis services, is discussed.
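
The construction of the cost-of-data-access matrix is described only qualitatively above; the following sketch shows one way such a matrix could combine network throughput with a site performance factor. The combination rule, weights and site names are illustrative assumptions.

```python
# Hedged sketch of a time-dependent cost-of-data-access matrix as described
# above.  The combination rule and the numbers are illustrative assumptions;
# the real system derives its inputs from network and site monitoring.
measured_mbps = {            # recent WAN throughput source -> destination
    ("MWT2", "AGLT2"): 800.0,
    ("MWT2", "CERN-PROD"): 150.0,
}
site_factor = {"AGLT2": 1.0, "CERN-PROD": 1.3}   # e.g. storage load penalty

def access_cost(source, destination):
    """Relative cost of reading data at `destination` from `source` storage."""
    throughput = measured_mbps.get((source, destination), 10.0)  # pessimistic default
    return site_factor.get(destination, 1.0) * (1000.0 / throughput)

matrix = {pair: round(access_cost(*pair), 2) for pair in measured_mbps}
print(matrix)   # lower cost -> prefer direct access, higher -> stage to local disk
```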

  10. Active sites environmental monitoring Program - Program Plan: Revision 2

    International Nuclear Information System (INIS)

    Morrissey, C.M.; Hicks, D.S.; Ashwood, T.L.; Cunningham, G.R.

    1994-05-01

    The Active Sites Environmental Monitoring Program (ASEMP), initiated in 1989, provides early detection and performance monitoring of active low-level-waste (LLW) and transuranic (TRU) waste facilities at Oak Ridge National Laboratory (ORNL). Several changes have recently occurred in regard to the sites that are currently used for waste storage and disposal. These changes require a second set of revisions to the ASEMP program plan. This document incorporates those revisions. This program plan presents the organization and procedures for monitoring the active sites. The program plan also provides internal reporting levels to guide the evaluation of monitoring results

  11. EnviroAtlas - Ecosystem Services Market-Based Programs Web Service, U.S., 2016, Forest Trends' Ecosystem Marketplace

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service contains layers depicting market-based programs and projects addressing ecosystem services protection in the United States. Layers...

  12. Operational Experience with the ATLAS Pixel Detector at the LHC

    CERN Document Server

    Keil, M; The ATLAS collaboration

    2011-01-01

    The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN, providing high-resolution measurements of charged particle tracks in the high radiation environment close to the collision region. This capability is vital for the identification and measurement of proper decay times of long-lived particles such as b-hadrons, and thus vital for the ATLAS physics program. The detector provides hermetic coverage with three cylindrical layers and three layers of forward and backward pixel detectors. It consists of approximately 80 million pixels that are individually read out via chips bump-bonded to 1744 n-in-n silicon substrates. In this talk, results from the successful operation of the Pixel Detector at the LHC will be presented, including monitoring, calibration procedures, timing optimization and detector performance. The detector performance is excellent: 97.5% of the pixels are operational, noise occupancy and hit efficiency exceed the design specification, an...

  13. Operational experience with the ATLAS Pixel Detector at the LHC

    CERN Document Server

    Hirschbuehl, D; The ATLAS collaboration

    2011-01-01

    The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN, providing high-resolution measurements of charged particle tracks in the high radiation environment close to the collision region. This capability is vital for the identification and measurement of proper decay times of long-lived particles such as b-hadrons, and thus vital for the ATLAS physics program. The detector provides hermetic coverage with three cylindrical layers and three layers of forward and backward pixel detectors. It consists of approximately 80 million pixels that are individually read out via chips bump-bonded to 1744 n-in-n silicon substrates. In this paper results from the successful operation of the Pixel Detector at the LHC will be presented, including monitoring, calibration procedures, timing optimization and detector performance. The detector performance is excellent: 96.7% of the pixels are operational, noise occupancy and hit efficiency exceed the design specification, an...

  14. Operational experience with the ATLAS Pixel Detector at the LHC

    CERN Document Server

    Lapoire, C; The ATLAS collaboration

    2011-01-01

    The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN, providing high-resolution measurements of charged particle tracks in the high radiation environment close to the collision region. This capability is vital for the identification and measurement of proper decay times of long-lived particles such as b-hadrons, and thus vital for the ATLAS physics program. The detector provides hermetic coverage with three cylindrical layers and three layers of forward and backward pixel detectors. It consists of approximately 80 million pixels that are individually read out via chips bump-bonded to 1744 n-in-n silicon substrates. In this talk, results from the successful operation of the Pixel Detector at the LHC will be presented, including monitoring, calibration procedures, timing optimization and detector performance. The detector performance is excellent: 97.5% of the pixels are operational, noise occupancy and hit efficiency exceed the design specification, an...

  15. Operational Experience with the ATLAS Pixel Detector at the LHC

    CERN Document Server

    Lapoire, C; The ATLAS collaboration

    2012-01-01

    The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN, providing high-resolution measurements of charged particle tracks in the high radiation environment close to the collision region. This capability is vital for the identification and measurement of proper decay times of long-lived particles such as B-hadrons, and thus vital for the ATLAS physics program. The detector provides hermetic coverage with three cylindrical layers and three layers of forward and backward pixel detectors. It consists of approximately 80 million pixels that are individually read out via chips bump-bonded to 1744 n-in-n silicon substrates. In this paper, results from the successful operation of the Pixel Detector at the LHC will be presented, including monitoring, calibration procedures and detector performance. The detector performance is excellent: 96.2% of the pixels are operational, noise occupancy and hit efficiency exceed the design specification.

  16. Operational experience with the ATLAS Pixel Detector at the LHC

    CERN Document Server

    Ince, T; The ATLAS collaboration

    2011-01-01

    The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN, providing high-resolution measurements of charged particle tracks in the high radiation environment close to the collision region. This capability is vital for the identification and measurement of proper decay times of long-lived particles such as b-hadrons, and thus vital for the ATLAS physics program. The detector provides hermetic coverage with three cylindrical layers and three layers of forward and backward pixel detectors. It consists of approximately 80 million pixels that are individually read out via chips bump-bonded to 1744 n-in-n silicon substrates. In this talk, results from the successful operation of the Pixel Detector at the LHC will be presented, including monitoring, calibration procedures, timing optimization and detector performance. The detector performance is excellent: 96.8% of the pixels are operational, noise occupancy and hit efficiency exceed the design specification, an...

  17. Operational experience with the ATLAS Pixel detector at the LHC

    CERN Document Server

    Deluca, C; The ATLAS collaboration

    2011-01-01

    The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN, providing high-resolution measurements of charged particle tracks in the high radiation environment close to the collision region. This capability is vital for the identification and measurement of proper decay times of long-lived particles such as b-hadrons, and thus vital for the ATLAS physics program. The detector provides hermetic coverage with three cylindrical layers and three layers of forward and backward pixel detectors. It consists of approximately 80 million pixels that are individually read out via chips bump-bonded to 1744 n-in-n silicon substrates. In this paper, results from the successful operation of the Pixel Detector at the LHC will be presented, including monitoring, calibration procedures, timing optimization and detector performance. The detector performance is excellent: 97.5% of the pixels are operational, noise occupancy and hit efficiency exceed the design specification, ...

  18. Operational Experience with the ATLAS Pixel Detector at the LHC

    CERN Document Server

    Lange, C; The ATLAS collaboration

    2011-01-01

    The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN, providing high-resolution measurements of charged particle tracks in the high radiation environment close to the collision region. This capability is vital for the identification and measurement of proper decay times of long-lived particles such as b-hadrons, and thus vital for the ATLAS physics program. The detector provides hermetic coverage with three cylindrical layers and three layers of forward and backward pixel detectors. It consists of approximately 80 million pixels that are individually read out via chips bump-bonded to 1744 n-in-n silicon substrates. In this talk, results from the successful operation of the Pixel Detector at the LHC will be presented, including monitoring, calibration procedures, timing optimization and detector performance. The detector performance is excellent: 97.5% of the pixels are operational, noise occupancy and hit efficiency exceed the design specification, a...

  19. Operational experience with the ATLAS Pixel detector at the LHC

    CERN Document Server

    Deluca, C; The ATLAS collaboration

    2011-01-01

    The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN, providing high-resolution measurements of charged particle tracks in the high radiation environment close to the collision region. This capability is vital for the identification and measurement of proper decay times of long-lived particles such as b-hadrons, and thus vital for the ATLAS physics program. The detector provides hermetic coverage with three cylindrical layers and three layers of forward and backward pixel detectors. It consists of approximately 80 million pixels that are individually read out via chips bump-bonded to 1744 n-in-n silicon substrates. In this talk, results from the successful operation of the Pixel Detector at the LHC will be presented, including monitoring, calibration procedures, timing optimization and detector performance. The detector performance is excellent: 97.5% of the pixels are operational, noise occupancy and hit efficiency exceed the design specification, an...

  20. ATLAS B-physics potential

    International Nuclear Information System (INIS)

    Smizanska, M.

    2001-01-01

    Studies since 1993 have demonstrated the ability of ATLAS to pursue a wide B physics program. This document presents the latest performance studies with special stress on lepton identification. B-decays containing several leptons in ATLAS statistically dominate the high-precision measurements. We present new results on physics simulations of CP violation measurements in the B_s^0 → J/ψφ decay and on a novel ATLAS programme on beauty production in central proton-proton collisions at the LHC.

  1. Integration of the monitoring and offline analysis systems of the ATLAS hadronic calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Maidantchik, Carmen; Balabram, Luiz Eduardo; Gomes, Andressa Sivollela; Ferreira, Fernando G.; Marroquim, Fernando [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil)

    2011-07-01

    Full text: During the ATLAS detector operation, collaborators perform numerous analyses related to calibration in order to acquire detailed information about the hadronic calorimeter (TileCal) equipment. Through these analyses, it is possible to detect faults that would affect the acquisition of data of physics interest. Some examples of defects are: saturation of readout channels, problems in the digitization of the acquired signal, and a poor signal-to-noise ratio (SNR). Since the commissioning period, members of the collaboration between CERN and UFRJ have developed Web systems to support the hard task of monitoring the TileCal equipment. The Tile Commissioning Web System (TCWS) integrates different applications, each one covering part of the commissioning process. The Web Interface for Shifters (WIS) displays the most recent calibration runs and assists in monitoring the operation of the modules. The TileComm Analysis (TCA) system gives access to histograms that represent the status of the modules and the functioning of the corresponding channels. The Timeline provides the history of the calibration rounds and the state of all modules in chronological order. The Data Quality Monitoring (DQM) application contains the status of the histograms, modules and channels. The E-log stores and displays all reports about calibrations. The Web Monitoring and Calibration System (MCWS) allows the visualization of the most recent channel status of each module. The DCS (Detector Control System) Web System monitors the operation of the module power supplies. After ATLAS operation started, the number of equipment calibrations increased significantly, which prompted the development of a system that would display all previous information in a centralized way. The Dashboard allows the collaborator to easily access the latest runs or to search for specific ones. After selecting a run, it is possible to check the status of each barrel module through a schematic figure, to view the 10 latest statuses of a certain module, and

  2. Integration of the monitoring and offline analysis systems of the ATLAS hadronic calorimeter

    International Nuclear Information System (INIS)

    Maidantchik, Carmen; Balabram, Luiz Eduardo; Gomes, Andressa Sivollela; Ferreira, Fernando G.; Marroquim, Fernando

    2011-01-01

    Full text: During the ATLAS detector operation, collaborators perform numerous analyses related to calibration in order to acquire detailed information about the hadronic calorimeter (TileCal) equipment. Through these analyses, it is possible to detect faults that would affect the acquisition of data of physics interest. Some examples of defects are: saturation of readout channels, problems in the digitization of the acquired signal, and a poor signal-to-noise ratio (SNR). Since the commissioning period, members of the collaboration between CERN and UFRJ have developed Web systems to support the hard task of monitoring the TileCal equipment. The Tile Commissioning Web System (TCWS) integrates different applications, each one covering part of the commissioning process. The Web Interface for Shifters (WIS) displays the most recent calibration runs and assists in monitoring the operation of the modules. The TileComm Analysis (TCA) system gives access to histograms that represent the status of the modules and the functioning of the corresponding channels. The Timeline provides the history of the calibration rounds and the state of all modules in chronological order. The Data Quality Monitoring (DQM) application contains the status of the histograms, modules and channels. The E-log stores and displays all reports about calibrations. The Web Monitoring and Calibration System (MCWS) allows the visualization of the most recent channel status of each module. The DCS (Detector Control System) Web System monitors the operation of the module power supplies. After ATLAS operation started, the number of equipment calibrations increased significantly, which prompted the development of a system that would display all previous information in a centralized way. The Dashboard allows the collaborator to easily access the latest runs or to search for specific ones. After selecting a run, it is possible to check the status of each barrel module through a schematic figure, to view the 10 latest statuses of a certain module, and

  3. ATLAS Muon Drift Tube Electronics

    Energy Technology Data Exchange (ETDEWEB)

    Arai, Y [KEK, High Energy Accelerator Research Organisation, Tsukuba (Japan); Ball, B; Chapman, J W; Dai, T; Ferretti, C; Gregory, J [University of Michigan, Department of Physics, Ann Arbor, MI (United States); Beretta, M [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Boterenbrood, H; Jansweijer, P P M [Nikhef National Institute for Subatomic Physics, Amsterdam (Netherlands); Brandenburg, G W; Fries, T; Costa, J Guimaraes da; Harder, S; Huth, J [Harvard University, Laboratory for Particle Physics and Cosmology, Cambridge, MA (United States); Ceradini, F [INFN Roma Tre and Universita Roma Tre, Dipartimento di Fisica, Roma (Italy); Hazen, E [Boston University, Physics Department, Boston, MA (United States); Kirsch, L E [Brandeis University, Department of Physics, Waltham, MA (United States); Koenig, A C [Radboud University Nijmegen/Nikhef, Dept. of Exp. High Energy Physics, Nijmegen (Netherlands); Lanza, A [INFN Pavia, Pavia (Italy); Mikenberg, G [Weizmann Institute of Science, Department of Particle Physics, Rehovot (Israel)], E-mail: brandenburg@physics.harvard.edu (and others)

    2008-09-15

    This paper describes the electronics used for the ATLAS monitored drift tube (MDT) chambers. These chambers are the main component of the precision tracking system in the ATLAS muon spectrometer. The MDT detector system consists of 1,150 chambers containing a total of 354,000 drift tubes. It is capable of measuring the sagitta of muon tracks to an accuracy of 60 μm, which corresponds to a momentum accuracy of about 10% at pT = 1 TeV. The design and performance of the MDT readout electronics as well as the electronics for controlling, monitoring and powering the detector will be discussed. These electronics have been extensively tested under simulated running conditions and have undergone radiation testing certifying them for more than 10 years of LHC operation. They are now installed on the ATLAS detector and are operating during cosmic ray commissioning runs.

  4. ATLAS Open Data project

    CERN Document Server

    The ATLAS collaboration

    2018-01-01

    The current ATLAS model of Open Access to recorded and simulated data offers the opportunity to access datasets with a focus on education, training and outreach. This mandate supports the creation of platforms, projects, software, and educational products used all over the planet. We describe the overall status of ATLAS Open Data (http://opendata.atlas.cern) activities, from core ATLAS activities and releases to individual and group efforts, as well as educational programs, and final web or software-based (and hard-copy) products that have been produced or are under development. The relatively large number of heterogeneous use cases currently documented is driving an upcoming release of more data and resources for the ATLAS Community and anyone interested in exploring the world of experimental particle physics and computer science through data analysis.

  5. Monitoring Activities Review action report for the Environmental Monitoring Program

    International Nuclear Information System (INIS)

    Wilhelmsen, R.N.; Wright, K.C.

    1990-12-01

    To improve program planning and to provide bases for technical improvement of the monitoring program, the EG&G Environmental Monitoring (EM) organization has regularly used the Monitoring Activities Review (MAR) process since 1982. Each MAR is conducted by a committee of individuals selected for their experience in the various types of monitoring performed by the EM organization. An MAR of the Environmental Monitoring Program was conducted in 1988. This action report identifies and discusses the recommendations of this MAR committee. This action report also identifies the actions already taken by the EM Unit in response to these recommendations, as well as the actions and schedules to be taken. 10 refs

  6. The ATLAS hadronic tau trigger

    International Nuclear Information System (INIS)

    Shamim, Mansoora

    2012-01-01

    The extensive tau physics program of the ATLAS experiment relies heavily on the trigger to select hadronic decays of the tau lepton. Such a trigger is implemented in ATLAS to efficiently collect signal events, while keeping the rate of multi-jet background within the allowed bandwidth. This contribution summarizes the performance of the ATLAS hadronic tau trigger system during the 2011 data-taking period and the improvements implemented for the 2012 data collection.

  7. Monitoring of computing resource utilization of the ATLAS experiment

    International Nuclear Information System (INIS)

    Rousseau, David; Vukotic, Ilija; Schaffer, RD; Dimitrov, Gancho; Aidel, Osman; Albrand, Solveig

    2012-01-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.

  8. Advanced Visualization System for Monitoring the ATLAS TDAQ Network in real-time

    CERN Document Server

    Batraneanu, S M; The ATLAS collaboration; Martin, B; Savu, D O; Stancu, S N; Leahu, L

    2012-01-01

    The trigger and data acquisition (TDAQ) system of the ATLAS experiment at CERN comprises approximately 2500 servers interconnected by three separate Ethernet networks, totaling 250 switches. Due to its real-time nature, there are additional requirements in comparison to conventional networks in terms of speed and performance. A comprehensive monitoring framework has been developed for expert use. However, non-experts may experience difficulties in using it and interpreting its data. Moreover, specific performance issues, such as single-component saturation or unbalanced workload, need to be spotted with ease, in real time, and understood in the context of the full system view. We addressed these issues by developing an innovative visualization system where the users benefit from the advantages of 3D graphics to visualize the large monitoring parameter space associated with our system. This has been done by developing a hierarchical model of the complete system onto which we overlaid geographical, logical and real...

  9. Conference Report: The First ATLAS.ti User Conference

    Directory of Open Access Journals (Sweden)

    Jeanine C. Evers

    2014-01-01

    Full text: This report shares our impressions and experiences, as longstanding ATLAS.ti users and trainers, of the First ATLAS.ti User Conference, held in Berlin in 2013. The origins, conceptual principles and development of the program are outlined, the conference themes discussed and experiences shared. Finally, the future of the program is discussed. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1401197

  10. Application of rule-based data mining techniques to real time ATLAS Grid job monitoring data

    CERN Document Server

    Ahrens, R; The ATLAS collaboration; Kalinin, S; Maettig, P; Sandhoff, M; dos Santos, T; Volkmer, F

    2012-01-01

    The Job Execution Monitor (JEM) is job-centric grid job monitoring software developed at the University of Wuppertal and integrated into the pilot-based “PanDA” job brokerage system, which handles physics analysis and Monte Carlo event production for the ATLAS experiment on the Worldwide LHC Computing Grid (WLCG). With JEM, job progress and grid worker node health can be supervised in real time by users, site admins and shift personnel. Imminent error conditions can be detected early and countermeasures can be initiated by the job's owner immediately. Grid site admins can access aggregated data of all monitored jobs to infer the site status and to detect job and grid worker node misbehaviour. Shifters can use the same aggregated data to quickly react to site error conditions and broken production tasks. In this work, the application of novel data-centric rule-based methods and data-mining techniques to the real-time monitoring data is discussed. The usage of such automatic inference techniques on monitorin...
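
The actual JEM rules are not listed in the record above; the toy example below shows the general shape of a data-centric rule applied to aggregated per-node job metrics. The metric names, the 2x factor and the minimum sample size are assumptions made for illustration.

```python
# Toy example of a data-centric rule applied to aggregated JEM-style metrics:
# flag worker nodes whose job failure rate is well above the site average.
# Metric names, the 2x factor and the minimum sample size are assumptions.
jobs_per_node = {
    "wn001": {"finished": 95, "failed": 5},
    "wn002": {"finished": 40, "failed": 35},
    "wn003": {"finished": 88, "failed": 7},
}

def failure_rate(counts):
    total = counts["finished"] + counts["failed"]
    return counts["failed"] / total if total else 0.0

site_rate = failure_rate({
    "finished": sum(c["finished"] for c in jobs_per_node.values()),
    "failed": sum(c["failed"] for c in jobs_per_node.values()),
})

for node, counts in jobs_per_node.items():
    rate = failure_rate(counts)
    if counts["failed"] + counts["finished"] >= 20 and rate > 2 * site_rate:
        print(f"ALERT: {node} failure rate {rate:.0%} vs site {site_rate:.0%}")
```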

  11. Calibration and Monitoring systems of the ATLAS Tile Hadron Calorimeter

    CERN Document Server

    BOUMEDIENE, D; The ATLAS collaboration

    2012-01-01

    TileCal is the hadronic calorimeter covering the most central region of the ATLAS experiment at the LHC. It is a sampling calorimeter with iron plates as absorber and plastic scintillating tiles as the active material. The scintillation light produced by the passage of charged particles is transmitted by wavelength-shifting fibers to about 10000 photomultiplier tubes (PMTs). Integrated into the calorimeter is a composite system that allows the signals to be monitored and/or equalized at the various stages of their formation. This system is based on signal generation from different sources: a radioactive source, a LASER, charge injection, and minimum bias events produced in proton-proton collisions. This contribution gives a brief description of the hardware of the different systems and presents the latest results on their performance, such as the determination of the conversion factors, linearity and stability.

  12. Glance traceability – Web system for equipment traceability and radiation monitoring for the ATLAS experiment

    CERN Document Server

    Ramos de Azevedo Evora, L H; Pommes, K; Galvão, K K; Maidantchik, C

    2010-01-01

    During the operation, maintenance, and dismantling periods of the ATLAS Experiment, the traceability of all detector equipment must be guaranteed for logistic and safety matters. The running of the Large Hadron Collider will expose the ATLAS detector to radiation. Therefore, CERN must follow specific regulations from both the French and Swiss authorities for equipment removal, transport, repair, and disposal. GLANCE Traceability, implemented in C++ and Java/Java3D, has been developed to fulfill the requirements. The system registers and associates each equipment part to either a functional position in the detector or a zone outside the underground area through a 3D graphical user interface. Radiation control of the equipment is performed using a radiation monitor connected to the system: the local background gets stored and the threshold is automatically calculated. The system classifies the equipment as non radioactive if its radiation dose does not exceed that limit value. History for both location traceabi...
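
The exact classification rule is regulatory and not given above; the sketch below only illustrates the described idea of storing the local background and automatically deriving a threshold, using an assumed background-plus-3-sigma margin.

```python
# Sketch of the background-corrected classification described above.  The
# threshold formula (background + 3 sigma margin) is an illustrative
# assumption; the actual limit values follow the applicable regulations.
import statistics

def classify(equipment_dose_uSv_h, background_samples_uSv_h):
    """Return 'non radioactive' if the measured dose rate is compatible
    with the stored local background, otherwise 'radioactive'."""
    mean_bkg = statistics.mean(background_samples_uSv_h)
    sigma = statistics.stdev(background_samples_uSv_h)
    threshold = mean_bkg + 3 * sigma      # automatically derived limit
    return "non radioactive" if equipment_dose_uSv_h <= threshold else "radioactive"

background = [0.10, 0.11, 0.09, 0.12, 0.10]   # stored local background readings
print(classify(0.11, background))   # -> non radioactive
print(classify(0.45, background))   # -> radioactive
```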

  13. The Monitoring and Calibration Web Systems for the ATLAS Tile Calorimeter Data Quality Analysis

    International Nuclear Information System (INIS)

    Sivolella, A; Maidantchik, C; Ferreira, F

    2012-01-01

    The Tile Calorimeter (TileCal) is one of the ATLAS sub-detectors. Its read-out is performed by about 10,000 PhotoMultiplier Tubes (PMTs), and the signal of each PMT is digitized by an electronic channel. The Monitoring and Calibration Web System (MCWS) supports the data quality analysis of these electronic channels. The application was developed to assess the detector status and verify its performance. It provides the user with the list of known problematic TileCal channels stored in the ATLAS conditions database (COOL DB). The bad-channel list guides the data quality validator in identifying new problematic channels and is used in data reconstruction, and the system allows the list to be updated directly in the COOL database. MCWS can generate summary results, such as eta-phi plots and comparative tables of the masked-channel percentage. Maintenance of the detector equipment is performed regularly during LHC (Large Hadron Collider) shutdowns. When a channel is repaired, its calibration constants stored in the COOL database have to be updated, and the MCWS system also manages the update of these calibration constant values in the COOL database. MCWS has been used by the Tile community since 2008, during the commissioning phase, and was upgraded to comply with ATLAS operation specifications. Among its future developments, an integration of MCWS with the TileCal control Web system (DCS) is foreseen in order to identify high-voltage problems automatically.
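
    The bookkeeping that MCWS performs on the bad-channel list can be sketched as follows (hypothetical data structures only; the real system reads from and writes to the ATLAS COOL conditions database through its own interfaces):

      # Hypothetical sketch of bad-channel bookkeeping; not the COOL API.

      problematic = {("LBA", 12, 3): "no HV", ("EBC", 40, 17): "dead channel"}

      def mask_channel(partition, module, channel, reason):
          """Add a channel to the known-problematic list."""
          problematic[(partition, module, channel)] = reason

      def masked_fraction(total_channels=10000):
          """Summary figure akin to the 'masked channels percentage' tables
          (about 10,000 channels in total, as quoted in the abstract)."""
          return 100.0 * len(problematic) / total_channels

      mask_channel("LBC", 7, 21, "noisy")
      print(sorted(problematic))
      print(f"masked: {masked_fraction():.2f}%")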

  14. ATLAS B-physics potential

    CERN Document Server

    Smizanska, M

    2001-01-01

    Studies since 1993 have demonstrated the ability of ATLAS to pursue a wide B-physics program. This document presents the latest performance studies, with special stress on lepton identification. B-decays containing several leptons statistically dominate the high-precision measurements in ATLAS. We present new results on physics simulations of CP-violation measurements in the B_s^0 → J/ψ φ decay and on a novel ATLAS programme on beauty production in central proton-proton collisions at the LHC. (7 refs).

  15. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    International Nuclear Information System (INIS)

    Ito, H; Potekhin, M; Wenaus, T

    2012-01-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R&D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as the data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and a realistic rate of queries. In conclusion, we present our experience with operating a Cassandra cluster over an extended period of time and with a data load adequate for the planned application.
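
    The kind of data design and indexing mentioned above can be sketched with a time-bucketed partition key (the table layout and bucket size are assumptions for illustration, not the actual PanDA monitoring schema):

      # Sketch of time-bucketed keys for job monitoring data, a common
      # wide-row noSQL layout; the schema and bucket size are illustrative.
      #
      # Example CQL table such a design might use:
      #   CREATE TABLE job_events (
      #       bucket text, ts timestamp, pandaid bigint, status text,
      #       PRIMARY KEY (bucket, ts, pandaid));

      from datetime import datetime, timezone

      def bucket_for(ts):
          """Partition rows by hour so time-range queries touch few partitions."""
          return ts.strftime("%Y%m%d%H")

      def insert_stmt(ts, pandaid, status):
          return ("INSERT INTO job_events (bucket, ts, pandaid, status) "
                  "VALUES (%s, %s, %s, %s)",
                  (bucket_for(ts), ts, pandaid, status))

      print(insert_stmt(datetime(2011, 6, 1, 12, 30, tzinfo=timezone.utc),
                        1234567890, "finished"))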

  16. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    Science.gov (United States)

    Ito, H.; Potekhin, M.; Wenaus, T.

    2012-12-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R&D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic rate of queries. In conclusion, we present our experience with operating a Cassandra cluster over an extended period of time and with data load adequate for planned application.

  17. Community Radiation Monitoring Program

    International Nuclear Information System (INIS)

    Cooper, E.N.

    1993-05-01

    The Community Radiation Monitoring Program (CRMP) is a cooperative effort between the US Department of Energy (DOE); the US Environmental Protection Agency (EPA); the Desert Research Institute (DRI), a division of the University and Community College System of Nevada; and the Nuclear Engineering Laboratory of the University of Utah (UNEL). The twelfth year of the program began in the fall of 1991, and the work continues as an integral part of the DOE-sponsored long-term offsite radiological monitoring effort that has been conducted by EPA and its predecessors since the inception of nuclear testing at the Nevada Test Site (NTS). The program began as an outgrowth of activities that occurred during the Three Mile Island incident in 1979. The local interest and public participation that took place there were thought to be transferable to the situation at the NTS, so, with adaptations, that methodology was implemented for this program. The CRMP began by enhancing and centralizing environmental monitoring and sampling equipment at 15 communities in the existing EPA monitoring network, and has since expanded to 19 locations in Nevada, Utah and California. The primary objectives of this program are still to increase the understanding by the people who live in the area surrounding the NTS of the activities for which DOE is responsible, to enhance the performance of radiological sampling and monitoring, and to inform all concerned of the results of these efforts. One of the primary methods used to improve the communication link with people in the potentially impacted area has been the hiring and training of local citizens as station managers and program representatives in those selected communities in the offsite area. These managers, active science teachers wherever possible, have succeeded, through their training, experience, community standing, and effort, in becoming a very visible, able and valuable asset in this link

  18. Pantex Plant meteorological monitoring program

    International Nuclear Information System (INIS)

    Snyder, S.F.

    1993-07-01

    The current meteorological monitoring program of the US Department of Energy's Pantex Plant, Amarillo, Texas, is described in detail. Instrumentation, meteorological data collection and management, and program management are reviewed. In addition, primary contacts are noted for instrumentation, calibration, data processing, and alternative databases. The quality assurance steps implemented during each portion of the meteorological monitoring program are also indicated

  19. Data federation strategies for ATLAS using XRootD

    International Nuclear Information System (INIS)

    Gardner, Robert; Vukotic, Ilija; Campana, Simone; Iven, Jan; Duckeck, Guenter; Elmsheuser, Johannes; Hönig, Friedrich G; Legger, Federica; Hanushevsky, Andrew; Yang, Wei

    2014-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances comes integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks, and a dedicated set of tools provides high-granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the wide area network and staging of remote data files to local disk. To support job-brokering decisions, a time-dependent cost-of-data-access matrix is constructed, taking into account network performance and key site performance factors. The system's response to production-scale physics analysis workloads, either from individual end-users or ATLAS analysis services, is discussed.
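
    Such a cost-of-data-access matrix could be assembled along the following lines (a sketch under assumed inputs; the metrics, weights and site names are invented and are not those used by the ATLAS brokering system):

      # Illustrative cost-of-access matrix: combine network performance with a
      # site load factor into a relative cost per (source, destination) pair.
      # All numbers and names are invented for the sketch.

      throughput_mbps = {("MWT2", "AGLT2"): 800, ("MWT2", "SLAC"): 250,
                         ("AGLT2", "SLAC"): 300}
      site_load_factor = {"MWT2": 1.0, "AGLT2": 1.2, "SLAC": 1.5}

      def access_cost(src, dst):
          """Lower is better: penalise slow links and busy destination sites."""
          rate = throughput_mbps.get((src, dst)) or throughput_mbps.get((dst, src), 50)
          return site_load_factor[dst] * 1000.0 / rate

      matrix = {(s, d): round(access_cost(s, d), 2)
                for s in site_load_factor for d in site_load_factor if s != d}
      print(matrix)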

  20. The monitoring and data quality assessment of the ATLAS liquid argon calorimeter

    International Nuclear Information System (INIS)

    Simard, Olivier

    2015-01-01

    The ATLAS experiment is designed to study the proton-proton (pp) collisions produced at the Large Hadron Collider (LHC) at CERN. Liquid argon (LAr) sampling calorimeters are used for all electromagnetic calorimetry in the pseudo-rapidity region |η| < 3.2, as well as for hadronic calorimetry in the range 1.5 < |η| < 4.9. The electromagnetic calorimeters use lead as passive material and are characterized by an accordion geometry that allows a fast and uniform response without azimuthal gaps. Copper and tungsten were chosen as passive material for the hadronic calorimetry; while a classic parallel-plate geometry was adopted at large polar angles, an innovative design based on cylindrical electrodes with thin liquid argon gaps is employed at low angles, where the particle flux is higher. All detectors are housed in three cryostats maintained at about 88.5 K. The 182,468 cells are read out via front-end boards housed in on-detector crates that also contain monitoring, calibration, trigger and timing boards. In the first three years of LHC operation, approximately 27 fb⁻¹ of pp collision data were collected at centre-of-mass energies of 7-8 TeV. Throughout this period, the calorimeter consistently operated with performance very close to specifications, with high data-taking efficiency. This is in large part due to a sophisticated data monitoring procedure designed to quickly identify issues that would degrade the detector performance, to ensure that only the best quality data are used for physics analysis. After a description of the detector design, main characteristics and operation principles, this paper details the data quality assessment procedures developed during the 2011 and 2012 LHC data-taking periods, when more than 98% of the luminosity recorded by ATLAS had high quality LAr calorimeter data suitable for physics analysis

  1. Report to users of ATLAS, January 1998

    International Nuclear Information System (INIS)

    Ahmad, I.; Hofman, D.

    1998-01-01

    This report is aimed at informing users about the operating schedule, user policies, and recent changes in research capabilities. It covers the following subjects: (1) status of the Argonne Tandem-Linac Accelerator System (ATLAS) accelerator; (2) the move of Gammasphere from LBNL to ANL; (3) commissioning of the CPT mass spectrometer at ATLAS; (4) highlights of recent research at ATLAS; (5) Program Advisory Committee; and (6) ATLAS User Group Executive Committee

  2. Community Radiation Monitoring Program

    International Nuclear Information System (INIS)

    Lucas, R.P. Jr.; Cooper, E.N.; McArthur, R.D.

    1990-05-01

    The Community Radiation Monitoring Program began its ninth year in the summer of 1989, continuing as an essential portion of the Environmental Protection Agency's long-standing off-site monitoring effort. It is a cooperative venture between the Department of Energy (DOE), the Environmental Protection Agency (EPA), the University of Utah (U of U), and the Desert Research Institute (DRI) of the University of Nevada System. The objectives of the program include enhancing and augmenting the collection of environmental radiation data at selected sites around the Nevada Test Site (NTS), increasing public awareness of that effort, and involving, in as many ways as possible, the residents of the off-site area in these and other areas related to testing nuclear weapons. This understanding and improved communication is fostered by hiring residents of the communities where the monitoring stations are located as program representatives, presenting public education forums in those and other communities, disseminating information on radiation monitoring and related subjects, and developing and maintaining contacts with local citizens and elected officials in the off-site areas. 8 refs., 4 figs., 4 tabs

  3. ATLAS DAQ/HLT rack DCS

    International Nuclear Information System (INIS)

    Ermoline, Yuri; Burckhart, Helfried; Francis, David; Wickens, Frederick J.

    2007-01-01

    The ATLAS Detector Control System (DCS) group provides a set of standard tools, used by subsystems to implement their local control systems. The ATLAS Data Acquisition and High Level Trigger (DAQ/HLT) rack DCS provides monitoring of the environmental parameters (air temperatures, humidity, etc.). The DAQ/HLT racks are located in the underground counting room (20 racks) and in the surface building (100 racks). The rack DCS is based on standard ATLAS tools and is integrated into the overall operation of the experiment. The implementation is based on a commercial control package and additional components developed by the CERN Joint Controls Project (JCOP) Framework. The prototype implementation and measurements are presented

  4. ATLAS note ATL-COM-PHYS-2009

    International Nuclear Information System (INIS)

    Chekanov, S.; Boomsma, J.

    2009-01-01

    The program InvMass has been developed to perform a general model-independent search for new particles using the ATLAS detector at the Large Hadron Collider (LHC), a proton-proton collider at CERN. The search is performed by examining statistically significant variations from the Standard Model predictions in exclusive event classes, classified according to the number of identified objects. InvMass finds all relevant particle groups identified with the ATLAS detector and analyzes their production rates, invariant masses and total transverse momenta. Its generic code can easily be adapted to any particle types identified with the ATLAS detector. Several benchmark tests are presented.
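
    A toy version of the exclusive-class bookkeeping and invariant-mass calculation described above might look as follows (a sketch only; the event content is invented, and InvMass itself is a separate ATLAS analysis program):

      # Toy sketch: classify an event by its identified objects and compute the
      # invariant mass of an object group. The event content is invented.

      from math import sqrt
      from collections import defaultdict

      def invariant_mass(objects):
          """Invariant mass of a set of four-vectors (E, px, py, pz) in GeV."""
          E  = sum(o[0] for o in objects)
          px = sum(o[1] for o in objects)
          py = sum(o[2] for o in objects)
          pz = sum(o[3] for o in objects)
          return sqrt(max(E*E - px*px - py*py - pz*pz, 0.0))

      def event_class(event):
          """Exclusive class label, e.g. '2e1j' for two electrons and one jet."""
          counts = defaultdict(int)
          for obj_type, _ in event:
              counts[obj_type] += 1
          return "".join(f"{n}{t}" for t, n in sorted(counts.items()))

      event = [("e", (50.0, 10.0, 20.0, 40.0)), ("e", (60.0, -15.0, 5.0, 55.0)),
               ("j", (120.0, 80.0, -30.0, 70.0))]
      print(event_class(event), invariant_mass([v for _, v in event]))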

  5. EnviroAtlas - Big Game Hunting Recreation Demand by 12-Digit HUC in the Conterminous United States

    Science.gov (United States)

    This EnviroAtlas dataset includes the total number of recreational days per year demanded by people ages 18 and over for big game hunting by location in the contiguous United States. Big game includes deer, elk, bear, and wild turkey. These values are based on 2010 population distribution, 2011 U.S. Fish and Wildlife Service (FWS) Fish, Hunting, and Wildlife-Associated Recreation (FHWAR) survey data, and 2011 U.S. Department of Agriculture (USDA) Forest Service National Visitor Use Monitoring program data, and have been summarized by 12-digit hydrologic unit code (HUC). This dataset was produced by the US EPA to support research and online mapping activities related to the EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).

  6. System administration of ATLAS TDAQ computing environment

    Science.gov (United States)

    Adeel-Ur-Rehman, A.; Bujor, F.; Benes, J.; Caramarcu, C.; Dobson, M.; Dumitrescu, A.; Dumitru, I.; Leahu, M.; Valsan, L.; Oreshkin, A.; Popov, D.; Unel, G.; Zaytsev, A.

    2010-04-01

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with administration of the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are met by a two-level NFS-based solution. The hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an authentication and role management system based on LDAP. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of the centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment including new multi-core blade servers, developing GUI tools for user authentication and roles management, testing and validating 64-bit OS, and upgrading the existing TDAQ hardware components, authentication servers and the gateways.

  7. Development of Beam Conditions Monitor for the ATLAS experiment

    CERN Document Server

    Dolenc Kittelmann, Irena; Mikuž, M

    2008-01-01

    If there is a failure in an element of the accelerator, the resulting beam losses could cause damage to the inner tracking devices of the experiments. This thesis presents the work performed during the development phase of a protection system for the ATLAS experiment at the LHC. The Beam Conditions Monitor (BCM) system is a stand-alone system designed to detect early signs of beam instabilities and trigger a beam abort in case of beam failures. It consists of two detector stations positioned at z = ±1.84 m from the interaction point. Each station comprises four BCM detector modules installed symmetrically around the beam pipe with sensors located at r = 55 mm. This structure will allow distinguishing between anomalous events (beam gas and beam halo interactions, beam instabilities) and normal events due to proton-proton interactions by measuring the time-of-flight as well as the signal pulse amplitude from detector modules on the timescale of nanoseconds. Additionally, the BCM system aims to provide a coarse instan...

  8. Preparation of Northern Mid-Continent Petroleum Atlas

    Energy Technology Data Exchange (ETDEWEB)

    Lee C. Gerhard; Timothy R. Carr; W. Lynn Watney

    1998-05-01

    As proposed, the third year program will continue and expand upon the Kansas elements of the original program, and provide improved on-line access to the prototype atlas. The third year of the program will result in a digital atlas sufficient to provide a permanent improvement in data access to Kansas operators. The ultimate goal of providing an interactive history-matching interface with a regional database will be demonstrated as the program covers more geographic territory and the database expands. The atlas will expand to include significant reservoirs representing the major plays in Kansas, and North Dakota. Primary products of the third year prototype atlas will be on-line accessible digital databases and technical publications covering two additional petroleum plays in Kansas and one in North Dakota. Regional databases will be supplemented with geological field studies of selected fields in each play. Digital imagery, digital mapping, relational data queries, and geographical information systems will be integral to the field studies and regional data sets. Data sets will have relational links to provide opportunity for history-matching, feasibility, and risk analysis tests on contemplated exploration and development projects. The flexible "web-like" design of the atlas provides ready access to data, and technology at a variety of scales from regional, to field, to lease, and finally to the individual well bore. The digital structure of the atlas permits the operator to access comprehensive reservoir data and customize the interpretative products (e.g., maps and cross-sections) to their needs. The atlas will be accessible in digital form on-line using a World-Wide-Web browser as the graphical user interface. Regional data sets and field studies will be freestanding entities that will be made available on-line through the Internet to users as they are completed. Technology transfer activities will be ongoing from the earliest part of this project, providing

  9. Post decommissioning monitoring of uranium mines; a watershed monitoring program based on biological response

    International Nuclear Information System (INIS)

    Russel, C.; Coggan, A.; Ludgate, I.

    2006-01-01

    Rio Algom Limited and Denison Mines owned and operated uranium mines in the Elliot Lake area. The mines operated from the late 1950s to the mid-1960s and again from the early 1970s to the 1990s, when the mines ceased operations. There are eleven decommissioned mines in the Serpent River watershed. At the time of decommissioning, each mine had its own monitoring program, which had evolved over the operating life of the mine and did not necessarily reflect the objectives associated with the monitoring of decommissioned sites. In order to assess the effectiveness of the decommissioning plans and to monitor the cumulative effects within the watershed, a single watershed monitoring program was developed in 1999: the Serpent River Watershed Monitoring Program, which focused on water and sediment quality within the watershed and the response of the biological community over time. In order to address other 'source area' monitoring, three complementary objective-focused programs were developed: 1) the In-Basin Monitoring Program, 2) the Source Area Monitoring Program and 3) the TMA Operational Monitoring Program. Through the development of this program framework and of objective-focused monitoring programs, more meaningful data have been provided while achieving a significant reduction in the cost of monitoring. These programs allow for a reduction in scope over time in response to improvements in the watershed. This talk will describe the development of these programs, their implementation and their effectiveness. (author)

  10. Soft real-time alarm messages for ATLAS TDAQ

    CERN Document Server

    Darlea, G; Martin, B; Lehmann Miotto, G

    2010-01-01

    The ATLAS TDAQ network consists of three separate Ethernet-based networks (Data, Control and Management) with over 2000 end-nodes. The TDAQ system has to be aware of the meaningful network failures and events in order for it to take effective recovery actions. The first stage of the process is implemented with Spectrum, a commercial network management tool. Spectrum detects and registers all network events, then publishes the information via a CORBA programming interface. A gateway program (called NSG, the Network Service Gateway) connects to Spectrum through CORBA and exposes to its clients a Java RMI interface. This interface implements a callback mechanism that allows the clients to subscribe for monitoring "interesting" parts of the network. The last stage of the TDAQ network monitoring tool is implemented in a module named DNC (DAQ to Network Connection), which filters the events that are to be reported to the TDAQ system: it subscribes to the gateway only for the machines that are currently active in th...
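
    The filtering step performed by the DNC can be sketched as follows (a minimal sketch in Python rather than the Java of the real gateway client; host names and event fields are invented):

      # Minimal sketch of DNC-style filtering: forward only those network alarms
      # that concern machines currently active in the TDAQ partition.

      active_nodes = {"pc-tdq-dc-0123", "pc-tdq-sfo-0007"}

      def filter_alarms(alarms):
          """Keep alarms for active nodes only, dropping the rest."""
          return [a for a in alarms if a["host"] in active_nodes]

      alarms = [
          {"host": "pc-tdq-dc-0123", "event": "link down"},
          {"host": "pc-tdq-xpu-0999", "event": "link down"},   # inactive, dropped
      ]
      print(filter_alarms(alarms))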

  11. Streamlined calibrations of the ATLAS precision muon chambers for initial LHC running

    Energy Technology Data Exchange (ETDEWEB)

    Amram, N. [Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University, 69978 Tel Aviv (Israel); Ball, R. [Department of Physics, The University of Michigan, Ann Arbor, MI 48109-1120 (United States); Benhammou, Y.; Ben Moshe, M. [Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University, 69978 Tel Aviv (Israel); Dai, T.; Diehl, E.B. [Department of Physics, The University of Michigan, Ann Arbor, MI 48109-1120 (United States); Dubbert, J. [Max-Planck-Institut fuer Physik, Werner-Heisenberg-Institut, Muenchen (Germany); Etzion, E., E-mail: erez@cern.ch [Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University, 69978 Tel Aviv (Israel); Ferretti, C.; Gregory, J. [Department of Physics, The University of Michigan, Ann Arbor, MI 48109-1120 (United States); Haider, S. [CERN, CH-1211 Geneva 23 (Switzerland); Hindes, J.; Levin, D.S.; Manilow, E.; Thun, R.; Wilson, A.; Weaverdyck, C.; Wu, Y.; Yang, H.; Zhou, B. [Department of Physics, The University of Michigan, Ann Arbor, MI 48109-1120 (United States); and others

    2012-04-11

    The ATLAS Muon Spectrometer is designed to measure the momentum of muons with a resolution of dp/p=3% at 100 GeV and 10% at 1 TeV. For this task, the spectrometer employs 355,000 Monitored Drift Tubes (MDTs) arrayed in 1200 chambers. Calibration (RT) functions convert drift time measurements into tube-centered impact parameters for track segment reconstruction. RT functions depend on MDT environmental parameters and so must be appropriately calibrated for local chamber conditions. We report on the creation and application of a gas monitor system based calibration program for muon track reconstruction in the LHC startup phase.
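
    Applying an RT calibration function amounts to mapping a measured drift time onto a calibrated drift radius, which can be sketched as a simple interpolation (the sample points below are invented and are not a real MDT calibration):

      # Sketch of applying an RT (radius-time) calibration: interpolate a measured
      # drift time onto a calibrated radius curve. Sample points are invented.

      from bisect import bisect_left

      # (drift time [ns], drift radius [mm]) sample points of one RT function
      RT_POINTS = [(0, 0.0), (100, 3.1), (300, 7.8), (500, 11.2), (700, 14.6)]

      def drift_radius(t_ns):
          """Piecewise-linear interpolation of the RT function."""
          times = [t for t, _ in RT_POINTS]
          i = min(max(bisect_left(times, t_ns), 1), len(RT_POINTS) - 1)
          (t0, r0), (t1, r1) = RT_POINTS[i - 1], RT_POINTS[i]
          return r0 + (r1 - r0) * (t_ns - t0) / (t1 - t0)

      print(round(drift_radius(420), 2))   # radius in mm for a 420 ns drift time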

  12. Automating usability of ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Tupputi, S A; Girolamo, A Di; Kouba, T; Schovancová, J

    2014-01-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective, a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage-area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interventions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
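
    The inference over the history of test outcomes can be sketched as follows (the window length and thresholds are assumptions for illustration, not SAAB's actual algorithm):

      # Sketch of history-based blacklisting: look at the most recent functional
      # test outcomes for a storage area and decide on an action. The window
      # size and thresholds are invented, not the actual SAAB criteria.

      def decide(history, window=10, blacklist_above=0.8, whitelist_below=0.1):
          """history: list of booleans, True = test passed, newest last."""
          recent = history[-window:]
          failure_fraction = recent.count(False) / len(recent)
          if failure_fraction >= blacklist_above:
              return "blacklist"
          if failure_fraction <= whitelist_below:
              return "whitelist"
          return "no change"

      print(decide([True] * 8 + [False] * 10))   # blacklist
      print(decide([False] * 2 + [True] * 12))   # whitelist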

  13. ATLAS operations in the GridKa T1/T2 Cloud

    International Nuclear Information System (INIS)

    Duckeck, G; Serfon, C; Walker, R; Harenberg, T; Kalinin, S; Schultes, J; Kawamura, G; Leffhalm, K; Meyer, J; Nderitu, S; Olszewski, A; Petzold, A; Sundermann, J E

    2011-01-01

    The ATLAS GridKa cloud consists of the GridKa Tier1 centre and 12 Tier2 sites from five countries associated to it. Over the last years a well-defined and tested operation model has evolved. Several core cloud services need to be operated and closely monitored: distributed data management, involving data replication, deletion and consistency checks; support for ATLAS production activities, which includes Monte Carlo simulation, reprocessing and pilot factory operation; continuous checks of data availability and performance for user analysis; and software installation and database setup. Of crucial importance is good communication between sites, the operations team and ATLAS, as well as efficient cloud-level monitoring tools. The paper gives an overview of the operations model and ATLAS services within the cloud.

  14. An automated meta-monitoring mobile application and front-end interface for the ATLAS computing model

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, Gen; Quadt, Arnulf [II. Physikalisches Institut, Georg-August-Universitaet Goettingen (Germany)

    2016-07-01

    Efficient administration of computing centres requires advanced tools for the monitoring and front-end interface of the infrastructure. Large-scale distributed systems serving as a global grid infrastructure, like the Worldwide LHC Computing Grid (WLCG) and ATLAS computing, offer many existing web pages and information sources indicating the status of the services, systems and user jobs at grid sites. A meta-monitoring mobile application which automatically collects this information can give every administrator a sophisticated and flexible interface to the infrastructure. We describe such a solution: the MadFace mobile application developed at Goettingen. It is a HappyFace-compatible mobile application with a user-friendly interface. It also makes it feasible to automatically investigate status and problems from different sources, and provides access to administration roles for non-experts.

  15. Monitoring the US ATLAS Network Infrastructure with perfSONAR-PS

    CERN Document Server

    McKee, S; The ATLAS collaboration; Laurens, P; Severini, H; Wlodek, T; Wolff, S; Zurawski, J

    2012-01-01

    We will present our motivations for deploying and using the perfSONAR-PS Performance Toolkit at ATLAS sites in the United States and describe our experience in using it. This software creates a dedicated monitoring server, capable of collecting and performing a wide range of passive and active network measurements. Each independent instance is managed locally, but able to federate on a global scale; enabling a full view of the network infrastructure that spans domain boundaries. This information, available through web service interfaces, can easily be retrieved to create customized applications. USATLAS has developed a centralized “dashboard” offering network administrators, users, and decision makers the ability to see the performance of the network at a glance. The dashboard framework includes the ability to notify users (alarm) when problems are found, thus allowing rapid response to potential problems and making perfSONAR-PS crucial to the operation of our distributed computing infrastructure.
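
    The alarm logic of such a dashboard can be sketched as a simple threshold check over the collected measurements (the metrics, thresholds and site names below are invented for illustration):

      # Sketch of dashboard-style alarming on perfSONAR-like measurements.
      # Thresholds, metric names and site names are invented.

      THRESHOLDS = {"packet_loss": 0.01, "latency_ms": 80.0}

      def alarms(measurement):
          """Return alarm strings for every metric exceeding its threshold."""
          return [f"{measurement['link']}: {metric} = {value}"
                  for metric, value in measurement.items()
                  if metric in THRESHOLDS and value > THRESHOLDS[metric]]

      m = {"link": "AGLT2->MWT2", "packet_loss": 0.05, "latency_ms": 35.0}
      print(alarms(m))   # ['AGLT2->MWT2: packet_loss = 0.05']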

  16. Ecological Monitoring and Compliance Program 2007 Report

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Dennis; Anderson, David; Derek, Hall; Greger, Paul; Ostler, W. Kent

    2008-03-01

    In accordance with U.S. Department of Energy (DOE) Order 450.1, 'Environmental Protection Program', the Office of the Assistant Manager for Environmental Management of the DOE, National Nuclear Security Administration Nevada Site Office (NNSA/NSO) requires ecological monitoring and biological compliance support for activities and programs conducted at the Nevada Test Site (NTS). National Security Technologies, LLC (NSTec), Ecological Services has implemented the Ecological Monitoring and Compliance (EMAC) Program to provide this support. EMAC is designed to ensure compliance with applicable laws and regulations, delineate and define NTS ecosystems, and provide ecological information that can be used to predict and evaluate the potential impacts of proposed projects and programs on those ecosystems. This report summarizes the EMAC activities conducted by NSTec during calendar year 2007. Monitoring tasks during 2007 included eight program areas: (a) biological surveys, (b) desert tortoise compliance, (c) ecosystem mapping and data management, (d) sensitive plant monitoring, (e) sensitive and protected/regulated animal monitoring, (f) habitat monitoring, (g) habitat restoration monitoring, and (h) biological monitoring at the Nonproliferation Test and Evaluation Complex (NPTEC). The following sections of this report describe work performed under these eight areas.

  17. ATLAS DataFlow Infrastructure recent results from ATLAS cosmic and first-beam data-taking

    CERN Document Server

    Vandelli, W

    2010-01-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to the mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to efficiently transport event data with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. In addition, routing and streaming capabilities as well as monitoring and data accounting functionalities are fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented testbed for the evaluation of the performance of the ATLAS DataFlow, in terms of functionality, robustness and stability. Moreover, operating the system far from its design specifications helped in exercising its fle...

  18. Steps in formulating an environmental monitoring program

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This section describes the process of establishing a complete equipment environmental monitoring program; the step-by-step process is also illustrated in Table 3 of the Summary. The following decisions must be made in defining the program: an initial characterization of the plant environment, integration with existing programs to realize the maximum benefits, identification of the specific monitoring locations, choice of monitoring techniques, frequency of recording data, monitoring duration, quality assurance requirements and, finally, recordkeeping requirements

  19. Monitoring Completed Navigation Projects Program

    National Research Council Canada - National Science Library

    Bottin, Jr., Robert R

    2001-01-01

    ... (MCNP) Program. The program was formerly known as the Monitoring Completed Coastal Projects Program, but was modified in the late 1990s to include all navigation projects, inland as well as coastal...

  20. ATLAS production system

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Golubkov, Dmitry; Maeno, Tadashi; Mashinistov, Ruslan; Wenaus, Torre; Padolski, Siarhei

    2016-01-01

    The second generation of the ATLAS production system, called ProdSys2, is a distributed workload manager used by thousands of physicists to analyze data remotely, with a volume of processed data beyond the exabyte scale, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as GRID, clouds, supercomputers and volunteer computers. Besides job definition, the Production System also includes a flexible web user interface, which implements a user-friendly environment for the main ATLAS workflows, e.g. a simple way of combining different data flows, and real-time monitoring, optimised for use with the huge amount of information to be presented. We present an overview of the ATLAS Production System major components: job and task definition, workflow manager web user i...
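
    Dynamic job definition of the kind described above can be illustrated with a small splitting sketch (the criteria and limits are invented for the illustration, not ProdSys2's actual policy):

      # Toy sketch of dynamic job definition: split a task's input files into
      # jobs respecting per-job limits on input size and file count. The limits
      # and file names are invented.

      def define_jobs(files, max_input_gb=10.0, max_files=20):
          jobs, current, size = [], [], 0.0
          for name, size_gb in files:
              if current and (size + size_gb > max_input_gb or len(current) >= max_files):
                  jobs.append(current)
                  current, size = [], 0.0
              current.append(name)
              size += size_gb
          if current:
              jobs.append(current)
          return jobs

      files = [(f"AOD.{i:06d}.root", 2.5) for i in range(9)]
      print(define_jobs(files))   # jobs of at most four files under the 10 GB limit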

  1. ATLAS rewards industry

    CERN Multimedia

    2006-01-01

    Showing excellence in mechanics, electronics and cryogenics, three industries are honoured for their contributions to the ATLAS experiment. Representatives of the three award-winning companies after the ceremony. For contributing vital pieces to the ATLAS puzzle, three industries were recognized on Friday 5 May during a supplier awards ceremony. After a welcome and overview of the ATLAS experiment by spokesperson Peter Jenni, CERN Secretary-General Maximilian Metzger stressed the importance of industry to CERN's scientific goals. Close interaction with CERN was a key factor in the selection of each rewarded company, in addition to the high-quality products they delivered to the experiment. Alu Menziken Industrie AG, of Switzerland, was honoured for the production of 380,000 aluminium tubes for the Monitored Drift Tube Chambers (MDT). As Giora Mikenberg, the Muon System Project Leader stressed, the aluminium tubes were delivered on time with an extraordinary quality and precision. Between October 2000 and Jan...

  2. ATLAS solenoid operates underground

    CERN Multimedia

    2006-01-01

    A new phase for the ATLAS collaboration started with the first operation of a completed sub-system: the Central Solenoid. Teams monitoring the cooling and powering of the ATLAS solenoid in the control room. The solenoid was cooled down to 4.5 K from 17 to 23 May. The first current was established the same evening that the solenoid became cold and superconductive. 'This makes the ATLAS Central Solenoid the very first cold and superconducting magnet to be operated in the LHC underground areas!', said Takahiko Kondo, professor at KEK. Though the current was limited to 1 kA, the cool-down and powering of the solenoid was a major milestone for all of the control, cryogenic, power and vacuum systems, a milestone reached by the hard work and many long evenings invested by various teams from ATLAS, all of CERN's departments and several large and small companies. Since the Central Solenoid and the barrel liquid argon (LAr) calorimeter share the same cryostat vacuum vessel, this achievement was only possible in perfe...

  3. Ecological Monitoring and Compliance Program 2011 Report

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, D. J.; Anderson, D. C.; Hall, D. B.; Greger, P. D.; Ostler, W. K.

    2012-06-13

    The Ecological Monitoring and Compliance (EMAC) Program, funded through the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office, monitors the ecosystem of the Nevada National Security Site and ensures compliance with laws and regulations pertaining to NNSS biota. This report summarizes the program's activities conducted by National Security Technologies, LLC, during calendar year 2011. Program activities included (a) biological surveys at proposed construction sites, (b) desert tortoise compliance, (c) ecosystem monitoring, (d) sensitive plant species monitoring, (e) sensitive and protected/regulated animal monitoring, (f) habitat restoration monitoring, and (g) monitoring of the Nonproliferation Test and Evaluation Complex. During 2011, all applicable laws, regulations, and permit requirements were met, enabling EMAC to achieve its intended goals and objectives.

  4. Consolidation of cloud computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Cordeiro, Cristovao; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall; Giordano, Domenico

    2017-01-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in resp...

  5. Consolidation of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Cordeiro, Cristovao; Di Girolamo, Alessandro; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall

    2016-01-01

    Throughout the first year of LHC Run 2, ATLAS Cloud Computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS Cloud Computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vac resources, streamlined usage of the High Level Trigger cloud for simulation and reconstruction, extreme scaling on Amazon EC2, and procurement of commercial cloud capacity in Europe. Building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems. ...

  6. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    Energy Technology Data Exchange (ETDEWEB)

    Vandelli, Wainer, E-mail: wainer.vandelli@cern.c

    2010-04-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to the mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to efficiently transport event data with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. In addition, routing and streaming capabilities as well as monitoring and data accounting functionalities are fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for the evaluation of the performance of the ATLAS DataFlow, in terms of functionality, robustness and stability. Moreover, operating the system far from its design specifications helped in exercising its flexibility and contributed to understanding its limitations. The integration with the detector and the interfacing with the off-line data processing and management have also been able to take advantage of this extended data-taking period. In this paper we report on the usage of the DataFlow infrastructure during ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  7. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    International Nuclear Information System (INIS)

    Vandelli, Wainer

    2010-01-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to the mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to efficiently transport event data with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. In addition, routing and streaming capabilities as well as monitoring and data accounting functionalities are fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for the evaluation of the performance of the ATLAS DataFlow, in terms of functionality, robustness and stability. Moreover, operating the system far from its design specifications helped in exercising its flexibility and contributed to understanding its limitations. The integration with the detector and the interfacing with the off-line data processing and management have also been able to take advantage of this extended data-taking period. In this paper we report on the usage of the DataFlow infrastructure during ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  8. Report to users of ATLAS, December 1995

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1995-12-01

    This report covers the following: status of the ATLAS accelerator; highlights of recent research at ATLAS; a research-related concept for an Advanced Exotic Beam Facility on ATLAS; program advisory committee; and ATLAS user group executive committee. Research highlights are given for the following: APEX progress report; transport efficiency of the Argonne Fragment Mass Analyzer; collective motion in light polonium isotopes; angular correlation measurements for ¹²C(g.s.) + ¹²C(3⁻, 9.64 MeV) inelastic scattering; and the AYE-ball (Argonne-Yale-European gamma spectrometer) used to study the structure of nuclei far from stability

  9. The forward Detectors of the ATLAS experiment

    CERN Document Server

    Vittori, Camilla; The ATLAS collaboration

    2017-01-01

    In this poster, a review of the ATLAS forward detectors operating in the 2015-2016 data taking is given. This includes a description of LUCID, the preferred ATLAS luminosity provider; of the ALFA detector, aimed to measure elastically scattered protons at small angle for the total proton-proton cross section measurement; of the ATLAS Forward Proton project AFP, which was partially installed and took the first data in 2015, and of the Zero Degree Calorimeter ZDC built for the ATLAS Heavy Ions physics program. The near future plans for these detectors will also be addressed.

  10. Terra Nova Environmental effects monitoring program

    International Nuclear Information System (INIS)

    Williams, U.; Murdoch, M.

    2000-01-01

    Elements of the environmental effects monitoring program in the Terra Nova oil field, about 350 km east-southeast of St. John's, Newfoundland, are described. This oilfield is being developed using a floating production storage and offloading (FPSO) facility. A total of 24 wells are expected to be drilled through seven subsea templates located in four glory holes to protect them from icebergs. Subsea installations will be linked to the FPSO by trenched flowlines connected to flexible risers. The FPSO will offload to shuttle tankers. First oil is expected in 2001. The environmental effects monitoring program will be conducted annually for the first two years beginning in 2000. Subsequent scheduling will be determined after a review of monitoring data collected during the first three years. Input to the design of the monitoring program was provided by all stakeholders, i. e. owners, local public, government agencies and regional and international experts. A model was developed linking project discharges and possible effects to the environment, including marine resources in the area, and the information derived from these activities was used to generate a set of predictions and hypotheses to be tested in the monitoring program. The monitoring program will use two spatial models: a regression or gradient design and a control-impact design. The gradient design will monitor water column and sediment chemistry, sediment toxicity and benthic invertebrate communities. The control-impact design will be used to monitor larger and more mobile fish or shellfish. The evaluated results will serve as the basis for determining impact predictions and to provide information to allow for decisions pertaining to the protection of the marine environment

  11. Non-collision backgrounds in ATLAS

    CERN Document Server

    Gibson, S M; The ATLAS collaboration

    2012-01-01

    The proton-proton collision events recorded by the ATLAS experiment sit on top of a background that is due to both collision debris and non-collision components. The latter comprises three types: beam-induced backgrounds, cosmic particles and detector noise. We present studies that focus on the first two of these. We give a detailed description of beam-related and cosmic backgrounds based on the full 2011 ATLAS data set, and present their rates throughout the whole data-taking period. Studies of correlations between tertiary proton halo and muon backgrounds, as well as residual pressure and the resulting beam-gas events seen in beam-condition monitors, will be presented. Results of simulations based on the LHC geometry and its parameters will be presented; they help to better understand the features of beam-induced backgrounds in each ATLAS sub-detector. The studies of beam-induced backgrounds in ATLAS reveal their characteristics and serve as a basis for designing rejection tools that can be applied in physic...

  12. Rucio WebUI - The Web Interface for the ATLAS Distributed Data Management

    CERN Document Server

    Beermann, Thomas; The ATLAS collaboration; Barisits, Martin-Stefan; Serfon, Cedric; Garonne, Vincent

    2016-01-01

    With the current distributed data management system for ATLAS, called Rucio, all user interactions, e.g. the Rucio command line tools or the ATLAS workload management system, communicate with Rucio through the same REST API. This common interface makes it possible to interact with Rucio from many different programming languages, including JavaScript. Using common web application frameworks like JQuery and web.py, a web application for Rucio was built. The main component is R2D2 - the Rucio Rule Definition Droid - which gives users a simple way to manage their data on the grid. They can search for particular datasets, get details about their metadata and available replicas, and easily create rules to make new replicas and delete them when they are no longer needed. On the other hand, it is possible for site admins to restrict transfers to their site by setting quotas and manually approving transfers. Besides R2D2, additional features include transfer backlog monitoring for shifters, group space monitoring for gr...

  13. Ground-Water Protection and Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    Dresel, P.E.

    1995-06-01

    This section of the 1994 Hanford Site Environmental Report summarizes the ground-water protection and monitoring program strategy for the Hanford Site in 1994. Two of the key elements of this strategy are to (1) protect the unconfined aquifer from further contamination, and (2) conduct a monitoring program to provide early warning when contamination of ground water does occur. The monitoring program at Hanford is designed to document the distribution and movement of existing ground-water contamination and provides a historical baseline for evaluating current and future risk from exposure to the contamination and for deciding on remedial action options.

  14. Ground-Water Protection and Monitoring Program

    International Nuclear Information System (INIS)

    Dresel, P.E.

    1995-01-01

    This section of the 1994 Hanford Site Environmental Report summarizes the ground-water protection and monitoring program strategy for the Hanford Site in 1994. Two of the key elements of this strategy are to (1) protect the unconfined aquifer from further contamination, and (2) conduct a monitoring program to provide early warning when contamination of ground water does occur. The monitoring program at Hanford is designed to document the distribution and movement of existing ground-water contamination and provides a historical baseline for evaluating current and future risk from exposure to the contamination and for deciding on remedial action options

  15. Monitoring and controlling ATLAS data management: The Rucio web user interface

    Science.gov (United States)

    Lassnig, M.; Beermann, T.; Vigne, R.; Barisits, M.; Garonne, V.; Serfon, C.

    2015-12-01

    The monitoring and controlling interfaces of the previous data management system, DQ2, followed the evolutionary requirements and needs of the ATLAS collaboration. The new data management system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2 and the increased volume of managed information. This interface encompasses both a monitoring and a controlling component, and allows easy integration of user-generated views. The interface follows three design principles. First, the collection and storage of data from internal and external systems is asynchronous to reduce latency. This includes the use of technologies like ActiveMQ or Nagios. Second, analysis of the data into information is done in a massively parallel fashion due to its volume, using a combined approach with an Oracle database and Hadoop MapReduce. Third, sharing of the information does not distinguish between human and programmatic access, making it easy to access selective parts of the information both in constrained front-ends like web browsers and in remote services. This contribution will detail the reasons for these principles and the design choices taken. Additionally, the implementation, the interactions with external systems, and an evaluation of the system in production, both from a technological and a user perspective, conclude this contribution.
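
    The massively parallel analysis step can be illustrated with a MapReduce-style aggregation (a local, simplified sketch; the production system runs this kind of reduction on Hadoop against Oracle-hosted data, and the event fields below are invented):

      # Simplified MapReduce-style aggregation of transfer events: count bytes
      # transferred per (source, destination) pair. Event fields are invented.

      from collections import Counter
      from functools import reduce

      events = [
          {"src": "CERN-PROD", "dst": "BNL-ATLAS", "bytes": 4_000_000},
          {"src": "CERN-PROD", "dst": "BNL-ATLAS", "bytes": 1_500_000},
          {"src": "FZK-LCG2",  "dst": "CERN-PROD", "bytes": 9_000_000},
      ]

      def map_event(e):                  # map: one key/value pair per event
          return Counter({(e["src"], e["dst"]): e["bytes"]})

      def reduce_counts(a, b):           # reduce: merge partial sums
          return a + b

      totals = reduce(reduce_counts, map(map_event, events), Counter())
      print(dict(totals))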

  16. The Read-Out Driver for the ATLAS MDT Muon Precision Chambers

    CERN Document Server

    Boterenbrood, H; Kieft, G; König, A; Vermeulen, J C; Wijnen, T A M; 14th IEEE - NPSS Real Time Conference 2005 Nuclear Plasma Sciences Society

    2006-01-01

    Some 200 MDT Read Out Drivers (MRODs) will be built to read out the 1200 MDT precision chambers of the muon spectrometer of the ATLAS experiment at the LHC. The MRODs receive event data via optical links (one per chamber, up to 8 per MROD), build event fragments at a maximum rate of 100 kHz, output these to the ATLAS data-acquisition system and take care of monitoring and error checking, handling and flagging. The design of the MROD-1 prototype (a 9U VME64 module in which this functionality is implemented using FPGAs and ADSP-21160 Digital Signal Processors programmed in C++) is described, followed by a presentation of results of performance measurements. Then the implications for the production version (called MROD-X) and the experience with pre-production modules of the MROD-X are discussed.

  17. Recent Results from the ATLAS UPC Program

    CERN Document Server

    Cole, Brian; The ATLAS collaboration

    2018-01-01

    Recent results from ATLAS measurements of ultra-peripheral Pb+Pb collisions are presented. Measurements include gamma+gamma -> dimuon, photo-nuclear production of di/multi-jets, and light-by-light scattering.

  18. Consolidation of cloud computing in ATLAS

    Science.gov (United States)

    Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration

    2017-10-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.

  19. Streamlined Calibrations of the ATLAS Precision Muon Chambers for Initial LHC Running

    CERN Document Server

    Amram, N; Benhammou, Y; Moshe, M Ben; Dai, T; Diehl, E B; Dubbert, J; Etzion, E; Ferretti, C; Gregory, J; Haider, S; Hindes, J; Levin, D S; Thun, R; Wilson, A; Weaverdyck, C; Wu, Y; Yang, H; Zhou, B; Zimmermann, S

    2012-01-01

    The ATLAS Muon Spectrometer is designed to measure the momentum of muons with a resolution of dp/p = 3% and 10% at 100 GeV and 1 TeV momentum respectively. For this task, the spectrometer employs 355,000 Monitored Drift Tubes (MDTs) arrayed in 1200 Chambers. Calibration (RT) functions convert drift time measurements into tube-centered impact parameters for track segment reconstruction. RT functions depend on MDT environmental parameters and so must be appropriately calibrated for local chamber conditions. We report on the creation and application of a gas monitor system based calibration program for muon track reconstruction in the LHC startup phase.
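
    As a schematic illustration of what an RT (drift-time-to-radius) calibration does, the sketch below interpolates a tabulated r(t) relation to convert measured drift times into drift radii. The calibration points are invented for the example and do not correspond to a real MDT calibration.

      # Schematic illustration of applying an RT calibration: tabulated (t, r)
      # points are interpolated to turn measured drift times into drift radii.
      # The calibration values below are invented, not a real MDT calibration.
      import numpy as np

      t_table = np.array([0.0, 100.0, 300.0, 500.0, 700.0])   # drift time in ns
      r_table = np.array([0.0, 2.0, 7.0, 11.0, 14.6])         # drift radius in mm

      def drift_radius(drift_time_ns):
          """Convert drift times to radii by linear interpolation of the RT table."""
          return np.interp(drift_time_ns, t_table, r_table)

      measured_times = np.array([50.0, 250.0, 650.0])          # ns
      print(drift_radius(measured_times))                      # [ 1.    5.75 13.7 ]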

  20. ATLAS Trigger Monitoring and Operation in Proton Proton Collisions at 900 GeV

    CERN Document Server

    zur Nedden, M; The ATLAS collaboration

    2010-01-01

    The trigger of the ATLAS experiment is built as a three-level system. The first level is realized in hardware, while the higher levels (HLT) are purely software-implemented triggers running on large PC farms. Given the LHC bunch crossing frequency of 40 MHz and the expectation of up to 23 interactions per bunch crossing at design luminosity, the trigger system must be able to cope with an interaction rate of about 1 GHz, whereas the maximum storage rate is 200 Hz. This complex data acquisition and trigger system requires a reliable and redundant diagnostic and monitoring system, which is indispensable for a successful commissioning and stable running of the whole experiment. The main aspects of trigger monitoring are the rate measurements at each step of the trigger decision at each level, the determination of the quality of the physics object candidates selected at trigger level (candidates for electrons, muons, taus, photons, jets, b-jets and missing energy), and the supervision of the system's behavior during the...

  1. Taus at ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Demers, Sarah M. [Yale Univ., New Haven, CT (United States). Dept. of Physics

    2017-12-06

    The grant "Taus at ATLAS" supported the group of Sarah Demers at Yale University over a period of 8.5 months, bridging the time between her Early Career Award and her inclusion on Yale's grant cycle within the Department of Energy's Office of Science. The work supported the functioning of the ATLAS Experiment at CERN's Large Hadron Collider and the analysis of ATLAS data. The work included searching for the Higgs Boson in a particular mode of its production (with a W or Z boson) and decay (to a pair of tau leptons.) This was part of a broad program of characterizing the Higgs boson as we try to understand this recently discovered particle, and whether or not it matches our expectations within the current standard model of particle physics. In addition, group members worked with simulation to understand the physics reach of planned upgrades to the ATLAS experiment. Supported group members include postdoctoral researcher Lotte Thomsen and graduate student Mariel Pettee.

  2. The ATLAS detector simulation application

    International Nuclear Information System (INIS)

    Rimoldi, A.

    2007-01-01

    The simulation program for the ATLAS experiment at CERN is currently in a full operational mode and integrated into the ATLAS common analysis framework, Athena. The OO approach, based on GEANT4, has been interfaced within Athena and to GEANT4 using the LCG dictionaries and Python scripting. The robustness of the application was proved during the test productions since 2004. The Python interface has added the flexibility, modularity and interactivity that the simulation tool requires in order to be able to provide a common implementation of different full ATLAS simulation setups, test beams and cosmic ray applications. Generation, simulation and digitization steps were exercised for performance and robustness tests. The comparison with real data has been possible in the context of the ATLAS Combined Test Beam (2004-2005) and cosmic ray studies (2006)

  3. The Error Reporting in the ATLAS TDAQ System

    Science.gov (United States)

    Kolos, Serguei; Kazarov, Andrei; Papaevgeniou, Lykourgos

    2015-05-01

    The ATLAS Error Reporting provides a service that allows experts and the shift crew to track and address errors relating to the data-taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about run-time errors to a place where they can be intercepted in real time by any other system component. Other ATLAS online control and monitoring tools use the ERS as one of their main inputs to address system problems in a timely manner and to improve the quality of the acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When an application sends information to ERS, depending on the configuration, it may end up in a local file, a database, or distributed middleware which can transport it to an expert system or display it to users. Thanks to the open framework design of ERS, new information destinations can be added at any moment without touching the reporting and receiving applications. The ERS Application Program Interface (API) is provided in the three programming languages used in the ATLAS online environment: C++, Java and Python. All APIs use exceptions for error reporting, but each of them exploits advanced features of the given language to simplify end-user program writing. For example, since C++ lacks built-in support for generating such class hierarchies automatically, a number of macros have been designed to generate hierarchies of C++ exception classes at compile time. Using this approach a software developer can write a single line of code to generate the boilerplate for a fully qualified C++ exception class declaration with an arbitrary number of parameters and multiple constructors, which encapsulates all relevant static information about the given type of issue. When a corresponding error occurs at run time, the program just needs to create an instance of that class, passing relevant values to one...
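
    As a rough analogue of the macro-generated exception hierarchies described above, the hypothetical Python sketch below (not the actual ERS API) builds a small tree of issue classes programmatically and shows how a run-time error is raised with its context attached and caught via a base class.

      # Hypothetical sketch of generating a hierarchy of "issue" classes at
      # run time, loosely analogous to the compile-time macros described for
      # the C++ ERS API. Illustrative only; not the real ERS interface.
      class Issue(Exception):
          """Base issue carrying a message and arbitrary context parameters."""
          def __init__(self, message, **context):
              super().__init__(message)
              self.context = context

      def declare_issue(name, base=Issue, doc=""):
          """Create a new issue class with the given name and base class."""
          return type(name, (base,), {"__doc__": doc})

      # Build a small hierarchy: DataFlowIssue -> CorruptedFragment
      DataFlowIssue = declare_issue("DataFlowIssue", doc="Problems in the data flow")
      CorruptedFragment = declare_issue("CorruptedFragment", base=DataFlowIssue,
                                        doc="An event fragment failed its checks")

      def read_fragment(source_id):
          # A real application would read data here; we just fail for the demo.
          raise CorruptedFragment("bad checksum", source=source_id, run=1234)

      try:
          read_fragment(source_id=42)
      except DataFlowIssue as issue:      # caught via the base class
          print(type(issue).__name__, issue, issue.context)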

  4. The ATLAS online High Level Trigger framework experience reusing offline software components in the ATLAS trigger

    CERN Document Server

    Wiedenmann, W

    2009-01-01

    Event selection in the Atlas High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data flow environment. The Atlas High Level Trigger (HLT) framework based on the Gaudi and Atlas Athena frameworks, forms the interface layer, which allows the execution of the HLT selection and monitoring code within the online run control and data flow software. While such an approach provides a unified environment for trigger event selection across all of Atlas, it also poses strict requirements on the reused software components in terms of performance, memory usage and stability. Experience of running the HLT selection software in the different environments and especially on large multi-node trigger farms has been gained in several commissioning periods using preloaded Monte Carlo events, in data taking peri...

  5. Control Logic for the Interlock system of the ATLAS Insertable B-Layer

    CERN Document Server

    Riegel, Christian

    Part of the first upgrade program of the ATLAS detector is the installation of the Insertable B-Layer (IBL) as a fourth and innermost layer of the ATLAS pixel detector, to prepare the tracking system for the expected increase in pile-up events. As with every sub-detector, the IBL and its components have to be monitored and controlled via a Detector Control System (DCS). A hardware-based interlock system is installed on-site to protect the detector and the people working at it from serious harm and damage. For the IBL, the logical processing of interlock signals is realised in Interlock Matrix Crates (IMCs) using Complex Programmable Logic Devices (CPLDs). One part of this master's thesis is the automatic implementation of the logical assignments from database information: a script was developed to generate the file needed to program the CPLD. The second part of the thesis is the design of a test setup to verify the functionality of the electrical components of each IMC and to confirm the correct proce...

  6. Ecological Monitoring and Compliance Program 2008 Report

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Dennis J.; Anderson, David C.; Hall, Derek B.; Greger, Paul D.; Ostler, W. Kent

    2009-04-30

    The Ecological Monitoring and Compliance Program, funded through the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office (NNSA/NSO), monitors the ecosystem of the Nevada Test Site (NTS) and ensures compliance with laws and regulations pertaining to NTS biota. This report summarizes the program’s activities conducted by National Security Technologies, LLC (NSTec), during calendar year 2008. Program activities included (a) biological surveys at proposed construction sites, (b) desert tortoise compliance, (c) ecosystem mapping and data management, (d) sensitive plant species monitoring, (e) sensitive and protected/regulated animal monitoring, (f) habitat monitoring, (g) habitat restoration monitoring, and (h) monitoring of the Nonproliferation Test and Evaluation Complex (NPTEC).

  7. ATLAS EventIndex General Dataflow and Monitoring Infrastructure

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration

    2016-01-01

    The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing it in a central Hadoop infrastructure at CERN. A subset of this information is copied to an Oracle relational database for fast dataset discovery, event picking, cross-checks with other ATLAS systems and checks for event duplication. The system design and its optimization serve event picking from requests of a few events up to scales of tens of thousands of events, and in addition, data consistency checks are performed for large production campaigns. Detecting duplicate events within the scope of physics collections has recently arisen as an important use case. This paper describes the general architecture of the project and the data flow and operation issues, which are addressed by recent developments to improve the throughput of the overall system. In this direction, the data collection system is reducing the usage of the messaging infrastructure to overcome t...
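
    The duplicate-detection use case can be illustrated with a toy example: group indexed event records by (run number, event number) and report any pair that appears more than once. This is only a simplified sketch with made-up records, not the EventIndex implementation, which performs the check over Hadoop at far larger scale.

      # Simplified sketch of duplicate-event detection: flag (run, event) pairs
      # that occur more than once within a collection of indexed records.
      # Records are invented; the real EventIndex does this over Hadoop.
      from collections import defaultdict

      records = [
          {"run": 300800, "event": 101, "dataset": "data16.A"},
          {"run": 300800, "event": 102, "dataset": "data16.A"},
          {"run": 300800, "event": 101, "dataset": "data16.B"},  # duplicate of the first
      ]

      by_id = defaultdict(list)
      for rec in records:
          by_id[(rec["run"], rec["event"])].append(rec["dataset"])

      duplicates = {key: ds for key, ds in by_id.items() if len(ds) > 1}
      for (run, event), datasets in duplicates.items():
          print(f"run {run}, event {event} appears in: {', '.join(datasets)}")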

  8. ATLAS EventIndex general dataflow and monitoring infrastructure

    CERN Document Server

    AUTHOR|(SzGeCERN)638886; The ATLAS collaboration; Barberis, Dario; Favareto, Andrea; Garcia Montoro, Carlos; Gonzalez de la Hoz, Santiago; Hrivnac, Julius; Prokoshin, Fedor; Salt, Jose; Sanchez, Javier; Toebbicke, Rainer; Yuan, Ruijun

    2017-01-01

    The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing it in a central Hadoop infrastructure at CERN. A subset of this information is copied to an Oracle relational database for fast dataset discovery, event picking, cross-checks with other ATLAS systems and checks for event duplication. The system design and its optimization serve event picking from requests of a few events up to scales of tens of thousands of events, and in addition, data consistency checks are performed for large production campaigns. Detecting duplicate events within the scope of physics collections has recently arisen as an important use case. This paper describes the general architecture of the project and the data flow and operation issues, which are addressed by recent developments to improve the throughput of the overall system. In this direction, the data collection system is reducing the usage of the messaging infrastructure to overcome th...

  9. Advances in service and operations for ATLAS data management

    International Nuclear Information System (INIS)

    Stewart, Graeme A; Garonne, Vincent; Lassnig, Mario; Molfetas, Angelos; Barisits, Martin; Calvet, Ivan; Beermann, Thomas; Megino, Fernando Barreiro; Campana, Simone; Zhang, Donal; Tykhonov, Andrii; Serfon, Cedric; Oleynik, Danila; Petrosyan, Artem

    2012-01-01

    ATLAS has recorded almost 5PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 70PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All of this data is managed by the ATLAS Distributed Data Management system, called Don Quixote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations manage these large quantities of data across the many grid sites at which ATLAS runs and to help ATLAS physicists get access to this data. In this paper we describe new and improved DQ2 services: popularity; space monitoring and accounting; exclusion service; cleaning agents; deletion agents. We describe the experience of data management operation in ATLAS computing, showing how these services enable management of petabyte-scale computing operations. We illustrate the coupling of data management services to other parts of the ATLAS computing infrastructure, in particular showing how feedback from the distributed analysis system in ATLAS has enabled dynamic placement of the most popular data, helping users and groups to analyse the increasing data volumes on the grid.

  10. Advances in service and operations for ATLAS data management

    Science.gov (United States)

    Stewart, Graeme A.; Garonne, Vincent; Lassnig, Mario; Molfetas, Angelos; Barisits, Martin; Zhang, Donal; Calvet, Ivan; Beermann, Thomas; Barreiro Megino, Fernando; Tykhonov, Andrii; Campana, Simone; Serfon, Cedric; Oleynik, Danila; Petrosyan, Artem; ATLAS Collaboration

    2012-06-01

    ATLAS has recorded almost 5PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 70PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All of this data is managed by the ATLAS Distributed Data Management system, called Don Quixote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations manage these large quantities of data across the many grid sites at which ATLAS runs and to help ATLAS physicists get access to this data. In this paper we describe new and improved DQ2 services: popularity; space monitoring and accounting; exclusion service; cleaning agents; deletion agents. We describe the experience of data management operation in ATLAS computing, showing how these services enable management of petabyte-scale computing operations. We illustrate the coupling of data management services to other parts of the ATLAS computing infrastructure, in particular showing how feedback from the distributed analysis system in ATLAS has enabled dynamic placement of the most popular data, helping users and groups to analyse the increasing data volumes on the grid.

  11. Event visualization in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00211497; The ATLAS collaboration; Boudreau, Joseph; Konstantinidis, Nikolaos; Martyniuk, Alex; Moyse, Edward; Thomas, Juergen; Waugh, Ben; Yallup, David

    2017-01-01

    In the early days, HEP experiments made use of photographic images both to record and store experimental data and to illustrate their findings. As the experiments evolved, they needed new ways to visualize their data. With the availability of computer graphics, software packages to display event data and the detector geometry started to be developed. Here, an overview of the usage of event display tools in HEP is presented. The case of the ATLAS experiment is then considered in more detail and two widely used event display packages are presented, Atlantis and VP1, focusing on the software technologies they employ, as well as their strengths, differences and their usage in the experiment: from physics analysis to detector development, and from online monitoring to outreach and communication. Towards the end, the other ATLAS visualization tools are briefly presented as well. Future development plans and improvements in the ATLAS event display packages are also discussed.

  12. Luminosity Measurements with the ATLAS Detector

    CERN Document Server

    Maettig, Stefan; Pauly, T

    For almost all measurements performed at the Large Hadron Collider (LHC), one crucial ingredient is precise knowledge of the integrated luminosity. The determination of, and precision on, the integrated luminosity has direct implications for any cross-section measurement, and its instantaneous measurement gives important feedback on the conditions at the experimental insertions and on the accelerator performance. ATLAS is one of the main experiments at the LHC. In order to provide an accurate and reliable luminosity determination, ATLAS uses a variety of different sub-detectors and algorithms that measure the luminosity simultaneously. One of these sub-detectors is the Beam Condition Monitor (BCM), which was designed to protect the ATLAS detector from potentially dangerous beam losses. Due to its fast readout and very clean signals, this diamond detector has in addition been providing the official ATLAS luminosity since May 2011. This thesis describes the calibration and performance of the BCM as a luminosity detec...

  13. The Herschel ATLAS

    Science.gov (United States)

    Eales, S.; Dunne, L.; Clements, D.; Cooray, A.; De Zotti, G.; Dye, S.; Ivison, R.; Jarvis, M.; Lagache, G.; Maddox, S.; hide

    2010-01-01

    The Herschel ATLAS is the largest open-time key project that will be carried out on the Herschel Space Observatory. It will survey 570 sq deg of the extragalactic sky, 4 times larger than all the other Herschel extragalactic surveys combined, in five far-infrared and submillimeter bands. We describe the survey, the complementary multiwavelength data sets that will be combined with the Herschel data, and the six major science programs we are undertaking. Using new models based on a previous submillimeter survey of galaxies, we present predictions of the properties of the ATLAS sources in other wave bands.

  14. A Web-based Solution to Visualize Operational Monitoring Data in the Trigger and Data Acquisition System of the ATLAS Experiment at the LHC

    CERN Document Server

    Avolio, Giuseppe; The ATLAS collaboration; Lehmann Miotto, Giovanna; Soloviev, Igor

    2016-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components (about 3000 machines and more than 25000 applications) which, in a coordinated manner, provide the data-taking functionality of the overall system. During data taking runs, a huge flow of operational data is produced in order to constantly monitor the system and allow proper detection of anomalies or misbehaviors. In the ATLAS TDAQ system, operational data are archived and made available to applications by the P-Beast (Persistent Back-End for the Atlas Information System of TDAQ) service, implementing a custom time-series database. The possibility to efficiently visualize both real-time and historical operational data is a great asset for the online identification of problems and for any post-mortem analysis. This paper will present a web-based solution developed to achieve such a goal: the solution leverages the flexibili...
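
    To give a flavour of the kind of service such a web-based solution sits on top of, the sketch below exposes a small in-memory time series over HTTP as JSON, downsampled to a requested number of points. It is a purely hypothetical stand-in, not the P-Beast interface.

      # Purely hypothetical stand-in for a time-series query endpoint (not the
      # P-Beast API): serve an in-memory series as JSON, downsampled on request.
      import json
      from http.server import BaseHTTPRequestHandler, HTTPServer
      from urllib.parse import urlparse, parse_qs

      # (timestamp, value) pairs; in reality these would come from an archive.
      SERIES = [(i, float(i % 60)) for i in range(0, 3600, 10)]

      def downsample(series, max_points):
          step = max(1, len(series) // max_points)
          return series[::step]

      class Handler(BaseHTTPRequestHandler):
          def do_GET(self):
              query = parse_qs(urlparse(self.path).query)
              max_points = int(query.get("points", ["100"])[0])
              body = json.dumps(downsample(SERIES, max_points)).encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/json")
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          # e.g. curl 'http://localhost:8000/series?points=50'
          HTTPServer(("localhost", 8000), Handler).serve_forever()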

  15. A web-based solution to visualize operational monitoring data in the Trigger and Data Acquisition system of the ATLAS experiment at the LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00210941; The ATLAS collaboration; D'Ascanio, Matteo; Lehmann-Miotto, Giovanna; Soloviev, Igor

    2017-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider at CERN is composed of a large number of distributed hardware and software components (about 3000 computers and more than 25000 applications) which, in a coordinated manner, provide the data-taking functionality of the overall system. During data taking runs, a huge flow of operational data is produced in order to constantly monitor the system and allow proper detection of anomalies or misbehaviours. In the ATLAS trigger and data acquisition system, operational data are archived and made available to applications by the P-BEAST (Persistent Back-End for the Atlas Information System of TDAQ) service, implementing a custom time-series database. The possibility to efficiently visualize both realtime and historical operational data is a great asset facilitating both online identification of problems and post-mortem analysis. This paper will present a web-based solution developed to achieve such a goal: the solution le...

  16. Yucca Mountain Biological resources monitoring program

    International Nuclear Information System (INIS)

    1991-01-01

    The US Department of Energy (US DOE) is required by the Nuclear Waste Policy Act of 1982 (as amended in 1987) to study and characterize Yucca Mountain as a possible site for a geological repository for high-level radioactive waste. To ensure site characterization activities do not adversely affect the Yucca Mountain area, an environmental program, the Yucca Mountain Biological Resources Monitoring Program, has been implemented to monitor and mitigate environmental impacts and to ensure activities comply with applicable environmental laws. Potential impacts to vegetation, small mammals, and the desert tortoise (an indigenous threatened species) are addressed, as are habitat reclamation, radiological monitoring, and compilation of baseline data. This report describes the program in Fiscal Years 1989 and 1990. 12 refs., 4 figs., 17 tabs

  17. A new program for particle physics: ATLAS in CERN

    International Nuclear Information System (INIS)

    Hubaut, F.

    2004-01-01

    The LHC (Large Hadron Collider) is being built at CERN and will enter service in 2007. The LHC is a proton collider: the 2 proton beams, moving in opposite directions along a 27-km-long ring, will collide at 4 points, and the maximum energy reached will be 14 TeV (in the centre-of-mass frame). 4 huge detectors (ATLAS, CMS, LHC-B and ALICE) are being designed by large international collaborations, one at each collision point. ATLAS and CMS are general-purpose detectors, while LHC-B is dedicated to the physics of b-hadrons and ALICE will deal with heavy ions. The LHC is expected to produce 40 million collisions every second, and each collision will generate thousands of particles, so the huge amount of data generated requires an efficient and reliable data acquisition system. The article also describes the different parts of the ATLAS detector: the tracking detector, the calorimeters, the muon spectrometer and the superconducting central solenoid. (A.C.)

  18. Yucca Mountain Biological Resources Monitoring Program

    International Nuclear Information System (INIS)

    1992-01-01

    The US Department of Energy (DOE) is required by the Nuclear Waste Policy Act of 1982 (as amended in 1987) to study and characterize Yucca Mountain as a possible site for a geologic repository for high-level nuclear waste. During site characterization, the DOE will conduct a variety of geotechnical, geochemical, geological, and hydrological studies to determine the suitability of Yucca Mountain as a repository. To ensure that site characterization activities (SCA) do not adversely affect the Yucca Mountain area, an environmental program has been implemented to monitor and mitigate potential impacts and to ensure that activities comply with applicable environmental regulations. This report describes the activities and accomplishments during fiscal year 1991 (FY91) for six program areas within the Terrestrial Ecosystem component of the YMP environmental program. The six program areas are Site Characterization Activities Effects, Desert Tortoises, Habitat Reclamation, Monitoring and Mitigation, Radiological Monitoring, and Biological Support

  19. ATLAS Tile Calorimeter Readout Electronics Upgrade Program for the High Luminosity LHC

    CERN Document Server

    Cerqueira, A S; The ATLAS collaboration

    2013-01-01

    The Tile Calorimeter (TileCal) is the hadronic calorimeter covering the most central region of the ATLAS experiment at the LHC. The TileCal readout consists of about 10,000 channels. The ATLAS upgrade program is divided into three phases: Phase 0 occurs during 2013-2014 and prepares the LHC to reach peak luminosities of 10^34 cm^-2 s^-1; Phase 1, foreseen for 2018-2019, prepares the LHC for peak luminosities of up to 2-3 x 10^34 cm^-2 s^-1, corresponding to 55 to 80 interactions per bunch crossing with a 25 ns bunch interval; and Phase 2 is foreseen for 2022-2023, after which the peak luminosity will reach 5-7 x 10^34 cm^-2 s^-1 (HL-LHC). With luminosity leveling, the average luminosity will increase by a factor of 10. The main TileCal upgrade is focused on the HL-LHC period. The upgrade aims at replacing the majority of the on- and off-detector electronics so that all calorimeter signals are directly digitized and sent to the off-detector electronics in the counting room. All new electronics must be able to cope with the increased rad...

  20. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    Science.gov (United States)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  1. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    International Nuclear Information System (INIS)

    Campana, S

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R and D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  2. Ecological Monitoring and Compliance Program 2010 Report

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, D.J.; Anderson, D.C.; Hall, D.B.; Greger, P.D.; Ostler, W.K.

    2011-07-01

    The Ecological Monitoring and Compliance (EMAC) Program, funded through the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office (NNSA/NSO), monitors the ecosystem of the Nevada National Security Site (NNSS) and ensures compliance with laws and regulations pertaining to NNSS biota. This report summarizes the program’s activities conducted by National Security Technologies, LLC (NSTec), during calendar year 2010. Program activities included (a) biological surveys at proposed construction sites, (b) desert tortoise compliance, (c) ecosystem monitoring, (d) sensitive plant species monitoring, (e) sensitive and protected/regulated animal monitoring, (f) habitat restoration monitoring, and (g) monitoring of the Nonproliferation Test and Evaluation Complex (NPTEC). During 2010, all applicable laws, regulations, and permit requirements were met, enabling EMAC to achieve its intended goals and objectives.

  3. Ecological Monitoring and Compliance Program 2009 Report

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, J. Dennis; Anderson, David C.; Hall, Derek B.; Greger, Paul D.; Ostler, W. Kent

    2010-07-13

    The Ecological Monitoring and Compliance Program (EMAC), funded through the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office, monitors the ecosystem of the Nevada Test Site and ensures compliance with laws and regulations pertaining to NTS biota. This report summarizes the program’s activities conducted by National Security Technologies, LLC, during calendar year 2009. Program activities included (a) biological surveys at proposed construction sites, (b) desert tortoise compliance, (c) ecosystem mapping and data management, (d) sensitive plant species monitoring, (e) sensitive and protected/regulated animal monitoring, (f) habitat monitoring, (g) habitat restoration monitoring, and (h) monitoring of the Nonproliferation Test and Evaluation Complex. During 2009, all applicable laws, regulations, and permit requirements were met, enabling EMAC to achieve its intended goals and objectives.

  4. Diagnostic Systems and Resources utilization of the ATLAS High Level Trigger

    CERN Document Server

    Sidoti, A; The ATLAS collaboration; Ospanov, R

    2010-01-01

    Since the LHC started colliding protons in December 2009, the ATLAS trigger has operated very successfully with a collision rate which has increased by several orders of magnitude. The trigger monitoring and data quality infrastructure was essential to this success. We describe the software tools used to monitor the trigger system performance and to assess the overall quality of the trigger selection during collision running. ATLAS has broad physics goals that require a large number of different active triggers for complex event topologies, which in turn require quite sophisticated software structures and concepts. The trigger of the ATLAS experiment is built as a three-level system. The first level is realized in hardware, while the high-level triggers (HLT) are software based and run on large PC farms. The trigger reduces the design bunch crossing rate of 40 MHz to an average event rate of about 200 Hz for storage. Since the ATLAS detector is a general-purpose detector, the trigger must be sensitive to a large numb...

  5. ATLAS Tier-3 within IFIC-Valencia analysis facility

    CERN Document Server

    Villaplana, M; The ATLAS collaboration; Fernández, A; Salt, J; Lamas, A; Fassi, F; Kaci, M; Oliver, E; Sánchez, J; Sánchez-Martínez, V

    2012-01-01

    The ATLAS Tier-3 at IFIC-Valencia is attached to a Tier-2 that has 50% of the Spanish Federated Tier-2 resources. In its design, the Tier-3 includes a GRID-aware part that shares some of the features of IFIC Tier-2 such as using Lustre as a file system. ATLAS users, 70% of IFIC users, also have the possibility of analysing data with a PROOF farm and storing them locally. In this contribution we discuss the design of the analysis facility as well as the monitoring tools we use to control and improve its performance. We also comment on how the recent changes in the ATLAS computing GRID model affect IFIC. Finally, how this complex system can coexist with the other scientific applications running at IFIC (non-ATLAS users) is presented.

  6. Ecological Monitoring and Compliance Program 2015 Report

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Derek B. [National Security Technologies, LLC. (NSTec), Las Vegas, NV (United States); Ostler, W. Kent [National Security Technologies, LLC. (NSTec), Las Vegas, NV (United States); Anderson, David C. [National Security Technologies, LLC. (NSTec), Las Vegas, NV (United States); Greger, Paul D. [National Security Technologies, LLC. (NSTec), Las Vegas, NV (United States)

    2016-01-01

    The Ecological Monitoring and Compliance Program (EMAC), funded through the U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office (NNSA/NFO), monitors the ecosystem of the Nevada National Security Site (NNSS) and ensures compliance with laws and regulations pertaining to NNSS biota. This report summarizes the program’s activities conducted by National Security Technologies, LLC (NSTec), during calendar year 2015. Program activities included (a) biological surveys at proposed activity sites, (b) desert tortoise compliance, (c) ecosystem monitoring, (d) sensitive plant species monitoring, (e) sensitive and protected/regulated animal monitoring, and (f) habitat restoration monitoring. During 2015, all applicable laws, regulations, and permit requirements were met, enabling EMAC to achieve its intended goals and objectives.

  7. Ecological Monitoring and Compliance Program 2016 Report

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Derek [National Security Technologies, LLC. (NSTec), Mercury, NV (United States); Perry, Jeanette [National Security Technologies, LLC. (NSTec), Mercury, NV (United States); Ostler, W. Kent [National Security Technologies, LLC. (NSTec), Mercury, NV (United States)

    2017-09-06

    The Ecological Monitoring and Compliance Program (EMAC), funded through the U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office (NNSA/NFO), monitors the ecosystem of the Nevada National Security Site (NNSS) and ensures compliance with laws and regulations pertaining to NNSS biota. This report summarizes the program’s activities conducted by National Security Technologies, LLC (NSTec), during calendar year 2016. Program activities included (a) biological surveys at proposed activity sites, (b) desert tortoise compliance, (c) ecosystem monitoring, (d) sensitive plant species monitoring, (e) sensitive and protected/regulated animal monitoring, and (f) habitat restoration monitoring. During 2016, all applicable laws, regulations, and permit requirements were met, enabling EMAC to achieve its intended goals and objectives.

  8. Yucca Mountain biological resources monitoring program

    International Nuclear Information System (INIS)

    1993-02-01

    The US Department of Energy (DOE) is required by the Nuclear Waste Policy Act of 1982 (as amended in 1987) to study and characterize Yucca Mountain as a potential site for a geologic repository for high-level nuclear waste. During site characterization, the DOE will conduct a variety of geotechnical, geochemical, geological, and hydrological studies to determine the suitability of Yucca Mountain as a potential repository. To ensure that site characterization activities (SCA) do not adversely affect the environment at Yucca Mountain, an environmental program has been implemented to monitor and mitigate potential impacts and ensure activities comply with applicable environmental regulations. This report describes the activities and accomplishments of EG&G Energy Measurements, Inc. (EG&G/EM) during fiscal year 1992 (FY92) for six program areas within the Terrestrial Ecosystem component of the YMP environmental program. The six program areas are Site Characterization Effects, Desert Tortoises, Habitat Reclamation, Monitoring and Mitigation, Radiological Monitoring, and Biological Support

  9. Heavy Ion Physics with the ATLAS Detector at the LHC

    International Nuclear Information System (INIS)

    Trzupek, A.

    2009-01-01

    The heavy-ion program at the LHC will be pursued by three experiments, including ATLAS, a multipurpose detector designed to study p + p collisions. A report on the potential of the ATLAS detector to uncover new physics in Pb + Pb collisions, at energies thirty times larger than those available at RHIC, will be presented. Key aspects of the heavy-ion program of the ATLAS experiment, implied by measurements at RHIC, will be discussed. They include the capability to measure high-p_T hadronic and electromagnetic probes and quarkonia, as well as elliptic flow and other bulk phenomena. Measurements by the ATLAS experiment will provide crucial information about the formation of a quark-gluon plasma at the new energy scale accessible at the LHC. (author)

  10. Regional Environmental Monitoring and Assessment Program Data (REMAP)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Regional Environmental Monitoring and Assessment Program (REMAP) was initiated to test the applicability of the Environmental Monitoring and Assessment Program...

  11. ATLAS DQ2 Deletion Service

    International Nuclear Information System (INIS)

    Oleynik, Danila; Petrosyan, Artem; Garonne, Vincent; Campana, Simone

    2012-01-01

    The ATLAS Distributed Data Management project DQ2 is responsible for the replication, access and bookkeeping of ATLAS data across more than 100 distributed grid sites. It also enforces data management policies decided on by the collaboration and defined in the ATLAS computing model. The DQ2 Deletion Service is one of the most important DDM services. This distributed service interacts with 3rd-party grid middleware and the DQ2 catalogues to serve data deletion requests on the grid. Furthermore, it also takes care of retry strategies, check-pointing transactions, load management and fault tolerance. In this paper special attention is paid to the technical details used to achieve the high performance of the service, accomplished without overloading either site storage, catalogues or other DQ2 components. Special attention is also paid to the deletion monitoring service that gives operators a detailed view of the working system.
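
    The retry and check-pointing behaviour mentioned above can be sketched as a simple agent loop. The code below is a hypothetical, much simplified illustration rather than DQ2 code: each deletion is retried with a capped back-off, and progress is check-pointed so that a restart does not repeat work already done.

      # Hypothetical, simplified deletion-agent loop (not DQ2 code): retry each
      # deletion with a capped back-off and checkpoint progress between requests.
      import json
      import random
      import time

      CHECKPOINT = "deletion_checkpoint.json"

      def delete_replica(replica):
          """Stand-in for a call to grid storage; fails randomly for the demo."""
          if random.random() < 0.3:
              raise RuntimeError(f"temporary storage error for {replica}")

      def load_done():
          try:
              with open(CHECKPOINT) as f:
                  return set(json.load(f))
          except FileNotFoundError:
              return set()

      def save_done(done):
          with open(CHECKPOINT, "w") as f:
              json.dump(sorted(done), f)

      def process(requests, max_retries=3):
          done = load_done()
          for replica in requests:
              if replica in done:          # already deleted before a restart
                  continue
              for attempt in range(max_retries):
                  try:
                      delete_replica(replica)
                      done.add(replica)
                      save_done(done)      # checkpoint after each success
                      break
                  except RuntimeError as err:
                      wait = min(2 ** attempt, 10)   # capped exponential back-off
                      print(f"{err}; retrying in {wait}s")
                      time.sleep(wait)

      process([f"site:/dataset/file{i}" for i in range(5)])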

  12. Ecological Monitoring and Compliance Program 2012 Report

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Derek B.; Anderson, David C.; Greger, Paul D.; Ostler, W. Kent; Hansen, Dennis J.

    2013-07-03

    The Ecological Monitoring and Compliance Program (EMAC), funded through the U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office (NNSA/NFO, formerly Nevada Site Office), monitors the ecosystem of the Nevada National Security Site (NNSS) and ensures compliance with laws and regulations pertaining to NNSS biota. This report summarizes the program’s activities conducted by National Security Technologies, LLC (NSTec), during calendar year 2012. Program activities included (a) biological surveys at proposed construction sites, (b) desert tortoise compliance, (c) ecosystem monitoring, (d) sensitive plant species monitoring, (e) sensitive and protected/regulated animal monitoring, (f) habitat restoration monitoring, and (g) monitoring of the Nonproliferation Test and Evaluation Complex (NPTEC). During 2012, all applicable laws, regulations, and permit requirements were met, enabling EMAC to achieve its intended goals and objectives.

  13. 24 CFR 266.520 - Program monitoring and compliance.

    Science.gov (United States)

    2010-04-01

    ... Authorities; Housing Finance Agency Risk-Sharing Program for Insured Affordable Multifamily Project Loans; Project Management and Servicing; § 266.520 Program monitoring and compliance. HUD will monitor the...

  14. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins

    Science.gov (United States)

    Lambert, F.; Odier, J.; Fulachier, J.; ATLAS Collaboration

    2017-10-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system; the service must therefore guarantee a high level of availability. We describe our monitoring and administration systems, and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand.

  15. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins.

    CERN Document Server

    AUTHOR|(SzGeCERN)637120; The ATLAS collaboration; Odier, Jerome; Fulachier, Jerome

    2017-01-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system; the service must therefore guarantee a high level of availability. We describe our monitoring and administration systems, and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand.

  16. Measurement of the radiation fields in the ATLAS detector and its cavern with the ATLAS-MPX silicon pixel detectors

    Science.gov (United States)

    Bouchami, Jihene

    The LHC proton-proton collisions create a hard radiation environment in the ATLAS detector. In order to quantify the effects of this environment on the detector performance and human safety, several Monte Carlo simulations have been performed. However, direct measurement is indispensable to monitor radiation levels in ATLAS and also to verify the simulation predictions. For this purpose, sixteen ATLAS-MPX devices have been installed at various positions in the ATLAS experimental and technical areas. They are composed of a pixelated silicon detector called MPX whose active surface is partially covered with converter layers for the detection of thermal, slow and fast neutrons. The ATLAS-MPX devices perform real-time measurement of radiation fields by recording the detected particle tracks as raster images. The analysis of the acquired images allows the identification of the detected particle types by the shapes of their tracks. For this aim, a pattern recognition software called MAFalda has been conceived. Since the tracks of strongly ionizing particles are influenced by charge sharing between adjacent pixels, a semi-empirical model describing this effect has been developed. Using this model, the energy of strongly ionizing particles can be estimated from the size of their tracks. The converter layers covering each ATLAS-MPX device form six different regions. The efficiency of each region to detect thermal, slow and fast neutrons has been determined by calibration measurements with known sources. The study of the ATLAS-MPX devices response to the radiation produced by proton-proton collisions at a center of mass energy of 7 TeV has demonstrated that the number of recorded tracks is proportional to the LHC luminosity. This result allows the ATLAS-MPX devices to be employed as luminosity monitors. To perform an absolute luminosity measurement and calibration with these devices, the van der Meer method based on the LHC beam parameters has been proposed. Since the ATLAS
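
    The statement that the number of recorded tracks is proportional to the luminosity amounts to a simple linear calibration. The sketch below, using invented numbers, fits that proportionality and then inverts it to turn a track count into an estimated luminosity.

      # Illustration (with invented numbers) of calibrating a track-counting
      # luminosity monitor: fit counts vs. reference luminosity, then invert.
      import numpy as np

      # Reference luminosity (arbitrary units) and tracks counted per frame.
      ref_lumi = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
      tracks   = np.array([52.0, 101.0, 198.0, 405.0, 795.0])

      slope, offset = np.polyfit(ref_lumi, tracks, 1)   # tracks ~ slope*L + offset

      def estimated_luminosity(track_count):
          """Invert the linear calibration to estimate luminosity from a count."""
          return (track_count - offset) / slope

      print(f"calibration: {slope:.1f} tracks per unit luminosity")
      print(f"estimated L for 300 tracks: {estimated_luminosity(300.0):.2f}")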

  17. Ecological Monitoring and Compliance Program 2013 Report

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Derek B. [National Security Technologies, LLC, Las Vegas, NV (United States); Anderson, David C. [National Security Technologies, LLC, Las Vegas, NV (United States); Greger, Paul D. [National Security Technologies, LLC, Las Vegas, NV (United States)

    2014-07-01

    The Ecological Monitoring and Compliance Program (EMAC), funded through the U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office (NNSA/NFO, formerly Nevada Site Office), monitors the ecosystem of the Nevada National Security Site (NNSS) and ensures compliance with laws and regulations pertaining to NNSS biota. This report summarizes the program’s activities conducted by National Security Technologies, LLC (NSTec), during calendar year 2013. Program activities included (a) biological surveys at proposed activity sites, (b) desert tortoise compliance, (c) ecosystem monitoring, (d) sensitive plant species monitoring, (e) sensitive and protected/regulated animal monitoring, and (f) habitat restoration monitoring. During 2013, all applicable laws, regulations, and permit requirements were met, enabling EMAC to achieve its intended goals and objectives.

  18. Monitoring the Resistive Plate Chambers in the Muon Spectrometer of ATLAS.

    CERN Document Server

    Al-Qahtani, Shaikha

    2017-01-01

    Software was developed to monitor the Resistive Plate Chambers (RPCs). The purpose of the program is to detect any weak or dead chambers and locate them for repair. In its first use, the program spotted several problematic chambers to be investigated.

  19. 100-N pilot project: Proposed consolidated groundwater monitoring program

    International Nuclear Information System (INIS)

    Borghese, J.V.; Hartman, M.J.; Lutrell, S.P.; Perkins, C.J.; Zoric, J.P.; Tindall, S.C.

    1996-11-01

    This report presents a proposed consolidated groundwater monitoring program for the 100-N Pilot Project. This program is the result of a cooperative effort between the Hanford Site contractors who monitor the groundwater beneath the 100-N Area. The consolidation of the groundwater monitoring programs is being proposed to minimize the cost, time, and effort necessary for groundwater monitoring in the 100-N Area, and to coordinate regulatory compliance activities. The integrity of the subprograms' requirements remained intact during the consolidation effort. The purpose of this report is to present the proposed consolidated groundwater monitoring program and to summarize the process by which it was determined.

  20. Nuclear Explosion Monitoring Research and Engineering Program - Strategic Plan

    Energy Technology Data Exchange (ETDEWEB)

    Casey, Leslie A. [DOE/NNSA

    2004-09-01

    The Department of Energy (DOE)/National Nuclear Security Administration (NNSA) Nuclear Explosion Monitoring Research and Engineering (NEM R&E) Program is dedicated to providing knowledge, technical expertise, and products to US agencies responsible for monitoring nuclear explosions in all environments and is successful in turning scientific breakthroughs into tools for use by operational monitoring agencies. To effectively address the rapidly evolving state of affairs, the NNSA NEM R&E program is structured around three program elements described within this strategic plan: Integration of New Monitoring Assets, Advanced Event Characterization, and Next-Generation Monitoring Systems. How the Program fits into the National effort and historical accomplishments are also addressed.

  1. Status of the ATLAS Pixel Detector at the LHC and its performance after three years of operation

    CERN Document Server

    Andreazza, A; The ATLAS collaboration

    2012-01-01

    The ATLAS Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN, providing high-resolution measurements of charged particle tracks in the high radiation environment close to the collision region. This capability is vital for the identification and measurement of proper decay times of long-lived particles such as b-hadrons, and thus vital for the ATLAS physics program. The detector provides hermetic coverage with three cylindrical layers and three layers of forward and backward pixel detectors. It consists of approximately 80 million pixels that are individually read out via chips bump-bonded to 1744 n-in-n silicon substrates. In this talk, results from the successful operation of the Pixel Detector at the LHC and its status after three years of operation will be presented, including monitoring, calibration procedures, timing optimization and detector performance. The detector performance is excellent: ~96 % of the pixels are operational, noise occupancy and hit ...

  2. EnviroAtlas - Acres of USDA Farm Service Agency Conservation Reserve Program land by 12-Digit HUC for the Conterminous United States.

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the acres of land enrolled in the US Department of Agriculture (USDA)'s Conservation Reserve Program (CRP). The CRP is administered by...

  3. Study of the Higgs boson discovery potential in the process $pp \\to H/A \\to \\mu^+\\mu^-/\\tau^+\\tau^-$ with the ATLAS detector

    CERN Document Server

    Dedes, Georgios

    2008-01-01

    In this thesis, the discovery potential of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN for the heavy neutral Higgs bosons H/A of the Minimal Supersymmetric extension of the Standard Model of particle physics (MSSM) in the decay channels H/A → τ+τ− → e/μ + X and H/A → μ+μ− has been studied. The ATLAS detector is designed to study the full spectrum of the physics phenomena occurring in the proton-proton collisions at 14 TeV center-of-mass energy and to provide answers to the question of the origin of particle masses and of electroweak symmetry breaking. For the studies, the ATLAS muon spectrometer plays an important role. The spectrometer allows for a precise muon momentum measurement independently of other ATLAS subdetectors. The performance of the muon spectrometer depends strongly on the performance of the muon tracking detectors, the Monitored Drift Tube Chambers (MDT). Computer programs have been developed in order to test and verify the ATLAS muon spectrometer s...

  4. ATLAS and ultra high energy cosmic ray physics

    Directory of Open Access Journals (Sweden)

    Pinfold James

    2017-01-01

    After a brief introduction to extensive air shower cosmic ray physics, the current and future deployment of forward detectors at ATLAS is discussed, along with the various aspects of the current and future ATLAS programs to explore hadronic physics. The emphasis is placed on those results and future plans that have particular relevance for high-energy, and ultra high-energy, cosmic ray physics. The possible use of ATLAS as an “underground” cosmic muon observatory is briefly considered.

  5. The monitoring and data quality assessment of the ATLAS liquid argon calorimeter

    CERN Document Server

    Simard, O

    2015-01-01

    The ATLAS experiment is designed to study the proton-proton (pp) collisions produced at the Large Hadron Collider (LHC) at CERN. Liquid argon (LAr) sampling calorimeters are used for all electromagnetic calorimetry in the pseudo-rapidity region |η| < 3.2, as well as for hadronic calorimetry in the range 1.5 < |η| < 4.9. The electromagnetic calorimeters use lead as passive material and are characterized by an accordion geometry that allows a fast and uniform response without azimuthal gaps. Copper and tungsten were chosen as passive material for the hadronic calorimetry; while a classic parallel-plate geometry was adopted at large polar angles, an innovative design based on cylindrical electrodes with thin liquid argon gaps is employed at low angles, where the particle flux is higher. All detectors are housed in three cryostats maintained at about 88.5 K. The 182,468 cells are read out via front-end boards housed in on-detector crates that also contain monitoring, calibration, trigger and t...

  6. The monitoring and data quality assessment of the ATLAS liquid argon calorimeter

    CERN Document Server

    Simard, O; The ATLAS collaboration

    2014-01-01

    The ATLAS experiment is designed to study the proton-proton collisions produced at the Large Hadron Collider (LHC) at CERN. Liquid argon (LAr) sampling calorimeters are used for all electromagnetic calorimetry in the pseudo-rapidity region |η| < 3.2, as well as for hadronic calorimetry in the range 1.5 < |η| < 4.9. The electromagnetic calorimeters use lead as passive material and are characterized by an accordion geometry that allows a fast and uniform response without azimuthal gaps. Copper and tungsten were chosen as passive material for the hadronic calorimetry; while a classic parallel-plate geometry was adopted at large polar angles, an innovative design based on cylindrical electrodes with thin liquid argon gaps is employed for the coverage at low angles, where the particle flux is higher. All detectors are housed in three cryostats maintained at about 88.5 K. The approximately 200,000 cells are read out via front-end boards housed in on-detector crates that also contain monitoring, calibration, trigg...

  7. ATLAS tile calorimeter cesium calibration control and analysis software

    International Nuclear Information System (INIS)

    Solovyanov, O; Solodkov, A; Starchenko, E; Karyukhin, A; Isaev, A; Shalanda, N

    2008-01-01

    An online control system to calibrate and monitor the ATLAS Barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system, online software has been developed using ATLAS TDAQ components such as DVS (Diagnostic and Verification System) to verify the hardware before running, IS (Information Server) for data and status exchange between networked computers, and other components such as DDC (DCS to DAQ Connection) to connect to the PVSS-based slow control systems of the Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on the Python language, is used to handle all the calibration and monitoring processes from the hardware level to final data storage, including various abnormal situations. A Qt-based graphical user interface to display the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. Performance of the system and first experience from the ATLAS pit are presented
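
    As a rough illustration of the scripting layer described above, the following Python sketch shows how a source scan might be sequenced and its status published for a display to pick up. The names SourceDriver, ScanRecord and publish_status are hypothetical stand-ins introduced for this sketch and are not the actual TileCal software.

      import time
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class ScanRecord:
          """Readings collected while the source traverses one tube."""
          tube_id: int
          timestamps: List[float] = field(default_factory=list)
          currents: List[float] = field(default_factory=list)

      class SourceDriver:
          """Illustrative stand-in for the liquid-flow source control hardware."""
          def move_to_tube(self, tube_id: int) -> None:
              time.sleep(0.01)  # pretend the source travels to the requested tube
          def read_current(self) -> float:
              return 1.0        # placeholder photomultiplier current reading

      def publish_status(server: dict, key: str, value) -> None:
          """Stand-in for publishing to an information server shared with a GUI."""
          server[key] = value

      def run_scan(tubes, driver, server, samples_per_tube=5):
          records = []
          for tube in tubes:
              publish_status(server, "scan/state", "moving to tube %d" % tube)
              driver.move_to_tube(tube)
              rec = ScanRecord(tube_id=tube)
              for _ in range(samples_per_tube):
                  rec.timestamps.append(time.time())
                  rec.currents.append(driver.read_current())
              records.append(rec)
              publish_status(server, "scan/last_tube_done", tube)
          publish_status(server, "scan/state", "finished")
          return records

      if __name__ == "__main__":
          status = {}
          data = run_scan(range(3), SourceDriver(), status)
          print(len(data), status["scan/state"])   # 3 finished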

  8. ATLAS tile calorimeter cesium calibration control and analysis software

    Energy Technology Data Exchange (ETDEWEB)

    Solovyanov, O; Solodkov, A; Starchenko, E; Karyukhin, A; Isaev, A; Shalanda, N [Institute for High Energy Physics, Protvino 142281 (Russian Federation)], E-mail: Oleg.Solovyanov@ihep.ru

    2008-07-01

    An online control system to calibrate and monitor the ATLAS Barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system, online software has been developed using ATLAS TDAQ components such as DVS (Diagnostic and Verification System) to verify the hardware before running, IS (Information Server) for data and status exchange between networked computers, and other components such as DDC (DCS to DAQ Connection) to connect to the PVSS-based slow control systems of the Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on the Python language, is used to handle all the calibration and monitoring processes from the hardware level to final data storage, including various abnormal situations. A Qt-based graphical user interface to display the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. Performance of the system and first experience from the ATLAS pit are presented.

  9. The ATLAS Tile Calorimeter DCS for Run 2

    CERN Document Server

    Pedro Martins, Filipe Manuel; The ATLAS collaboration

    2016-01-01

    TileCal is one of the ATLAS sub-detectors operating at the Large Hadron Collider (LHC), which has been taking data since 2010. The Detector Control System (DCS) was developed to ensure the coherent and safe operation of the whole ATLAS detector. Seventy thousand (70,000) parameters are used for control and monitoring purposes of TileCal, requiring an automated system. The TileCal DCS is mainly responsible for the control and monitoring of the high and low voltage systems, but it also supervises the detector infrastructure (cooling and racks), calibration systems, data acquisition and safety. During the first period of data taking (Run 1, 2010-2012) the TileCal DCS allowed a smooth detector operation and should continue to do so for the second period (Run 2) that started in 2015. The TileCal DCS was updated in order to cope with the hardware and software requirements for Run 2 operation. These updates followed the general ATLAS guidelines on the software and hardware upgrade but also the new requirements from the TileCa...

  10. The ATLAS Tile Calorimeter DCS for Run 2

    CERN Document Server

    Pedro Martins, Filipe Manuel; The ATLAS collaboration

    2016-01-01

    TileCal is one of the ATLAS subdetectors operating at the Large Hadron Collider (LHC), which has been taking data since 2010. Seventy thousand (70,000) parameters are used for control and monitoring purposes, requiring an automated system. The Detector Control System (DCS) was developed to ensure the coherent and safe operation of the whole ATLAS detector. The TileCal DCS is mainly responsible for the control and monitoring of the high and low voltage systems, but it also supervises the detector infrastructure (cooling and racks), calibration systems, data acquisition and safety. During the first period of data taking (Run 1, 2010-2012) the TileCal DCS allowed a smooth detector operation and should continue to do so for the second period (Run 2) that started in 2015. The TileCal DCS was updated in order to cope with the hardware and software requirements for Run 2 operation. These updates followed the general ATLAS guidelines on the software and hardware upgrade but also the new requirements from the TileCal detector. ...

  11. Characterization, Monitoring and Sensor Technology Integrated Program

    International Nuclear Information System (INIS)

    1993-01-01

    This booklet contains summary sheets that describe FY 1993 characterization, monitoring, and sensor technology (CMST) development projects. Currently, 32 projects are funded, 22 through the OTD Characterization, Monitoring, and Sensor Technology Integrated Program (CMST-IP), 8 through the OTD Program Research and Development Announcement (PRDA) activity managed by the Morgantown Energy Technology Center (METC), and 2 through Interagency Agreements (IAGs). This booklet is not inclusive of those CMST projects which are funded through Integrated Demonstrations (IDs) and other Integrated Programs (IPs). The projects are in six areas: Expedited Site Characterization; Contaminants in Soils and Groundwater; Geophysical and Hydrogeological Measurements; Mixed Wastes in Drums, Burial Grounds, and USTs; Remediation, D&D, and Waste Process Monitoring; and Performance Specifications and Program Support. A task description, technology needs, accomplishments and technology transfer information are given for each project

  12. Advanced condition monitoring program for turbine system

    International Nuclear Information System (INIS)

    Ono, Shigetoshi

    2015-01-01

    It is important for utilities to achieve stable operation in nuclear power plants. To achieve it, plant anomalies that affect stable operation must be identified and eliminated. Therefore, the advanced condition monitoring program was developed. In this program, a sophisticated heat balance model based on the actual plant data is adopted to identify plant anomalies at an incipient stage, and the symptoms of plant anomalies are detected as heat balance changes relative to the model calculation. The model calculations have shown precise predictions of the actual plant parameters. Moreover, this program has a diagnostic engine that helps operators derive the cause of plant anomalies. By using this monitoring program, the component reliability in the turbine system can be periodically monitored and assessed, and as a result the stable operation of nuclear power plants can be achieved. (author)
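
    The underlying idea, flagging plant parameters whose measured values drift away from the heat-balance model prediction, can be sketched in a few lines of Python. The parameter names and the 2% threshold below are illustrative assumptions, not values from the actual program.

      def detect_anomalies(measured: dict, predicted: dict, rel_threshold: float = 0.02):
          """Flag parameters whose measured value deviates from the model
          prediction by more than a relative threshold (assumed 2% here)."""
          flagged = {}
          for name, pred in predicted.items():
              meas = measured.get(name)
              if meas is None or pred == 0:
                  continue
              residual = (meas - pred) / pred
              if abs(residual) > rel_threshold:
                  flagged[name] = residual
          return flagged

      # Made-up feedwater-flow and turbine-pressure readings:
      measured  = {"feedwater_flow_kg_s": 1850.0, "hp_turbine_inlet_pressure_MPa": 6.65}
      predicted = {"feedwater_flow_kg_s": 1800.0, "hp_turbine_inlet_pressure_MPa": 6.70}
      # Flags the feedwater flow (about +2.8%); the pressure stays within the threshold.
      print(detect_anomalies(measured, predicted))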

  13. Development of a monitoring tool to validate trigger level analysis in the ATLAS experiment

    CERN Document Server

    Hahn, Artur

    2014-01-01

    This report summarizes my thirteen-week summer student project at CERN from June 30th until September 26th of 2014. My task was to contribute to a monitoring tool for the ATLAS experiment that compares jets reconstructed by the trigger to those from fully reconstructed and saved offline events, by creating a set of insightful histograms to be used during Run 2 of the Large Hadron Collider, planned to start in early 2015. The motivation behind this project is to validate the use of data taken solely from the high level trigger for analysis purposes. Once the code generating the plots was completed, it was tested on data collected during Run 1 up to the year 2012 and on Monte Carlo simulated events with center-of-mass energies √s = 8 TeV and √s = 14 TeV.
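
    A minimal sketch of the kind of comparison such a tool performs is given below: trigger-level jets are matched to offline jets by their angular distance and the relative transverse-momentum difference is histogrammed. The data layout and the matching cut are assumptions made for this illustration and do not correspond to the actual ATLAS code.

      import math
      from collections import Counter

      def delta_r(eta1, phi1, eta2, phi2):
          """Angular distance between two jets, with phi wrapped into [-pi, pi)."""
          dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
          return math.hypot(eta1 - eta2, dphi)

      def match_and_histogram(trigger_jets, offline_jets, dr_cut=0.2, bin_width=0.02):
          """Match each trigger jet to the closest offline jet within dr_cut and
          fill a simple histogram of (pT_trig - pT_off) / pT_off."""
          hist = Counter()
          for tj in trigger_jets:
              best = min(offline_jets,
                         key=lambda oj: delta_r(tj["eta"], tj["phi"], oj["eta"], oj["phi"]),
                         default=None)
              if best is None or delta_r(tj["eta"], tj["phi"], best["eta"], best["phi"]) > dr_cut:
                  continue
              rel = (tj["pt"] - best["pt"]) / best["pt"]
              hist[round(rel / bin_width) * bin_width] += 1
          return hist

      trig = [{"pt": 101.0, "eta": 0.50, "phi": 1.00}]
      off  = [{"pt": 100.0, "eta": 0.52, "phi": 1.01}]
      print(match_and_histogram(trig, off))   # one matched jet, ~1% high, lands in the bin around zero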

  14. Design and Implementation of the ATLAS Detector Control System

    CERN Document Server

    Boterenbrood, H; Cook, J; Filimonov, V; Hallgren, B I; Heubers, W P J; Khomoutnikov, V; Ryabov, Yu; Varela, F

    2004-01-01

    The overall dimensions of the ATLAS experiment and its harsh environment, due to radiation and magnetic field, represent new challenges for the implementation of the Detector Control System. It supervises all hardware of the ATLAS detector, monitors the infrastructure of the experiment, and provides information exchange with the LHC accelerator. The system must allow for the operation of the different ATLAS sub-detectors in stand-alone mode, as required for calibration and debugging, as well as the coherent and integrated operation of all sub-detectors for physics data taking. For this reason, the Detector Control System is logically arranged to map the hierarchical organization of the ATLAS detector. Special requirements are placed onto the ATLAS Detector Control System because of the large number of distributed I/O channels and of the inaccessibility of the equipment during operation. Standardization is a crucial issue for the design and implementation of the control system because of the large variety of e...

  15. Calorimetry triggering in ATLAS

    CERN Document Server

    Igonkina, O; Adragna, P; Aharrouche, M; Alexandre, G; Andrei, V; Anduaga, X; Aracena, I; Backlund, S; Baines, J; Barnett, B M; Bauss, B; Bee, C; Behera, P; Bell, P; Bendel, M; Benslama, K; Berry, T; Bogaerts, A; Bohm, C; Bold, T; Booth, J R A; Bosman, M; Boyd, J; Bracinik, J; Brawn, I, P; Brelier, B; Brooks, W; Brunet, S; Bucci, F; Casadei, D; Casado, P; Cerri, A; Charlton, D G; Childers, J T; Collins, N J; Conde Muino, P; Coura Torres, R; Cranmer, K; Curtis, C J; Czyczula, Z; Dam, M; Damazio, D; Davis, A O; De Santo, A; Degenhardt, J; Delsart, P A; Demers, S; Demirkoz, B; Di Mattia, A; Diaz, M; Djilkibaev, R; Dobson, E; Dova, M, T; Dufour, M A; Eckweiler, S; Ehrenfeld, W; Eifert, T; Eisenhandler, E; Ellis, N; Emeliyanov, D; Enoque Ferreira de Lima, D; Faulkner, P J W; Ferland, J; Flacher, H; Fleckner, J E; Flowerdew, M; Fonseca-Martin, T; Fratina, S; Fhlisch, F; Gadomski, S; Gallacher, M P; Garitaonandia Elejabarrieta, H; Gee, C N P; George, S; Gillman, A R; Goncalo, R; Grabowska-Bold, I; Groll, M; Gringer, C; Hadley, D R; Haller, J; Hamilton, A; Hanke, P; Hauser, R; Hellman, S; Hidvgi, A; Hillier, S J; Hryn'ova, T; Idarraga, J; Johansen, M; Johns, K; Kalinowski, A; Khoriauli, G; Kirk, J; Klous, S; Kluge, E-E; Koeneke, K; Konoplich, R; Konstantinidis, N; Kwee, R; Landon, M; LeCompte, T; Ledroit, F; Lei, X; Lendermann, V; Lilley, J N; Losada, M; Maettig, S; Mahboubi, K; Mahout, G; Maltrana, D; Marino, C; Masik, J; Meier, K; Middleton, R P; Mincer, A; Moa, T; Monticelli, F; Moreno, D; Morris, J D; Mller, F; Navarro, G A; Negri, A; Nemethy, P; Neusiedl, A; Oltmann, B; Olvito, D; Osuna, C; Padilla, C; Panes, B; Parodi, F; Perera, V J O; Perez, E; Perez Reale, V; Petersen, B; Pinzon, G; Potter, C; Prieur, D P F; Prokishin, F; Qian, W; Quinonez, F; Rajagopalan, S; Reinsch, A; Rieke, S; Riu, I; Robertson, S; Rodriguez, D; Rogriquez, Y; Rhr, F; Saavedra, A; Sankey, D P C; Santamarina, C; Santamarina Rios, C; Scannicchio, D; Schiavi, C; Schmitt, K; Schultz-Coulon, H C; Schfer, U; Segura, E; Silverstein, D; Silverstein, S; Sivoklokov, S; Sjlin, J; Staley, R J; Stamen, R; Stelzer, J; Stockton, M C; Straessner, A; Strom, D; Sushkov, S; Sutton, M; Tamsett, M; Tan, C L A; Tapprogge, S; Thomas, J P; Thompson, P D; Torrence, E; Tripiana, M; Urquijo, P; Urrejola, P; Vachon, B; Vercesi, V; Vorwerk, V; Wang, M; Watkins, P M; Watson, A; Weber, P; Weidberg, T; Werner, P; Wessels, M; Wheeler-Ellis, S; Whiteson, D; Wiedenmann, W; Wielers, M; Wildt, M; Winklmeier, F; Wu, X; Xella, S; Zhao, L; Zobernig, H; de Seixas, J M; dos Anjos, A; Asman, B; Özcan, E

    2009-01-01

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2 × 10⁵ to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid Argon electromagnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  16. Long Term Resource Monitoring Program procedures: fish monitoring

    Science.gov (United States)

    Ratcliff, Eric N.; Glittinger, Eric J.; O'Hara, T. Matt; Ickes, Brian S.

    2014-01-01

    This manual constitutes the second revision of the U.S. Army Corps of Engineers’ Upper Mississippi River Restoration-Environmental Management Program (UMRR-EMP) Long Term Resource Monitoring Program (LTRMP) element Fish Procedures Manual. The original (1988) manual merged and expanded on ideas and recommendations related to Upper Mississippi River fish sampling presented in several early documents. The first revision to the manual was made in 1995 reflecting important protocol changes, such as the adoption of a stratified random sampling design. The 1995 procedures manual has been an important document through the years and has been cited in many reports and scientific manuscripts. The resulting data collected by the LTRMP fish component represent the largest dataset on fish within the Upper Mississippi River System (UMRS) with more than 44,000 collections of approximately 5.7 million fish. The goal of this revision of the procedures manual is to document changes in LTRMP fish sampling procedures since 1995. Refinements to sampling methods become necessary as monitoring programs mature. Possible refinements are identified through field experiences (e.g., sampling techniques and safety protocols), data analysis (e.g., planned and studied gear efficiencies and reallocations of effort), and technological advances (e.g., electronic data entry). Other changes may be required because of financial necessity (i.e., unplanned effort reductions). This version of the LTRMP fish monitoring manual describes the most current (2014) procedures of the LTRMP fish component.

  17. The ATLAS online High Level Trigger framework: Experience reusing offline software components in the ATLAS trigger

    International Nuclear Information System (INIS)

    Wiedenmann, Werner

    2010-01-01

    Event selection in the ATLAS High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data flow environment. The ATLAS High Level Trigger (HLT) framework, based on the GAUDI and ATLAS ATHENA frameworks, forms the interface layer which allows the execution of the HLT selection and monitoring code within the online run control and data flow software. While such an approach provides a unified environment for trigger event selection across all of ATLAS, it also poses strict requirements on the reused software components in terms of performance, memory usage and stability. Experience of running the HLT selection software in the different environments, and especially on large multi-node trigger farms, has been gained in several commissioning periods using preloaded Monte Carlo events, in data taking periods with cosmic events and in a short period with proton beams from the LHC. The contribution discusses the architectural aspects of the HLT framework, its performance and its software environment within the ATLAS computing, trigger and data flow projects. Emphasis is also put on the architectural implications for the software of the use of multi-core processors in the computing farms and on the experiences gained with multi-threading and multi-process technologies.
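
    The interface-layer idea, driving unchanged offline selection code from an online steering loop while keeping simple monitoring counters, can be sketched as follows. The class and method names are illustrative only and do not reproduce the real GAUDI/ATHENA interfaces.

      class OfflineSelectionAlgorithm:
          """Stand-in for an event-selection algorithm developed and tested offline."""
          def initialize(self, config):
              self.threshold = config.get("et_threshold", 20.0)
          def execute(self, event):
              # Accept the event if any cluster is above the configured threshold.
              return any(et > self.threshold for et in event["cluster_et"])

      class OnlineSteering:
          """Minimal stand-in for the online layer that drives offline algorithms
          event by event and keeps monitoring counters."""
          def __init__(self, algorithms, config):
              self.algorithms = algorithms
              for alg in self.algorithms:
                  alg.initialize(config)
              self.counters = {"seen": 0, "accepted": 0}
          def process(self, event):
              self.counters["seen"] += 1
              decision = all(alg.execute(event) for alg in self.algorithms)
              if decision:
                  self.counters["accepted"] += 1
              return decision

      steering = OnlineSteering([OfflineSelectionAlgorithm()], {"et_threshold": 25.0})
      print(steering.process({"cluster_et": [12.0, 30.5]}), steering.counters)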

  18. Development of the ATLAS simulation framework

    International Nuclear Information System (INIS)

    Dell'Acqua, A.; Stavrianakou, M.; Amako, K.; Kanzaki, J.; Morita, Y.; Murakami, K.; Sasaki; Kurashige, H.; Rimoldi, A.; Saeki, T.; Ueda, I.; Tanaka, S.; Yoshida, H.

    2001-01-01

    The object-oriented (OO) approach is the key technology for developing a software system in the LHC/ATLAS experiment. The authors developed an OO simulation framework based on the Geant4 general-purpose simulation toolkit. Because of the complexity of the simulation in ATLAS, the authors paid particular attention to scalability in the design. Although the first target of this framework is to implement the ATLAS full detector simulation program, there is no experiment-specific code in it; therefore it can be utilized for the development of any simulation package, not only for HEP experiments but also for various other research domains. The authors discuss their approach to the design and implementation of the framework

  19. Model Stellar Atmospheres and Real Stellar Atmospheres and Status of the ATLAS12 Opacity Sampling Program and of New Programs for Rosseland and for Distribution Function Opacity

    Science.gov (United States)

    Kurucz, Robert L.

    1996-01-01

    I discuss errors in theory and in interpreting observations that are produced by the failure to consider resolution in space, time, and energy. I discuss convection in stellar model atmospheres and in stars. Large errors in abundances are possible such as the factor of ten error in the Li abundance for extreme Population II stars. Finally I discuss the variation of microturbulent velocity with depth, effective temperature, gravity, and abundance. These variations must be dealt with in computing models and grids and in any type of photometric calibration. I have also developed a new opacity-sampling version of my model atmosphere program called ATLAS12. It recognizes more than 1000 atomic and molecular species, each in up to 10 isotopic forms. It can treat all ions of the elements up through Zn and the first 5 ions of heavier elements up through Es. The elemental and isotopic abundances are treated as variables with depth. The fluxes predicted by ATLAS12 are not accurate in intermediate or narrow bandpass intervals because the sample size is too small. A special stripped version of the spectrum synthesis program SYNTHE is used to generate the surface flux for the converged model using the line data on CD-ROMs 1 and 15. ATLAS12 can be used to produce improved models for Am and Ap stars. It should be very useful for investigating diffusion effects in atmospheres. It can be used to model exciting stars for H II regions with abundances consistent with those of the H II region. These programs and line files will be distributed on CD-ROMs.

  20. Search for exotic physics with ATLAS

    CERN Document Server

    Delsart, Pierre-Antoine

    2006-01-01

    At the LHC, the program of research in particle physics beyond the Standard Model is extremely rich. With the ATLAS detector, besides mainstream SUSY studies, many exotic theoretical models will be investigated. They range from compositeness of fundamental fermions to extra-dimension scenarios through GUT models, and include many variants. I review a selection of typical studies by the ATLAS collaboration on exotic physics, highlighting the discovery prospects and the recent analyses using the latest full detector simulations.

  1. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins

    CERN Document Server

    Lambert, Fabian; The ATLAS collaboration

    2016-01-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system; therefore, the service must guarantee a high level of availability. We describe our monitoring system and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand. Moreover, we describe how to switch to a remote replica in case of downtime.
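
    The availability requirement mentioned above usually reduces to a periodic health check with failover to a replica. The snippet below sketches that generic pattern; the endpoints are placeholders and the code is not part of the actual AMI deployment.

      import requests

      ENDPOINTS = [
          "https://ami-primary.example.org/health",   # hypothetical primary instance
          "https://ami-replica.example.org/health",   # hypothetical distant replica
      ]

      def pick_available_endpoint(endpoints, timeout=3.0):
          """Return the first endpoint answering with HTTP 200, or None if all are down."""
          for url in endpoints:
              try:
                  if requests.get(url, timeout=timeout).status_code == 200:
                      return url
              except requests.RequestException:
                  continue
          return None

      if __name__ == "__main__":
          active = pick_available_endpoint(ENDPOINTS)
          print("serving from:", active or "no instance available")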

  2. Sandia National Laboratories California Environmental Monitoring Program Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Holland, Robert C.

    2007-03-01

    The annual program report provides detailed information about all aspects of the SNL/CA Environmental Monitoring Program for a given calendar year. It functions as supporting documentation to the SNL/CA Environmental Management System Program Manual. The 2006 program report describes the activities undertaken during the past year, and activities planned in future years to implement the Environmental Monitoring Program, one of six programs that supports environmental management at SNL/CA.

  3. Radiation monitor training program at Rocky Flats

    International Nuclear Information System (INIS)

    Medina, L.C.; Kittinger, W.D.; Vogel, R.M.

    The Rocky Flats Radiation Monitor Training Program is tailored to train new health physics personnel in the field of radiation monitoring. The prescribed materials and media are intended to provide consistent training across all areas of radiation monitoring work at Rocky Flats

  4. Establishing a national biological laboratory safety and security monitoring program.

    Science.gov (United States)

    Blaine, James W

    2012-12-01

    The growing concern over the potential use of biological agents as weapons and the continuing work of the Biological Weapons Convention has promoted an interest in establishing national biological laboratory biosafety and biosecurity monitoring programs. The challenges and issues that should be considered by governments, or organizations, embarking on the creation of a biological laboratory biosafety and biosecurity monitoring program are discussed in this article. The discussion focuses on the following questions: Is there critical infrastructure support available? What should be the program focus? Who should be monitored? Who should do the monitoring? How extensive should the monitoring be? What standards and requirements should be used? What are the consequences if a laboratory does not meet the requirements or is not willing to comply? Would the program achieve the results intended? What are the program costs? The success of a monitoring program can depend on how the government, or organization, responds to these questions.

  5. LUCID: the ATLAS Luminosity Detector

    CERN Document Server

    Fabbri, Laura; The ATLAS collaboration

    2018-01-01

    A precise measurement of luminosity is a key component of the ATLAS program: its uncertainty enters as a systematic uncertainty in all cross-section measurements, from Standard Model processes to new discoveries, and for some precise measurements it can be dominant. To be predictive, a precision compatible with the PDF uncertainty (about 1-2%) is desired. LUCID (LUminosity Cherenkov Integrating Detector) is sensitive to charged particles generated by the pp collisions. It is the only dedicated ATLAS detector for this purpose and the reference one during the second run of LHC data taking.

  6. A Nuclear Physics Program at the ATLAS Experiment at the CERN Large Hadron Collider

    CERN Document Server

    Aronson, S H; Gordon, H; Leite, M; Le Vine, M J; Nevski, P; Takai, H; White, S; Cole, B; Nagle, J L

    2002-01-01

    The ATLAS collaboration has significant interest in the physics of ultra-relativistic heavy ion collisions. We submitted a Letter of Intent to the United States Department of Energy in March 2002. The following document is a slightly modified version of that LOI. More details are available at: http://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/SM/ions

  7. Cassini Tour Atlas Automated Generation

    Science.gov (United States)

    Grazier, Kevin R.; Roumeliotis, Chris; Lange, Robert D.

    2011-01-01

    During the Cassini spacecraft's cruise phase and nominal mission, the Cassini Science Planning Team developed and maintained an online database of geometric and timing information called the Cassini Tour Atlas. The Tour Atlas consisted of several hundred megabytes of EVENTS mission planning software outputs, tables, plots, and images used by mission scientists for observation planning. Each time the nominal mission trajectory was altered or tweaked, a new Tour Atlas had to be regenerated manually. In the early phases of Cassini's Equinox Mission planning, an a priori estimate suggested that mission tour designers would develop approximately 30 candidate tours within a short period of time. So that Cassini scientists could properly analyze the science opportunities in each candidate tour quickly and thoroughly, and the optimal series of orbits for science return could be selected, a separate Tour Atlas was required for each trajectory. The task of manually generating the number of trajectory analyses in the allotted time would have been impossible, so the entire task was automated using code written in five different programming languages. This software automates the generation of the Cassini Tour Atlas database. It performs with one UNIX command what previously took a day or two of human labor.

  8. Automating the personnel dosimeter monitoring program

    International Nuclear Information System (INIS)

    Compston, M.W.

    1982-12-01

    The personnel dosimetry monitoring program at the Portsmouth uranium enrichment facility has been improved by using thermoluminescent dosimetry to monitor for ionizing radiation exposure and by automating most of the operations and all of the associated information handling. A thermoluminescent dosimeter (TLD) card, worn by personnel inside security badges, stores the energy of ionizing radiation. The dosimeters are changed out periodically and are loaded 150 cards at a time into an automated reader-processor. The resulting data are recorded and compiled into a useful form by computer programs developed for this purpose
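
    A minimal sketch of the kind of automated record handling described here, accumulating the dose read from each card into per-badge records, is shown below; the field names and reporting period are assumptions made for illustration.

      import csv
      from collections import defaultdict
      from io import StringIO

      def file_dose_records(reader_output_csv, period="1982-Q4"):
          """Accumulate the dose read from each TLD card into a per-badge record.
          The field names (badge_id, dose_mrem) are assumed for this sketch."""
          totals = defaultdict(float)
          for row in csv.DictReader(StringIO(reader_output_csv)):
              totals[row["badge_id"]] += float(row["dose_mrem"])
          return {badge: {"period": period, "dose_mrem": dose}
                  for badge, dose in totals.items()}

      sample = "badge_id,dose_mrem\n1001,12.5\n1002,3.0\n1001,1.5\n"
      print(file_dose_records(sample))   # badge 1001 accumulates 14.0 mrem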

  9. The North American Amphibian Monitoring Program. [abstract

    Science.gov (United States)

    Griffin, J.

    1998-01-01

    The North American Amphibian Monitoring Program has been under development for the past three years. The monitoring strategy for NAAMP has five main prongs: terrestrial salamander surveys, calling surveys, aquatic surveys, western surveys, and atlassing. Of these five, calling surveys were selected as one of the first implementation priorities due to their friendliness to volunteers of varying knowledge levels, relative low cost, and the fact that several groups had already pioneered the techniques involved. While some states and provinces had implemented calling surveys prior to NAAMP, like WI and IL, most states and provinces had little or no history of state/provincewide amphibian monitoring. Thus, the majority of calling survey programs were initiated in the past two years. To assess the progress of this pilot phase, a program review was conducted on the status of the NAAMP calling survey program, and the results of that review will be presented at the meeting. Topics to be discussed include: who is doing what where, extent of route coverage, the continuing random route discussions, quality assurance, strengths and weaknesses of calling surveys, reliability of data, and directions for the future. In addition, a brief overview of the DISPro project will be included. DISPro is a new amphibian monitoring program in National Parks, funded by the Demonstration of Intensive Sites Program (DISPro) through the EPA and NPS. It will begin this year at Big Bend and Shenandoah National Parks. The purpose of the DISPro Amphibian Project will be to investigate relationships between environmental factors and stressors and the distribution, abundance, and health of amphibians in these National Parks. At each Park, amphibian long-term monitoring protocols will be tested, distributions and abundance of amphibians will be mapped, and field research experiments will be conducted to examine stressor effects on amphibians (e.g., ultraviolet radiation, contaminants, acidification).

  10. Analysis and Implement of Broadcast Program Monitoring Data

    Directory of Open Access Journals (Sweden)

    Song Jin Bao

    2016-01-01

    With the rapid development of the radio and TV industry and the implementation of INT (the integration of telecommunications networks, cable TV networks and the Internet), the content of programs and advertisements is showing massive, live and interactive trends. In order to ensure the security of radio and television, the broadcast of information has to be controlled and administered. In order to follow the latest public opinion trends through the radio and television network, it is necessary to research industry-specific applications of broadcast program monitoring. In this paper, the importance of broadcast monitoring in public opinion analysis is first analysed. A system architecture for monitoring radio and television program broadcasts is then proposed, based on practical experience, focusing on the technical requirements and implementation process of program broadcast, advertisement broadcast and TV station broadcast monitoring. More useful information is generated through statistical analysis, which supports public opinion analysis for radio and television.

  11. Calorimetry triggering in ATLAS

    International Nuclear Information System (INIS)

    Igonkina, O; Achenbach, R; Andrei, V; Adragna, P; Aharrouche, M; Bauss, B; Bendel, M; Alexandre, G; Anduaga, X; Aracena, I; Backlund, S; Bogaerts, A; Baines, J; Barnett, B M; Bee, C; Behera, P; Bell, P; Benslama, K; Berry, T; Bohm, C

    2009-01-01

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2 × 10⁵ to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid Argon electromagnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  12. Calorimetry Triggering in ATLAS

    International Nuclear Information System (INIS)

    Igonkina, O.; Achenbach, R.; Adragna, P.; Aharrouche, M.; Alexandre, G.; Andrei, V.; Anduaga, X.; Aracena, I.; Backlund, S.; Baines, J.; Barnett, B.M.; Bauss, B.; Bee, C.; Behera, P.; Bell, P.; Bendel, M.; Benslama, K.; Berry, T.; Bogaerts, A.; Bohm, C.; Bold, T.; Booth, J.R.A.; Bosman, M.; Boyd, J.; Bracinik, J.; Brawn, I.P.; Brelier, B.; Brooks, W.; Brunet, S.; Bucci, F.; Casadei, D.; Casado, P.; Cerri, A.; Charlton, D.G.; Childers, J.T.; Collins, N.J.; Conde Muino, P.; Coura Torres, R.; Cranmer, K.; Curtis, C.J.; Czyczula, Z.; Dam, M.; Damazio, D.; Davis, A.O.; De Santo, A.; Degenhardt, J.

    2011-01-01

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2 × 10⁵ to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid Argon electromagnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  13. Calorimetry triggering in ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Igonkina, O [Nikhef National Institute for Subatomic Physics, Amsterdam (Netherlands); Achenbach, R; Andrei, V [Kirchhoff Institut fuer Physik, Universitaet Heidelberg, Heidelberg (Germany); Adragna, P [Physics Department, Queen Mary, University of London, London (United Kingdom); Aharrouche, M; Bauss, B; Bendel, M [Institut für Physik, Universität Mainz, Mainz (Germany); Alexandre, G [Section de Physique, Universite de Geneve, Geneva (Switzerland); Anduaga, X [Universidad Nacional de La Plata, La Plata (Argentina); Aracena, I [Stanford Linear Accelerator Center (SLAC), Stanford (United States); Backlund, S; Bogaerts, A [European Laboratory for Particle Physics (CERN), Geneva (Switzerland); Baines, J; Barnett, B M [STFC Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Didcot, Oxon (United Kingdom); Bee, C [Centre de Physique des Particules de Marseille, IN2P3-CNRS, Marseille (France); Behera, P [Iowa State University, Ames, Iowa (United States); Bell, P [School of Physics and Astronomy, University of Manchester, Manchester (United Kingdom); Benslama, K [University of Regina, Regina (Canada); Berry, T [Department of Physics, Royal Holloway and Bedford New College, Egham (United Kingdom); Bohm, C [Fysikum, Stockholm University, Stockholm (Sweden)

    2009-04-01

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2 × 10⁵ to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid Argon electromagnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  14. TP Atlas: integration and dissemination of advances in Targeted Proteins Research Program (TPRP)-structural biology project phase II in Japan.

    Science.gov (United States)

    Iwayanagi, Takao; Miyamoto, Sei; Konno, Takeshi; Mizutani, Hisashi; Hirai, Tomohiro; Shigemoto, Yasumasa; Gojobori, Takashi; Sugawara, Hideaki

    2012-09-01

    The Targeted Proteins Research Program (TPRP) promoted by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan is phase II of the structural biology project (2007-2011), following the Protein 3000 Project (2002-2006) in Japan. While the phase I Protein 3000 Project put partial emphasis on the construction and maintenance of pipelines for structural analyses, the TPRP is dedicated to revealing the structures and functions of the targeted proteins that have great importance in both basic research and industrial applications. To pursue this objective, 35 Targeted Proteins (TP) Projects selected in the three areas of fundamental biology, medicine and pharmacology, and food and environment collaborate tightly with 10 Advanced Technology (AT) Projects in the four fields of protein production, structural analyses, chemical library and screening, and information platform. Here, the outlines and achievements of the 35 TP Projects are summarized in the system named TP Atlas. Progress in the diversified areas is described in the modules of Graphical Summary, General Summary, Tabular Summary, and Structure Gallery of the TP Atlas in a standard and unified format. Advances in TP Projects owing to novel technologies stemming from AT Projects and collaborative research among TP Projects are illustrated as a hallmark of the Program. The TP Atlas can be accessed at http://net.genes.nig.ac.jp/tpatlas/index_e.html .

  15. Characterization monitoring & sensor technology crosscutting program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-08-01

    The purpose of the Characterization, Monitoring, and Sensor Technology Crosscutting Program (CMST-CP) is to deliver appropriate characterization, monitoring, and sensor technology (CMST) to the Office of Waste Management (EM-30), the Office of Environmental Restoration (EM-40), and the Office of Facility Transition and Management (EM-60).

  16. The Savannah River Site's Groundwater Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    1991-06-18

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted in the fourth quarter of 1990. It includes the analytical data, field data, well activity data, and other documentation for this program, provides a record of the program's activities and rationale, and serves as an official document of the analytical results. The groundwater monitoring program includes the following activities: installation, maintenance, and abandonment of monitoring wells, environmental soil borings, development of the sampling and analytical schedule, collection and analyses of groundwater samples, review of analytical and other data, maintenance of the databases containing groundwater monitoring data, quality assurance (QA) evaluations of laboratory performance, and reports of results to waste-site facility custodians and to the Environmental Protection Section (EPS) of EPD.

  17. Unregulated Contaminant Monitoring Program Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA uses the Unregulated Contaminant Monitoring (UCM) program to collect data for contaminants suspected to be present in drinking water, but that do not have...

  18. Physics with Tau Lepton Final States in ATLAS

    Directory of Open Access Journals (Sweden)

    Pingel Almut M.

    2013-05-01

    The ATLAS detector records collisions from two high-energy proton beams circulating in the LHC. Analyses with tau leptons in the final state are an integral part of the ATLAS physics program. Here an overview is given of the studies performed in ATLAS with hadronically decaying final-state tau leptons: Standard Model cross-section measurements of Z → ττ, W → τν and tt̅ → bb̅ e/μν τhadν; τ polarization measurements in W → τν decays; Higgs searches; and various searches for physics beyond the Standard Model.

  19. Design and implementation of the object-oriented fast simulation program for the ATLAS experiment and its use to determine the discovery potential of the Higgs Boson via the channel h → ZZ → bb̄l⁺l⁻

    CERN Document Server

    Steward, Richard M

    2004-01-01

    The design and implementation of the object-oriented fast simulation program Atlfast is described for the ATLAS experiment at the CERN particle physics laboratory in Switzerland. Fast simulations use parametrised energy and momentum smearing in order to recreate the detection efficiency and particle identification of a real experimental detector, without the time-consuming computation required for full detector simulation. Additionally, an object-oriented program for performing user-defined physics analyses is described. This program is released for general use by the ATLAS collaboration and is designed for use with, but not restricted to, physics output from the Atlfast fast simulation program. These programs are demonstrated in a physics study of the feasibility of discovering the Higgs boson at the ATLAS experiment, using the discovery channel h⁰ → ZZ* → bb̄ l⁺l⁻ via weak vector boson fusion in the mass range 150 GeV - 200 GeV. It is found that this channel does not significantly increase the discovery pot...

  20. Digital atlas of fetal brain MRI

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, Teresa; Weinberger, E. [Department of Radiology, Seattle Children's Hospital, Seattle, WA (United States); Matesan, Manuela [University of Washington, Department of Radiology, Seattle, WA (United States); Bulas, Dorothy I. [Division of Diagnostic Imaging and Radiology, Children's National Medical Center, Washington, DC (United States)

    2010-02-15

    Fetal MRI can be performed in the second and third trimesters. During this time, the fetal brain undergoes profound structural changes. Interpretation of appropriate development might require comparison with normal age-based models. Consultation of a hard-copy atlas is limited by the inability to compare multiple ages simultaneously. To provide images of normal fetal brains from weeks 18 through 37 in a digital format that can be reviewed interactively. This will facilitate recognition of abnormal brain development. T2-W images for the atlas were obtained from fetal MR studies of normal brains scanned for other indications from 2005 to 2007. Images were oriented in standard axial, coronal and sagittal projections, with laterality established by situs. Gestational age was determined by last menstrual period, earliest US measurements and sonogram performed on the same day as the MR. The software program used for viewing the atlas, written in C, permits linked scrolling and resizing the images. Simultaneous comparison of varying gestational ages is permissible. Fetal brain images across gestational ages 18 to 37 weeks are provided as an interactive digital atlas and are available for free download. Improved interpretation of fetal brain abnormalities can be facilitated by the use of digital atlas cataloging of the normal changes throughout fetal development. Here we provide a description of the atlas and a discussion of normal fetal brain development. (orig.)

  1. Digital atlas of fetal brain MRI

    International Nuclear Information System (INIS)

    Chapman, Teresa; Weinberger, E.; Matesan, Manuela; Bulas, Dorothy I.

    2010-01-01

    Fetal MRI can be performed in the second and third trimesters. During this time, the fetal brain undergoes profound structural changes. Interpretation of appropriate development might require comparison with normal age-based models. Consultation of a hard-copy atlas is limited by the inability to compare multiple ages simultaneously. To provide images of normal fetal brains from weeks 18 through 37 in a digital format that can be reviewed interactively. This will facilitate recognition of abnormal brain development. T2-W images for the atlas were obtained from fetal MR studies of normal brains scanned for other indications from 2005 to 2007. Images were oriented in standard axial, coronal and sagittal projections, with laterality established by situs. Gestational age was determined by last menstrual period, earliest US measurements and sonogram performed on the same day as the MR. The software program used for viewing the atlas, written in C, permits linked scrolling and resizing the images. Simultaneous comparison of varying gestational ages is permissible. Fetal brain images across gestational ages 18 to 37 weeks are provided as an interactive digital atlas and are available for free download. Improved interpretation of fetal brain abnormalities can be facilitated by the use of digital atlas cataloging of the normal changes throughout fetal development. Here we provide a description of the atlas and a discussion of normal fetal brain development. (orig.)

  2. Danish integrated antimicrobial resistance monitoring and research program

    DEFF Research Database (Denmark)

    Hammerum, Anette Marie; Heuer, Ole Eske; Emborg, Hanne-Dorthe

    2007-01-01

    Resistance to antimicrobial agents is an emerging problem worldwide. Awareness of the undesirable consequences of its widespread occurrence has led to the initiation of antimicrobial agent resistance monitoring programs in several countries. In 1995, Denmark was the first country to establish a systematic and continuous monitoring program of antimicrobial drug consumption and antimicrobial agent resistance in animals, food, and humans, the Danish Integrated Antimicrobial Resistance Monitoring and Research Program (DANMAP). Monitoring of antimicrobial drug resistance and a range of research activities related to DANMAP have contributed to restrictions or bans of use of several antimicrobial agents in food animals in Denmark and other European Union countries.

  3. Multiple brain atlas database and atlas-based neuroimaging system.

    Science.gov (United States)

    Nowinski, W L; Fang, A; Nguyen, B T; Raphel, J K; Jagannathan, L; Raghavan, R; Bryan, R N; Miller, G A

    1997-01-01

    For the purpose of developing multiple, complementary, fully labeled electronic brain atlases and an atlas-based neuroimaging system for analysis, quantification, and real-time manipulation of cerebral structures in two and three dimensions, we have digitized, enhanced, segmented, and labeled the following print brain atlases: Co-Planar Stereotaxic Atlas of the Human Brain by Talairach and Tournoux, Atlas for Stereotaxy of the Human Brain by Schaltenbrand and Wahren, Referentially Oriented Cerebral MRI Anatomy by Talairach and Tournoux, and Atlas of the Cerebral Sulci by Ono, Kubik, and Abernathey. Three-dimensional extensions of these atlases have been developed as well. All two- and three-dimensional atlases are mutually preregistered and may be interactively registered with an actual patient's data. An atlas-based neuroimaging system has been developed that provides support for reformatting, registration, visualization, navigation, image processing, and quantification of clinical data. The anatomical index contains about 1,000 structures and over 400 sulcal patterns. Several new applications of the brain atlas database also have been developed, supported by various technologies such as virtual reality, the Internet, and electronic publishing. Fusion of information from multiple atlases assists the user in comprehensively understanding brain structures and identifying and quantifying anatomical regions in clinical data. The multiple brain atlas database and atlas-based neuroimaging system have substantial potential impact in stereotactic neurosurgery and radiotherapy by assisting in visualization and real-time manipulation in three dimensions of anatomical structures, in quantitative neuroradiology by allowing interactive analysis of clinical data, in three-dimensional neuroeducation, and in brain function studies.

  4. HappyFace-progress and future development for the ATLAS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Nadal, Jordi; Quadt, Arnulf; Rzehorz, Gerhard [II. Physikalisches Institut, Georg-August-Universitat (Germany); Collaboration: ATLAS-Collaboration

    2015-07-01

    Nowadays, the HappyFace project aggregates, processes and stores information from different grid monitoring resources, as well as from the grid system itself, in a common database and displays status information through a single interface. The new implementation and architecture of HappyFace, the so-called grid-enabled HappyFace, provides direct access to the grid infrastructure. Different grid-enabled modules have been implemented to view datasets of the ATLAS Distributed Data Management system (DDM), to connect to the Ganga job monitoring system, and to check the performance of grid transfers among the grid sites. The new HappyFace system has been successfully integrated. It now displays the information and the status of both the monitoring resources and the direct access to the grid user applications and the grid collective services in the ATLAS computing system.
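
    A HappyFace-style module essentially acquires data from a monitoring source, derives a status, and stores the result for display. The sketch below shows that pattern generically with an in-memory SQLite table; the module name, numbers and table layout are assumptions, not the actual HappyFace schema.

      import sqlite3
      import time

      class TransferEfficiencyModule:
          """Illustrative monitoring module: rates transfer efficiency between sites."""
          name = "grid_transfers"

          def acquire(self):
              # A real module would query a grid monitoring source here;
              # fixed example numbers are returned instead.
              return {"successful": 950, "failed": 50}

          def evaluate(self, data):
              total = data["successful"] + data["failed"]
              efficiency = data["successful"] / total if total else 0.0
              return efficiency, ("ok" if efficiency > 0.9 else "warning")

      def run_module(module, db_path=":memory:"):
          con = sqlite3.connect(db_path)
          con.execute("CREATE TABLE IF NOT EXISTS status "
                      "(module TEXT, timestamp REAL, value REAL, status TEXT)")
          efficiency, status = module.evaluate(module.acquire())
          con.execute("INSERT INTO status VALUES (?, ?, ?, ?)",
                      (module.name, time.time(), efficiency, status))
          con.commit()
          return con.execute("SELECT module, value, status FROM status").fetchall()

      print(run_module(TransferEfficiencyModule()))   # [('grid_transfers', 0.95, 'ok')]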

  5. MPX Detectors as LHC Luminosity Monitor

    CERN Document Server

    Sopczak, Andre; Asbah, Nedaa; Bergmann, Benedikt; Bekhouche, Khaled; Caforio, Davide; Campbell, Michael; Heijne, Erik; Leroy, Claude; Lipniacka, Anna; Nessi, Marzio; Pospisil, Stanislav; Seifert, Frank; Solc, Jaroslav; Soueid, Paul; Suk, Michal; Turecek, Daniel; Vykydal, Zdenek

    2015-01-01

    A network of 16 Medipix-2 (MPX) silicon pixel devices was installed in the ATLAS detector cavern at CERN. It was designed to measure the composition and spectral characteristics of the radiation field in the ATLAS experiment and its surroundings. This study demonstrates that the MPX network can also be used as a self-sufficient luminosity monitoring system. The MPX detectors collect data independently of the ATLAS data-recording chain, and thus they provide independent measurements of the bunch-integrated ATLAS/LHC luminosity. In particular, the MPX detectors located close enough to the primary interaction point are used to perform van der Meer calibration scans with high precision. Results from the luminosity monitoring are presented for 2012 data taken in proton-proton collisions at √s = 8 TeV. The characteristics of the LHC luminosity reduction rate are studied and the effects of beam-beam (burn-off) and beam-gas (single bunch) interactions are evaluated. The systematic variations observed in the MPX lum...
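
    In essence, such a hit-counting luminometer converts a per-frame hit rate into an instantaneous luminosity through a visible cross-section obtained from a van der Meer calibration, L = R / sigma_vis. A schematic calculation with made-up numbers:

      def luminosity_from_hits(hits_per_frame, frame_time_s, sigma_vis_cm2):
          """Convert a counted hit rate into instantaneous luminosity, L = R / sigma_vis.
          sigma_vis would come from a van der Meer scan; the value used below is made up."""
          rate_hz = hits_per_frame / frame_time_s
          return rate_hz / sigma_vis_cm2          # cm^-2 s^-1

      # Made-up example: 2.4e4 hits in a 60 s frame, assumed sigma_vis of 1e-28 cm^2
      print("%.2e cm^-2 s^-1" % luminosity_from_hits(2.4e4, 60.0, 1e-28))   # 4.00e+30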

  6. Program of environmental radiological monitoring

    International Nuclear Information System (INIS)

    2005-11-01

    This regulation refers to the requirement of Regulation CNEN-NN.3.01, 'Basic Act of Radiological Protection', as expressed in section 5.14, related to the Program of Environmental Radiological Monitoring (PMRA)

  7. Ecological Monitoring and Compliance Program Fiscal Year 1999 Report

    Energy Technology Data Exchange (ETDEWEB)

    Cathy A. Wills

    1999-12-01

    The Ecological Monitoring and Compliance Program, funded through the U.S. Department of Energy, Nevada Operations Office, monitors the ecosystem of the Nevada Test Site (NTS) and ensures compliance with laws and regulations pertaining to NTS biota. This report summarizes the program's activities conducted by Bechtel Nevada during fiscal year 1999. Program activities included: (1) biological surveys at proposed construction sites, (2) desert tortoise compliance, (3) ecosystem mapping, (4) sensitive species and unique habitat monitoring, and (5) biological monitoring at the HAZMAT Spill Center.

  8. ATLAS @ LHC: status and recent results

    CERN Document Server

    McPherson, Robert; The ATLAS collaboration

    2017-01-01

    The status and data taking summary of the ATLAS experiment at the CERN Large Hadron Collider is reviewed. Recent physics analysis results are presented, and the detector upgrade program is briefly summarized.

  9. Machinery Vibration Monitoring Program at the Savannah River Site

    International Nuclear Information System (INIS)

    Potvin, M.M.

    1990-01-01

    The Reactor Maintenance Machinery Vibration Monitoring Program (MVMP) plays an essential role in ensuring the safe operation of the three Production Reactors at the Westinghouse Savannah River Company (WSRC) Savannah River Site (SRS). This program has increased machinery availability and reduced maintenance costs through the early detection and diagnosis of machinery problems. This paper presents the Reactor Maintenance Machinery Vibration Monitoring Program, which has been documented based on the Electric Power Research Institute's (EPRI) NP-5311, Utility Machinery Monitoring Guide, and some examples of the successes that it has enjoyed

  10. Review of present groundwater monitoring programs at the Nevada Test Site

    International Nuclear Information System (INIS)

    Hershey, R.L.; Gillespie, D.

    1993-09-01

    Groundwater monitoring at the Nevada Test Site (NTS) is conducted to detect the presence of radionuclides produced by underground nuclear testing and to verify the quality and safety of groundwater supplies as required by the State of Nevada and federal regulations, and by U.S. Department of Energy (DOE) Orders. Groundwater is monitored at water-supply wells and at other boreholes and wells not specifically designed or located for traditional groundwater monitoring objectives. Different groundwater monitoring programs at the NTS are conducted by several DOE Nevada Operations Office (DOE/NV) contractors. Presently, these individual groundwater monitoring programs have not been assessed or administered under a comprehensive planning approach. Redundancy exists among the programs in both the sampling locations and the constituents analyzed. Also, sampling for certain radionuclides is conducted more frequently than required. The purpose of this report is to review the existing NTS groundwater monitoring programs and make recommendations for modifying the programs so a coordinated, streamlined, and comprehensive monitoring effort may be achieved by DOE/NV. This review will be accomplished in several steps. These include: summarizing the present knowledge of the hydrogeology of the NTS and the potential radionuclide source areas for groundwater contamination; reviewing the existing groundwater monitoring programs at the NTS; examining the rationale for monitoring and the constituents analyzed; reviewing the analytical methods used to quantify tritium activity; discussing monitoring network design criteria; and synthesizing the information presented and making recommendations based on the synthesis. This scope of work was requested by the DOE/NV Hydrologic Resources Management Program (HRMP) and satisfies the 1993 (fiscal year) HRMP Groundwater Monitoring Program Review task

  11. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; Berghaus, Frank; Brasolin, Franco; Cordeiro, Cristovao; Desmarais, Ron; Field, Laurence; Gable, Ian; Giordano, Domenico; Di Girolamo, Alessandro; Hover, John; Leblanc, Matthew Edgar; Love, Peter; Paterson, Michael; Sobie, Randall; Zaytsev, Alexandr

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing infrastructure as a service (IaaS) resources is discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for ma...
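
    Part of the cloud monitoring described above amounts to watching worker VMs and retiring those that stop reporting. A generic sketch of that check, with hypothetical VM names and an assumed heartbeat timeout:

      import time

      HEARTBEAT_TIMEOUT_S = 600   # assumed threshold for declaring a VM lost

      def find_stale_vms(vm_heartbeats, now=None, timeout=HEARTBEAT_TIMEOUT_S):
          """Return the VMs whose last heartbeat is older than the timeout,
          i.e. candidates for termination and replacement by the VM manager."""
          now = time.time() if now is None else now
          return [vm for vm, last in vm_heartbeats.items() if now - last > timeout]

      vms = {"vm-cloud-a-01": time.time() - 30, "vm-cloud-b-07": time.time() - 3600}
      print(find_stale_vms(vms))   # ['vm-cloud-b-07']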

  12. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Berghaus, Frank; Love, Peter; Leblanc, Matthew Edgar; Di Girolamo, Alessandro; Paterson, Michael; Gable, Ian; Sobie, Randall; Field, Laurence

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This work will describe the overall evolution of cloud computing in ATLAS. The current status of the VM management systems used for harnessing IAAS resources will be discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, ...

  13. 78 FR 50399 - Spectrum Monitoring Pilot Program

    Science.gov (United States)

    2013-08-19

    ... 130809703-3703-01] RIN 0660-XC007 Spectrum Monitoring Pilot Program AGENCY: National Telecommunications and... National Telecommunications and Information Administration (NTIA) to design and conduct a pilot program to... investment for a two-year pilot program to determine the benefits of an automated spectrum measurement and...

  14. Commissioning of the ATLAS Inner Detector with cosmic rays

    CERN Document Server

    Klinkby, E

    2008-01-01

    The tracking of the ATLAS experiment is performed by the Inner Detector which has recently been installed in its final position. Various parts of the detector have been commissioned using cosmic rays both on the surface and in the ATLAS pit. The different calibration, alignment and monitoring methods have been tested, as well as the handling of the conditions data. Both real and simulated cosmic events are reconstructed using the full ATLAS software chain, with only minor modifications to account for the lack of timing of cosmic events and the lack of magnetic field, and to remove any vertex requirements in the track fitters. Results so far show that the Inner Detector performs within expectations with respect to noise, hit efficiency and track resolution.

  15. High-performance scalable Information Service for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Hauser, R

    2012-01-01

    The ATLAS experiment is operated by a highly distributed computing system which constantly produces a large amount of status information used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data the Information Service (IS) facility has been developed within the scope of the ATLAS TDAQ project. The IS provides a high-performance, scalable solution for information exchange in a distributed environment. In the course of an ATLAS data taking session the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. IS ...
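
    To illustrate the farm-wide integration of histograms described above (an editorial sketch only; the actual IS publish/subscribe interfaces are not reproduced): per-node histograms with identical binning can be merged by summing bin contents.

      # Sketch: sum per-node histograms (same binning) into one farm-wide histogram.
      # Histograms are represented here as plain lists of bin contents.
      def merge_histograms(per_node_hists):
          """per_node_hists: iterable of equal-length lists of bin contents."""
          merged = None
          for hist in per_node_hists:
              if merged is None:
                  merged = list(hist)
              elif len(hist) != len(merged):
                  raise ValueError("inconsistent binning between nodes")
              else:
                  merged = [a + b for a, b in zip(merged, hist)]
          return merged or []

      # Example: three nodes, four bins each.
      node_hists = [[3, 5, 2, 0], [1, 4, 4, 1], [0, 2, 3, 2]]
      print(merge_histograms(node_hists))   # [4, 11, 9, 3]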

  16. Jet calibration in the ATLAS experiment at LHC

    CERN Document Server

    Francavilla, P

    2009-01-01

    Jets produced in the hadronisation of quarks and gluons play a central role in the rich physics program that will be covered by the ATLAS experiment at the LHC, and are central elements of the signature for many physics channels. A well-understood energy scale, which for some processes demands an uncertainty of order 1%, is a prerequisite. Moreover, in early data we face the challenge of dealing with the unexpected issues of a brand new detector in an unexplored energy domain. The ATLAS collaboration is carrying out a program to revisit the jet calibration strategies used in earlier hadron-collider experiments and to develop a strategy which takes into account the new experimental problems introduced by the demand for higher measurement precision and by the LHC environment. The ATLAS calorimeter is intrinsically non-compensating and we will discuss the use of different offline approaches based on cell energy density and jet topology to correct the linearity of the response while improving the resolution. In ad...
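
    As a toy illustration of an offline correction driven by cell energy density (the density bins and weights below are invented placeholders, not ATLAS calibration constants): in a non-compensating calorimeter, low-density hadron-like deposits receive a weight above one.

      # Sketch: jet energy from cell-level weights that depend on cell energy density.
      # The density bin edges and weights are placeholders, not ATLAS constants.
      DENSITY_WEIGHTS = [            # (density upper edge in GeV/cm^3, weight)
          (1e-4, 1.35),              # very low density, hadron-like: largest weight
          (1e-3, 1.15),
          (1e-2, 1.05),
          (float("inf"), 1.00),      # dense electromagnetic deposits left unweighted
      ]

      def weight_for_density(density):
          for upper_edge, weight in DENSITY_WEIGHTS:
              if density < upper_edge:
                  return weight
          return 1.0

      def calibrated_jet_energy(cells):
          """cells: iterable of (energy_gev, volume_cm3) for cells in the jet."""
          return sum(e * weight_for_density(e / v) for e, v in cells)

      cells = [(5.0, 1000.0), (20.0, 800.0), (0.3, 5000.0)]
      print(round(calibrated_jet_energy(cells), 2))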

  17. New Persistent Back-End for the ATLAS Online Information Service

    CERN Document Server

    Soloviev, I; The ATLAS collaboration

    2014-01-01

    The Trigger and Data Acquisition (TDAQ) and detector systems of the ATLAS experiment deploy more than 3000 computers, running more than 15000 concurrent processes, to perform the selection, recording and monitoring of the proton collision data in ATLAS. Most of these processes produce and share operational monitoring data used for inter-process communication and analysis of the systems. A few of these data are archived by dedicated applications into conditions and histogram databases. The rest of the data remained transient and were lost at the end of a data-taking session. To save these data for later offline analysis of the quality of data taking, and to help experts investigate the behavior of the system, the first prototype of a new Persistent Back-End for the Atlas Information System of TDAQ (P-BEAST) was developed and deployed in the second half of 2012. The modern, distributed, and Java-based Cassandra database has been used as the storage technology and the CERN EOS for long-term storage. This paper pr...
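
    For flavour, a minimal sketch of archiving one time-series sample into Cassandra with the DataStax Python driver; the keyspace, table and attribute name are hypothetical and do not reproduce the actual P-BEAST schema.

      # Sketch: archive an operational monitoring value into Cassandra as a time series.
      # The keyspace/table layout below is hypothetical, not the P-BEAST schema.
      from datetime import datetime, timezone
      from cassandra.cluster import Cluster

      cluster = Cluster(["127.0.0.1"])           # assumed local test node
      session = cluster.connect()
      session.execute("""
          CREATE KEYSPACE IF NOT EXISTS tdaq_monitoring
          WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
      """)
      session.execute("""
          CREATE TABLE IF NOT EXISTS tdaq_monitoring.samples (
              attribute text, ts timestamp, value double,
              PRIMARY KEY (attribute, ts)
          )
      """)

      insert = session.prepare(
          "INSERT INTO tdaq_monitoring.samples (attribute, ts, value) VALUES (?, ?, ?)")
      session.execute(insert, ("hlt.event_rate", datetime.now(timezone.utc), 1520.0))
      cluster.shutdown()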

  18. Heavy Ion Physics with the ATLAS Detector

    CERN Document Server

    Nevski, P

    2006-01-01

    The ATLAS experiment at the LHC plans to study the bulk matter formed in heavy ion collisions, already being studied at RHIC, as well as crucial reference data from p+p and p+A collisions. ATLAS is designed to perform optimally at the nominal machine luminosity of 10^34 cm-2s-1. It has a finely segmented electromagnetic and hadronic calorimeters covering 10 units of rapidity, allowing the study of jets and fragmentation functions in detail in tandem with the inner tracking system. Preliminary studies also indicate that it will be possible to tag b-jets in the heavy ion environment. Upsilon and J/Psi can be reconstructed through the di-muon decay channel. There is also an important "day 1" program planned, that will use the data provided by both p+p and A+A collisions to study bulk features of the collision dynamics. We discuss the current status of simulation studies and plans of the heavy ion physics program with the ATLAS detector during the A+A and p+A runs.

  19. European Wind Atlas and Wind Resource Research in Denmark

    DEFF Research Database (Denmark)

    Mortensen, Niels Gylling

    to estimate the actual wind climate at any specific site and height within this region. The Danish and European Wind Atlases are examples of how the wind atlas methodology can be employed to estimate the wind resource potential for a country or a sub-continent. Recently, the methodology has also been used...... - from wind measurements at prospective sites to wind tunnel simulations and advanced flow modelling. Among these approaches, the wind atlas methodology - developed at Risø National Laboratory over the last 25 years - has gained widespread recognition and is presently considered by many as the industry......-standard tool for wind resource assessment and siting of wind turbines. The PC-implementation of the methodology, the Wind Atlas Analysis and Application Program (WAsP), has been applied in more than 70 countries and territories world-wide. The wind atlas methodology is based on physical descriptions and models...

  20. 40 CFR 52.1080 - Photochemical Assessment Monitoring Stations (PAMS) Program.

    Science.gov (United States)

    2010-07-01

    ... Stations (PAMS) Program. 52.1080 Section 52.1080 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... § 52.1080 Photochemical Assessment Monitoring Stations (PAMS) Program. On March 24, 1994 Maryland's... Assessment Monitoring Stations (PAMS) Program as a state implementation plan (SIP) revision, as required by...

  1. 40 CFR 52.2426 - Photochemical Assessment Monitoring Stations (PAMS) Program.

    Science.gov (United States)

    2010-07-01

    ... Stations (PAMS) Program. 52.2426 Section 52.2426 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... § 52.2426 Photochemical Assessment Monitoring Stations (PAMS) Program. On November 23, 1994 Virginia's... Photochemical Assessment Monitoring Stations (PAMS) Program as a state implementation plan (SIP) revision, as...

  2. Cosmic ray runs acquired with ATLAS muon stations

    CERN Multimedia

    Cerutti, F.

    Starting in the fall of 2005, several cosmic ray runs have been acquired in the ATLAS pit with six muon stations. These were three large outer and three large middle chambers of the feet sector (sector 13) that have been read out in the ATLAS cavern. In the first data taking period the trigger was based on two large scintillators (~300x30 cm2) positioned in sector 13 just below the large chambers. In this first run the precision chambers (the Monitored Drift Tubes) were operated in a close-to-final configuration. Typical trigger rates with this setup were of the order of 1 Hz. Several data sets of 10k events were acquired with final electronics up to the muon ROD and analysed with ATHENA-based software. These data allowed the first checks of the functionality and efficiency of the MDT stations in the ATLAS pit and the first measurement of the FE electronics noise in the ATLAS environment. A few events were also collected in a combined run with the TILE barrel calorimeter. An event display of a cosmic ray a...

  3. Digital atlas of fetal brain MRI.

    Science.gov (United States)

    Chapman, Teresa; Matesan, Manuela; Weinberger, Ed; Bulas, Dorothy I

    2010-02-01

    Fetal MRI can be performed in the second and third trimesters. During this time, the fetal brain undergoes profound structural changes. Interpretation of appropriate development might require comparison with normal age-based models. Consultation of a hard-copy atlas is limited by the inability to compare multiple ages simultaneously. To provide images of normal fetal brains from weeks 18 through 37 in a digital format that can be reviewed interactively. This will facilitate recognition of abnormal brain development. T2-W images for the atlas were obtained from fetal MR studies of normal brains scanned for other indications from 2005 to 2007. Images were oriented in standard axial, coronal and sagittal projections, with laterality established by situs. Gestational age was determined by last menstrual period, earliest US measurements and sonogram performed on the same day as the MR. The software program used for viewing the atlas, written in C#, permits linked scrolling and resizing of the images. Simultaneous comparison of varying gestational ages is possible. Fetal brain images across gestational ages 18 to 37 weeks are provided as an interactive digital atlas and are available for free download from http://radiology.seattlechildrens.org/teaching/fetal_brain . Improved interpretation of fetal brain abnormalities can be facilitated by the use of a digital atlas cataloging the normal changes throughout fetal development. Here we provide a description of the atlas and a discussion of normal fetal brain development.

  4. Radiation Monitoring - A Key Element in a Nuclear Power Program

    International Nuclear Information System (INIS)

    Hussein, A.S.; El-dally, T.A.

    2008-01-01

    For a nuclear power plant, radiation is especially of great concern to the public and the environment. Therefore, a radiation monitoring program is becoming a critical importance. This program covers all phases of the nuclear plant including preoperational, normal operation, accident and decommissioning. The fundamental objective of radiation monitoring program is to ensure that the health and safety of public inside and around the plant and to confirm the radiation doses are below the dose limits for workers and the public. This paper summarizes the environmental radiation monitoring program for a nuclear power plant

  5. Experience with ATLAS MySQL PanDA database service

    International Nuclear Information System (INIS)

    Smirnov, Y; Wlodek, T; Hover, J; Smith, J; Wenaus, T; Yu, D; De, K; Ozturk, N

    2010-01-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.
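
    As a rough illustration of the kind of monitoring query such a database can serve (a sketch only; the host, credentials and table layout below are invented, not the real PanDA schema):

      # Sketch: a monitoring-style summary query against a MySQL job table.
      # Uses mysql-connector-python; connection details and the jobs table/columns
      # are placeholders, not the actual PanDA database schema.
      import mysql.connector

      conn = mysql.connector.connect(
          host="db.example.org", user="reader", password="***", database="panda_demo")
      try:
          cur = conn.cursor()
          cur.execute(
              "SELECT jobstatus, COUNT(*) FROM jobs "
              "WHERE modificationtime > NOW() - INTERVAL 1 DAY "
              "GROUP BY jobstatus")
          for status, count in cur.fetchall():
              print(f"{status:12s} {count}")
      finally:
          conn.close()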

  6. Experience with ATLAS MySQL PanDA database service

    Energy Technology Data Exchange (ETDEWEB)

    Smirnov, Y; Wlodek, T; Hover, J; Smith, J; Wenaus, T; Yu, D [Physics Department, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States); De, K; Ozturk, N [Department of Physics, University of Texas at Arlington, Arlington, TX, 76019 (United States)

    2010-04-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.

  7. ATLAS computing operations within the GridKa Cloud

    International Nuclear Information System (INIS)

    Kennedy, J; Walker, R; Olszewski, A; Nderitu, S; Serfon, C; Duckeck, G

    2010-01-01

    The organisation and operations model of the ATLAS T1-T2 federation/Cloud associated with the GridKa T1 in Karlsruhe is described. Attention is paid to Cloud level services and the experience gained during the last years of operation. The ATLAS GridKa Cloud is large and diverse, spanning 5 countries and 2 ROCs, and currently comprises 13 core sites. A well defined and tested operations model in such a Cloud is of the utmost importance. We have defined the core Cloud services required by the ATLAS experiment and ensured that they are performed in a managed and sustainable manner. Services such as Distributed Data Management involving data replication, deletion and consistency checks, Monte Carlo Production, software installation and data reprocessing are described in greater detail. In addition to providing these central services we have undertaken several Cloud level stress tests and developed monitoring tools to aid with Cloud diagnostics. Furthermore we have defined good channels of communication between ATLAS, the T1 and the T2s and have pro-active contributions from the T2 manpower. A brief introduction to the GridKa Cloud is provided followed by a more detailed discussion of the operations model and ATLAS services within the Cloud.

  8. Program of telluric lines monitoring

    Directory of Open Access Journals (Sweden)

    Vince I.

    2006-01-01

    A new observational program of telluric lines monitoring was introduced at the Belgrade Astronomical Observatory. The ultimate goal of this program is to investigate the properties of Earth's atmosphere through modeling the observed profiles of telluric lines. The program is intended to observe infrared molecular oxygen lines that were selected according to the spectral sensitivity of the available CCD camera. In this paper we give the initial and the final selection criteria for the spectral lines included in the program, a description of the equipment and procedures used for observations and reduction, a review of preliminary observational results with the estimated precision, and a short discussion on the comparison of the theoretical predictions and the measurements.

  9. ATLAS Grid Data Processing: system evolution and scalability

    CERN Document Server

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in ATLAS main areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, the next generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users provi...

  10. PREIMS - AT Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available switchLanguage; BLAST Search Image Search Home About Archive Update History Data ...Targeted Proteins Research Program (TPRP). Data file File name: at_atlas_preims.zip File URL: ftp://ftp.biosciencedbc.jp/archiv...base Database Description Download License Update History of This Database Site Policy | Contact Us PREIMS - AT Atlas | LSDB Archive ...

  11. The Savannah River Site's Groundwater Monitoring Program

    International Nuclear Information System (INIS)

    1992-01-01

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted during the first quarter of 1992. It includes the analytical data, field data, data review, quality control, and other documentation for this program; provides a record of the program's activities; and serves as an official document of the analytical results

  12. ECOLOGICAL MONITORING AND COMPLIANCE PROGRAM CALENDAR YEAR 2005 REPORT

    Energy Technology Data Exchange (ETDEWEB)

    BECHTEL NEVADA ECOLOGICAL SERVICES

    2006-03-01

    The Ecological Monitoring and Compliance program (EMAC), funded through the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office (NNSA/NSO), monitors the ecosystem of the Nevada Test Site (NTS) and ensures compliance with laws and regulations pertaining to NTS biota. This report summarizes the program's activities conducted by Bechtel Nevada (BN) during the Calendar Year 2005. Program activities included: (1) biological surveys at proposed construction sites, (2) desert tortoise compliance, (3) ecosystem mapping and data management, (4) sensitive and protected/regulated species and unique habitat monitoring, (5) habitat restoration monitoring, and (6) biological monitoring at the Non-Proliferation Test and Evaluation Complex (NPTEC).

  13. The new ATLAS Fast Calorimeter Simulation

    CERN Document Server

    Jacka, Petr; The ATLAS collaboration

    2018-01-01

    With the huge amount of data collected with ATLAS, there is a need to produce a large number of simulated events. These productions are very CPU- and time-consuming when using the full GEANT4 simulation. FastCaloSim is a program to quickly simulate the ATLAS calorimeter response, based on a parameterization of the GEANT4 energy deposits of several kinds of particles in a grid of energy and eta. A new version of FastCaloSim is under development and its integration into the ATLAS simulation infrastructure is ongoing. The use of machine learning techniques improves the performance and decreases the memory usage. Dedicated parameterizations for the forward calorimeters are being studied. First results of the new FastCaloSim show substantial improvements in the description of energy and shower shape variables, including the variables for jet substructure.
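
    A toy sketch of the parameterization idea, sampling a calorimeter response from constants binned in energy and eta; the bin edges and Gaussian parameters below are invented and are not the FastCaloSim parameterization.

      # Sketch: sample a fast-simulated calorimeter response from a parameterization
      # binned in (energy, eta).  All numbers are invented placeholders.
      import bisect
      import random

      ENERGY_EDGES = [1, 10, 100, 1000]          # GeV bin edges
      ETA_EDGES = [0.0, 1.5, 3.2, 4.9]
      # (mean response fraction, relative width) per (energy bin, eta bin)
      PARAMS = {
          (0, 0): (0.92, 0.15), (0, 1): (0.90, 0.18), (0, 2): (0.85, 0.22),
          (1, 0): (0.95, 0.08), (1, 1): (0.94, 0.10), (1, 2): (0.90, 0.14),
          (2, 0): (0.97, 0.04), (2, 1): (0.96, 0.05), (2, 2): (0.93, 0.08),
      }

      def simulated_energy(true_energy, eta, rng=random):
          e_bin = min(max(bisect.bisect_right(ENERGY_EDGES, true_energy) - 1, 0), 2)
          eta_bin = min(max(bisect.bisect_right(ETA_EDGES, abs(eta)) - 1, 0), 2)
          mean, rel_sigma = PARAMS[(e_bin, eta_bin)]
          return true_energy * rng.gauss(mean, rel_sigma)

      print(simulated_energy(50.0, 0.8))   # one fast-simulated deposit in GeV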

  14. Environmental Radiological Impact of Nuclear Power. Monitoring and Control Programs

    International Nuclear Information System (INIS)

    Ramos, L. M.

    2000-01-01

    Radioactive contamination of the environment and public exposure to ionizing radiation may result from releases from programmed or accidental operations in regulated activities, or they may be due to preexisting situations such as contamination caused by past accidents, radioactive fallout from nuclear tests, or increased natural radioactivity resulting from human activities. In many cases, both the emission sources and the environment should be monitored to determine the risk to the population and verify to what extent the limits and conditions established by competent authorities are being observed. Monitoring can be divided into three categories: monitoring of the emission source, of the receiving medium and of members of the public; individual monitoring of the population is extremely rare and would only be considered when estimated doses substantially exceed the annual public dose limit. In practices likely to produce significant radioactive releases, as is the case of nuclear fuel cycle facilities, the limits and conditions for monitoring and controlling them and the requirements for environmental radiological monitoring are established in the licensing process. Programs implemented during normal operation of the facilities form the basis for monitoring in the event of accidents. In addition to environmental radiological monitoring associated with facilities, different countries have monitoring programs outside the facilities' zones of influence, in order to ascertain the nationwide radiological background and determine possible increases in it. In Spain, the facilities that generate radioactive waste have effluent storage, treatment and removal systems and radiological monitoring programs based on site and discharge characteristics. The environmental radiological monitoring system is composed of the network implemented by the owners in the nuclear fuel cycle facilities' zones of influence, and by nationwide monitoring networks managed by the Consejo de

  15. Oil Sands Regional Aquatics Monitoring Program (RAMP) 5 year report

    International Nuclear Information System (INIS)

    Fawcett, K.

    2003-05-01

    This 5-year report outlined and examined the activities of the Regional Aquatics Monitoring Program (RAMP) from its introduction in 1997 up to 2001. The RAMP is a multi-stakeholder program comprised of industry and government representatives as well as members of aboriginal groups and environmental organizations. The objectives of RAMP are to monitor aquatic environments in the oil sands region in order to allow for assessment of regional trends and cumulative effects, as well as to provide baseline data against which impact predictions of recent environmental impact assessments can be verified. Scientific programs conducted as part of RAMP during the 5-year period included water quality and sediment quality analyses; fish monitoring; benthic communities monitoring; water quality and aquatic vegetation analyses of wetlands; and hydrology and climate monitoring. RAMP's programs have expanded annually in scope as a result of increased oil sands development in the region. This report provided outlines of RAMP's individual program objectives and organizational structures, as well as details of all studies conducted for each year. Data collected for all major study areas were presented, and program methodologies for assessing and identifying trends were outlined. refs., tabs., figs.

  16. ATLAS MDT neutron sensitivity measurement and modeling

    International Nuclear Information System (INIS)

    Ahlen, S.; Hu, G.; Osborne, D.; Schulz, A.; Shank, J.; Xu, Q.; Zhou, B.

    2003-01-01

    The sensitivity of the ATLAS precision muon detector element, the Monitored Drift Tube (MDT), to fast neutrons has been measured using a 5.5 MeV Van de Graaff accelerator. The major mechanism of neutron-induced signals in the drift tubes is the elastic collisions between the neutrons and the gas nuclei. The recoil nuclei lose kinetic energy in the gas and produce the signals. By measuring the ATLAS drift tube neutron-induced signal rate and the total neutron flux, the MDT neutron signal sensitivities were determined for different drift gas mixtures and for different neutron beam energies. We also developed a sophisticated simulation model to calculate the neutron-induced signal rate and signal spectrum for ATLAS MDT operation configurations. The calculations agree with the measurements very well. This model can be used to calculate the neutron sensitivities for different gaseous detectors and for neutron energies above those available to this experiment

  17. Study of the ATLAS MDT spectrometer using high energy CERN combined test beam data

    NARCIS (Netherlands)

    Adorisio, C.; et al., [Unknown; Barisonzi, M.; Bobbink, G.; Boterenbrood, H.; Brouwer, G.; Groenstege, H.; Hart, R.; Konig, A.; Linde, F.; van der Graaf, H.; Vermeulen, J.; Vreeswijk, M.; Werneke, P.

    2009-01-01

    In 2004, a combined system test was performed in the H8 beam line at the CERN SPS with a setup reproducing the geometry of sectors of the ATLAS Muon Spectrometer, formed by three stations of Monitored Drift Tubes (MDT). The full ATLAS analysis chain was used to obtain the results presented in this

  18. MPX detectors as LHC luminosity monitor

    Energy Technology Data Exchange (ETDEWEB)

    Sopczak, Andre; Ali, Babar; Bergmann, Benedikt; Caforio, Davide; Heijne, Erik; Pospisil, Stanislav; Seifert, Frank; Solc, Jaroslav; Suk, Michal; Turecek, Daniel [IEAP CTU in Prague (Czech Republic); Ashba, Nedaa; Leroy, Claude; Soueid, Paul [University of Montreal (Canada); Bekhouche, Khaled [Biskra University (Algeria); Campbell, Michael; Nessi, Marzio [CERN (Switzerland); Lipniacka, Anna [Bergen University (Norway)

    2016-07-01

    A network of 16 Medipix-2 (MPX) silicon pixel devices was installed in the ATLAS detector cavern at CERN. It was designed to measure the composition and spectral characteristics of the radiation field in the ATLAS experiment and its surroundings. This study demonstrates that the MPX network can also be used as a self-sufficient luminosity monitoring system. The MPX detectors collect data independently of the ATLAS data-recording chain, and thus they provide independent measurements of the bunch-integrated ATLAS/LHC luminosity. In particular, the MPX detectors located close enough to the primary interaction point are used to perform van der Meer calibration scans with high precision. Results from the luminosity monitoring are presented for 2012 data taken in proton-proton collisions at √(s) = 8 TeV. The characteristics of the LHC luminosity reduction rate are studied and the effects of beam-beam (burn-off) and beam-gas (single bunch) interactions are evaluated. The systematic variations observed in the MPX luminosity measurements are below 0.3% for one-minute intervals.
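
    As a simplified illustration of studying the luminosity reduction rate (the data points are invented; the actual analysis separates burn-off and beam-gas contributions, which this toy fit does not):

      # Sketch: extract an effective luminosity lifetime from a fill by fitting an
      # exponential decay to an (invented) luminosity time series.
      import numpy as np
      from scipy.optimize import curve_fit

      def decay(t, lumi0, tau):
          return lumi0 * np.exp(-t / tau)

      t_hours = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
      lumi = np.array([7.0, 6.5, 6.1, 5.7, 5.3, 5.0])   # 10^33 cm^-2 s^-1, invented

      popt, pcov = curve_fit(decay, t_hours, lumi, p0=(7.0, 15.0))
      lumi0_fit, tau_fit = popt
      print(f"initial luminosity {lumi0_fit:.2f}, effective lifetime {tau_fit:.1f} h")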

  19. The Argonne Radiological Impact Program (ARIP). Part II. MONITOR: A Program and Data Base for Retrieval and Utilization of Pollutant Monitoring Data

    Energy Technology Data Exchange (ETDEWEB)

    Eckerman, Keith F.; Stowe, Ralph F.; Frigerio, Norman A.

    1977-02-01

    The Argonne Radiological Impact Program (ARIP) is an ongoing project of the Laboratory's Division of Environmental Impact Studies that aims at developing methodologies for assessing the carcinogenic hazards associated with nuclear power development. The project's first report (ANL/ES-26, Part I), published in September 1973, discussed models of radiation carcinogenesis and the contribution of U.S. background radiation levels to hazardous dose rates. The current report (Part II) treats the storage and access of available data on radiation and radioactivity levels in the U.S. A computer code (the MONITOR program) is presented, which can serve as a ready-access data bank for all monitoring data acquired over the past two decades. The MONITOR program currently stores data on monitoring locations, types of monitoring efforts, and types of monitoring data reported in Radiation Data and Reports by the various state and federal networks; expansion of this data base to include nuclear power facilities in operation or on order is ongoing. The MONITOR code retrieves information within a search radius, or rectangle, circumscribed by parameters of latitude and longitude, and lists or maps the data as requested. The code, with examples, is given in full in the report.
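
    A small sketch in the spirit of the MONITOR radius retrieval described above, selecting stations within a great-circle radius of a point; the station list is invented.

      # Sketch: select monitoring stations within a search radius of a point.
      # Station names and coordinates are invented for illustration.
      import math

      def haversine_km(lat1, lon1, lat2, lon2):
          """Great-circle distance in kilometres."""
          phi1, phi2 = math.radians(lat1), math.radians(lat2)
          dphi = math.radians(lat2 - lat1)
          dlmb = math.radians(lon2 - lon1)
          a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
          return 2 * 6371.0 * math.asin(math.sqrt(a))

      stations = [                       # (name, latitude, longitude)
          ("Station A", 41.71, -87.98),
          ("Station B", 42.05, -88.30),
          ("Station C", 40.10, -88.24),
      ]

      def within_radius(center_lat, center_lon, radius_km):
          return [(name, round(haversine_km(center_lat, center_lon, lat, lon), 1))
                  for name, lat, lon in stations
                  if haversine_km(center_lat, center_lon, lat, lon) <= radius_km]

      print(within_radius(41.70, -88.00, 60.0))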

  20. APCAL1: Beam Position Monitor Program

    Energy Technology Data Exchange (ETDEWEB)

    Early, R.A.

    1979-12-01

    APCAL1 is an applications program operational on the PEP MODCOMP IV computer for the purpose of converting beam position monitor (BPM) button voltage readings to x,y coordinates. Calibration information and the BPM readings are read from the MODCOMP IV data base. Corresponding x,y coordinates are written in the data base for use by other programs. APCAL1 is normally activated by another program but can be activated by a touch panel for checkout purposes.
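
    The usual way to turn button voltages into a transverse position is a difference-over-sum ratio scaled by a calibration constant; a minimal sketch for a four-button pickup follows (the constants kx and ky are placeholders, not the PEP calibration values read by APCAL1).

      # Sketch: convert BPM button voltages to (x, y) with the standard
      # difference-over-sum estimate.  kx and ky are placeholder calibrations.
      def bpm_position(v_right, v_left, v_top, v_bottom, kx=10.0, ky=10.0):
          """Return (x, y) in mm from four button voltages (arbitrary units)."""
          h_sum = v_right + v_left
          v_sum = v_top + v_bottom
          if h_sum <= 0 or v_sum <= 0:
              raise ValueError("non-positive button sum: no beam or bad reading")
          x = kx * (v_right - v_left) / h_sum
          y = ky * (v_top - v_bottom) / v_sum
          return x, y

      # A beam slightly right of and below centre:
      print(bpm_position(1.10, 0.90, 0.95, 1.05))   # approx (1.0, -0.5) mm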

  1. The Savannah River Site's groundwater monitoring program

    International Nuclear Information System (INIS)

    1991-01-01

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted by EPD/EMS in the first quarter of 1991. It includes the analytical data, field data, data review, quality control, and other documentation for this program, provides a record of the program's activities and rationale, and serves as an official document of the analytical results.

  2. Ecological Monitoring and Compliance Program Fiscal/Calendar Year 2004 Report

    Energy Technology Data Exchange (ETDEWEB)

    Bechtel Nevada

    2005-03-01

    The Ecological Monitoring and Compliance program, funded through the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office, monitors the ecosystem of the Nevada Test Site and ensures compliance with laws and regulations pertaining to Nevada Test Site biota. This report summarizes the program's activities conducted by Bechtel Nevada during the Fiscal Year 2004 and the additional months of October, November, and December 2004, reflecting a change in the monitoring period to a calendar year rather than a fiscal year as reported in the past. This change in the monitoring period was made to better accommodate information required for the Nevada Test Site Environmental Report, which reports on a calendar year rather than a fiscal year. Program activities included: (1) biological surveys at proposed construction sites, (2) desert tortoise compliance, (3) ecosystem mapping and data management, (4) sensitive species and unique habitat monitoring, (5) habitat restoration monitoring, and (6) biological monitoring at the Hazardous Materials Spill Center.

  3. A plan for the North American Bat Monitoring Program (NABat)

    Science.gov (United States)

    Loeb, Susan C.; Rodhouse, Thomas J.; Ellison, Laura E.; Lausen, Cori L.; Reichard, Jonathan D.; Irvine, Kathryn M.; Ingersoll, Thomas E.; Coleman, Jeremy; Thogmartin, Wayne E.; Sauer, John R.; Francis, Charles M.; Bayless, Mylea L.; Stanley, Thomas R.; Johnson, Douglas H.

    2015-01-01

    The purpose of the North American Bat Monitoring Program (NABat) is to create a continent-wide program to monitor bats at local to rangewide scales that will provide reliable data to promote effective conservation decisionmaking and the long-term viability of bat populations across the continent. This is an international, multiagency program. Four approaches will be used to gather monitoring data to assess changes in bat distributions and abundances: winter hibernaculum counts, maternity colony counts, mobile acoustic surveys along road transects, and acoustic surveys at stationary points. These monitoring approaches are described along with methods for identifying species recorded by acoustic detectors. Other chapters describe the sampling design, the database management system (Bat Population Database), and statistical approaches that can be used to analyze data collected through this program.

  4. Technical Basis Document for PFP Area Monitoring Dosimetry Program

    CERN Document Server

    Cooper, J R

    2000-01-01

    This document describes the phantom dosimetry used for the PFP Area Monitoring program and establishes the basis for the Plutonium Finishing Plant's (PFP) area monitoring dosimetry program in accordance with the following requirements: Title 10, Code of Federal Regulations (CFR), part 835, ''Occupational Radiation Protection'' Part 835.403; Hanford Site Radiological Control Manual (HSRCM-1), Part 514; HNF-PRO-382, Area Dosimetry Program; and PNL-MA-842, Hanford External Dosimetry Technical Basis Manual.

  5. Technical Basis Document for PFP Area Monitoring Dosimetry Program

    International Nuclear Information System (INIS)

    COOPER, J.R.

    2000-01-01

    This document describes the phantom dosimetry used for the PFP Area Monitoring program and establishes the basis for the Plutonium Finishing Plant's (PFP) area monitoring dosimetry program in accordance with the following requirements: Title 10, Code of Federal Regulations (CFR), part 835, ''Occupational Radiation Protection'' Part 835.403; Hanford Site Radiological Control Manual (HSRCM-1), Part 514; HNF-PRO-382, Area Dosimetry Program; and PNL-MA-842, Hanford External Dosimetry Technical Basis Manual

  6. ATLAS Potential for Beauty Physics Measurements

    International Nuclear Information System (INIS)

    Smizanska, M.

    2001-01-01

    The main focus of ATLAS b physics has traditionally been on the standard model. In the last few years, aspects of new physics in B-decays have also been addressed. Another field of study started recently is beauty production. We give an overview of the older as well as more recent results. After an introduction outlining selected trigger and detector performance characteristics, we explain the methods and goals of CP violation measurements in decay channels of the B_d^0 meson, the physics of the B_s^0 system and of rare decays. Finally, the ATLAS program for beauty production measurements is presented. (author)

  7. A high-precision X-ray tomograph for quality control of the ATLAS Muon Monitored Drift Tube Chambers

    CERN Document Server

    Schuh, S; Banhidi, Z; Fabjan, Christian Wolfgang; Lampl, W; Marchesotti, M; Rangod, Stephane; Sbrissa, E; Smirnov, Y; Voss, Rüdiger; Woudstra, M; Zhuravlov, V

    2004-01-01

    A dedicated X-ray tomograph has been developed at CERN to control the required wire placement accuracy of better than 20 μm of the 1200 Monitored Drift Tube Chambers which make up most of the precision chamber part of the ATLAS Muon Spectrometer. The tomograph allows the chamber wire positions to be measured with a 2 μm statistical and 2 μm systematic uncertainty over the full chamber cross-section of 2.2 × 0.6 m². Consistent chamber production quality over the 4-year construction phase is ensured with a sampling rate of about 15%. Measurements of about 70 of the 650 MDT chambers so far produced have been essential in assessing the validity and consistency of the various construction procedures.

  8. The ATLAS multi-user upgrade and potential applications

    Energy Technology Data Exchange (ETDEWEB)

    Mustapha, B.; Nolen, J. A.; Savard, G.; Ostroumov, P. N.

    2017-12-01

    With the recent integration of the CARIBU-EBIS charge breeder into the ATLAS accelerator system to provide for more pure and efficient charge breeding of radioactive beams, a multi-user upgrade of the ATLAS facility is being proposed to serve multiple users simultaneously. ATLAS was the first superconducting ion linac in the world and is the US DOE low-energy Nuclear Physics National User Facility. The proposed upgrade will take advantage of the continuous-wave nature of ATLAS and the pulsed nature of the EBIS charge breeder in order to simultaneously accelerate two beams with very close mass-to-charge ratios; one stable from the existing ECR ion source and one radioactive from the newly commissioned EBIS charge breeder. In addition to enhancing the nuclear physics program, beam extraction at different points along the linac will open up the opportunity for other potential applications; for instance, material irradiation studies at ~ 1 MeV/u and isotope production at ~ 6 MeV/u or at the full ATLAS energy of ~ 15 MeV/u. The concept and proposed implementation of the ATLAS multi-user upgrade will be presented. Future plans to enhance the flexibility of this upgrade will also be presented.

  9. Recent Improvements in the ATLAS PanDA Pilot

    International Nuclear Information System (INIS)

    Nilsson, P; De, K; Bejar, J Caballero; Maeno, T; Potekhin, M; Wenaus, T; Compostella, G; Contreras, C; Dos Santos, T

    2012-01-01

    The Production and Distributed Analysis system (PanDA) in the ATLAS experiment uses pilots to execute submitted jobs on the worker nodes. The pilots are designed to deal with different runtime conditions and failure scenarios, and support many storage systems. This talk will give a brief overview of the PanDA pilot system and will present major features and recent improvements including CernVM File System integration, the job retry mechanism, advanced job monitoring including JEM technology, and validation of new pilot code using the HammerCloud stress-testing system. PanDA is used for all ATLAS distributed production and is the primary system for distributed analysis. It is currently used at over 130 sites worldwide. We analyze the performance of the pilot system in processing LHC data on the OSG, EGI and Nordugrid infrastructures used by ATLAS, and describe plans for its further evolution.
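
    The job retry mechanism mentioned above can be illustrated, very loosely, by a generic retry-with-backoff wrapper; this is an editorial sketch, not the PanDA pilot implementation.

      # Sketch: a generic retry-with-backoff wrapper, only to illustrate the idea of
      # a job retry mechanism.
      import time

      def run_with_retries(job, max_attempts=3, backoff_s=30):
          """job: a callable returning True on success, False on a recoverable error."""
          for attempt in range(1, max_attempts + 1):
              if job():
                  return True
              if attempt < max_attempts:
                  time.sleep(backoff_s * attempt)   # simple linear backoff
          return False

      # Example with a stand-in "job" that always fails recoverably:
      print(run_with_retries(lambda: False, max_attempts=2, backoff_s=0))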

  10. Lake Roosevelt Fisheries Monitoring Program; 1988-1989 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Peone, Tim L.; Scholz, Allan T.; Griffith, James R.

    1990-10-01

    In the Northwest Power Planning Council's 1987 Columbia River Basin Fish and Wildlife Program (NPPC 1987), the Council directed the Bonneville Power Administration (BPA) to construct two kokanee salmon (Oncorhynchus nerka) hatcheries as partial mitigation for the loss of anadromous salmon and steelhead incurred by construction of Grand Coulee Dam [Section 903(g)(1)(C)]. The hatcheries will produce kokanee salmon for outplanting into Lake Roosevelt as well as rainbow trout (Oncorhynchus mykiss) for the Lake Roosevelt net-pen program. In Section 903(g)(1)(E), the Council also directed BPA to fund a monitoring program to evaluate the effectiveness of the kokanee hatcheries. The monitoring program included the following components: (1) a year-round, reservoir-wide, creel survey to determine angler use, catch rates and composition, and growth and condition of fish; (2) assessment of kokanee, rainbow, and walleye (Stizostedion vitreum) feeding habits and densities of their preferred prey; and (3) a mark and recapture study designed to assess the effectiveness of different locations where hatchery-raised kokanee and net-pen-reared rainbow trout are released. The above measures were adopted by the Council based on a management plan, developed by the Upper Columbia United Tribes Fisheries Center, Spokane Indian Tribe, Colville Confederated Tribes, Washington Department of Wildlife, and National Park Service, that examined the feasibility of restoring and enhancing Lake Roosevelt fisheries (Scholz et al. 1986). In July 1988, BPA entered into a contract with the Spokane Indian Tribe to initiate the monitoring program. The projected duration of the monitoring program is through 1995. This report contains the results of the monitoring program from August 1988 to December 1989.

  11. Object oriented software development in the atlas collaboration

    International Nuclear Information System (INIS)

    Schaffer, A.

    1994-01-01

    For more than a year a group within the Atlas Collaboration has been investigating the possibilities of the application of object oriented methodology and program development to the software of Atlas. Recently this group has been joined by members of the CMS Collaboration in the submission of a proposal to the DRDC at CERN to find a common solution for the software development environment for LHC. This talk will discuss the progress achieved so far and the future perspective

  12. Plant performance monitoring program at Krsko NPP

    International Nuclear Information System (INIS)

    Bach, B.; Kavsek, D.

    2004-01-01

    A high level of nuclear safety and plant reliability results from the complex interaction of a good design, operational safety and human performance. This is the reason for establishing a set of operational plant safety performance indicators, to enable monitoring of both plant performance and progress. Performance indicators are also used for setting challenging targets and goals for improvement, to gain additional perspective on performance relative to other plants and to provide an indication of a potential need to adjust priorities and resources to achieve improved overall plant performance. A specific indicator trend over a certain period can provide an early warning to plant management to evaluate the causes behind the observed changes. In addition to monitoring the changes and trends, it is also necessary to compare the indicators with identified targets and goals to evaluate performance strengths and weaknesses. The Plant Performance Monitoring Program at Krsko NPP defines and ensures consistent collection, processing, analysis and use of predefined relevant plant operational data, providing a quantitative indication of nuclear power plant performance. When the program was developed, the conceptual framework described in IAEA TECDOC-1141 Operational Safety Performance Indicators for Nuclear Power Plants was used as its basis in order to ensure that a reasonable set of quantitative indications of operational safety performance would be established. Safe, conservative, cautious and reliable operation of the Krsko NPP is a common goal for all plant personnel. It is provided by continuous assurance of both health and safety of the public and employees according to the plant policy stated in program MD-1 Notranje usmeritve in cilji NEK, which is the top plant program. Establishing a program of monitoring and assessing operational plant safety performance indicators reflects the effective safety culture of the plant personnel. (author)

  13. Overview of the ATLAS distributed computing system

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration

    2018-01-01

    The CERN ATLAS experiment successfully uses a worldwide computing infrastructure to support the physics program during LHC Run 2. The grid workflow system PanDA routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total more than 300 PB of data is distributed over more than 150 sites in the WLCG and handled by the ATLAS data management system Rucio. To prepare for the ever-growing LHC luminosity in future runs, new developments are underway to use opportunistic resources such as HPCs even more efficiently and to utilize new technologies. This presentation will review and explain the outline and the performance of the ATLAS distributed computing system and give an outlook on new workflow and data management ideas for the beginning of the LHC Run 3.

  14. The Offshore New European Wind Atlas

    Science.gov (United States)

    Karagali, I.; Hahmann, A. N.; Badger, M.; Hasager, C.; Mann, J.

    2017-12-01

    The New European Wind Atlas (NEWA) is a joint effort of research agencies from eight European countries, co-funded under the ERANET Plus Program. The project is structured around two areas of work: development of dynamical downscaling methodologies and measurement campaigns to validate these methodologies, leading to the creation and publication of a European wind atlas in electronic form. This atlas will contain an offshore component extending 100 km from the European coasts. To achieve this, mesoscale models along with various observational datasets are utilised. Scanning lidars located at the coastline were used to compare the coastal wind gradient reproduced by the mesoscale model. Currently, an experimental campaign is occurring in the Baltic Sea, with a lidar located in a commercial ship sailing from Germany to Lithuania, thus covering the entire span of the south Baltic basin. In addition, satellite wind retrievals from scatterometers and Synthetic Aperture Radar (SAR) instruments were used to generate mean wind field maps, validate offshore modelled wind fields and identify the optimal model set-up parameters. The aim of this study is to compare the initial outputs from the offshore wind atlas produced by the Weather Research and Forecasting (WRF) model, still in a pre-operational phase, and the METOP-A/B Advanced Scatterometer (ASCAT) wind fields, reprocessed to stress-equivalent winds at 10 m. Different experiments were set up to evaluate the model sensitivity for the various domains covered by the NEWA offshore atlas. ASCAT winds were utilised to assess the performance of the WRF offshore atlases. In addition, ASCAT winds were used to create an offshore atlas covering the years 2007 to 2016, capturing the signature of various spatial wind features, such as channelling and lee effects from complex coastal topographical elements.
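
    A small sketch of building a mean wind-field map from a stack of satellite passes with gaps, in the spirit of the ASCAT processing described above; the tiny arrays are invented stand-ins for gridded wind speeds.

      # Sketch: per-cell mean wind speed from several satellite passes with gaps
      # (NaN where there was no retrieval).  The arrays are invented.
      import numpy as np

      passes = np.array([
          [[7.2, 6.8], [np.nan, 5.9]],
          [[6.9, np.nan], [8.1, 6.2]],
          [[7.5, 7.0], [7.8, np.nan]],
      ])   # shape: (n_passes, lat, lon), wind speed in m/s

      mean_wind = np.nanmean(passes, axis=0)         # per-cell mean ignoring gaps
      n_samples = np.sum(~np.isnan(passes), axis=0)  # retrievals contributing per cell

      print(mean_wind)
      print(n_samples)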

  15. The Savannah River Site's Groundwater Monitoring Program. Fourth quarter, 1990

    Energy Technology Data Exchange (ETDEWEB)

    1991-06-18

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted in the fourth quarter of 1990. It includes the analytical data, field data, well activity data, and other documentation for this program, provides a record of the program's activities and rationale, and serves as an official document of the analytical results. The groundwater monitoring program includes the following activities: installation, maintenance, and abandonment of monitoring wells, environmental soil borings, development of the sampling and analytical schedule, collection and analyses of groundwater samples, review of analytical and other data, maintenance of the databases containing groundwater monitoring data, quality assurance (QA) evaluations of laboratory performance, and reports of results to waste-site facility custodians and to the Environmental Protection Section (EPS) of EPD.

  16. Performance of the ATLAS muon trigger in run 2

    CERN Document Server

    Morgenstern, Marcus; The ATLAS collaboration

    2017-01-01

    Triggering on muons is a crucial ingredient to fulfill the physics program of the ATLAS experiment. The ATLAS trigger system deploys a two-stage strategy, a hardware-based Level-1 trigger followed by a software-based high-level trigger, to select events of interest at a suitable recording rate. Both stages underwent upgrades to cope with the challenges of Run-2 data-taking at centre-of-mass energies of 13 TeV and instantaneous luminosities up to 2x10$^{34}$ cm$^{-2}$s$^{-1}$. The design of the ATLAS muon triggers and their performance in proton-proton collisions at 13 TeV are presented.

  17. Technical Basis Document for PFP Area Monitoring Dosimetry Program

    Energy Technology Data Exchange (ETDEWEB)

    COOPER, J.R.

    2000-04-17

    This document describes the phantom dosimetry used for the PFP Area Monitoring program and establishes the basis for the Plutonium Finishing Plant's (PFP) area monitoring dosimetry program in accordance with the following requirements: Title 10, Code of Federal Regulations (CFR), part 835, ''Occupational Radiation Protection'' Part 835.403; Hanford Site Radiological Control Manual (HSRCM-1), Part 514; HNF-PRO-382, Area Dosimetry Program; and PNL-MA-842, Hanford External Dosimetry Technical Basis Manual.

  18. Sandia National Laboratories, California Environmental Monitoring Program annual report for 2011.

    Energy Technology Data Exchange (ETDEWEB)

    Holland, Robert C.

    2011-03-01

    The annual program report provides detailed information about all aspects of the SNL/California Environmental Monitoring Program. It functions as supporting documentation to the SNL/California Environmental Management System Program Manual. The 2010 program report describes the activities undertaken during the previous year, and activities planned in future years to implement the Environmental Monitoring Program, one of six programs that supports environmental management at SNL/California.

  19. Jet calibration in the ATLAS experiment at LHC

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    Jets produced in the hadronisation of quarks and gluons play a central role in the rich physics program that will be covered by the ATLAS experiment at the LHC, and are central elements of the signature for many physics channels. A well-understood energy scale, which for some processes demands an uncertainty of order 1%, is a prerequisite. Moreover, in early data we face the challenge of dealing with the unexpected issues of a brand new detector in an unexplored energy domain. The ATLAS collaboration is carrying out a program to revisit the jet calibration strategies used in earlier hadron-collider experiments and to develop a strategy which takes account of the new experimental problems and the demand for greater measurement precision that will be faced at the LHC. The ATLAS calorimeter is intrinsically non-compensating and we will present the use of different offline approaches based on cell energy density and jet topology to correct for this effect on jet energy resolution and scale. In additio...

  20. Overview of four prescription monitoring/review programs in Canada.

    Science.gov (United States)

    Furlan, Andrea D; MacDougall, Peter; Pellerin, Denise; Shaw, Karen; Spitzig, Doug; Wilson, Galt; Wright, Janet

    2014-01-01

    Prescription monitoring or review programs collect information about prescription and dispensing of controlled substances for the purposes of monitoring, analysis and education. In Canada, it is the responsibility of the provincial institutions to organize, maintain and run such programs. To describe the characteristics of four provincial programs that have been in place for >6 years. The managers of the prescription monitoring/review programs of four provinces (British Columbia, Alberta, Saskatchewan and Nova Scotia) were invited to present at a symposium at the Canadian Pain Society in May 2012. In preparation for the symposium, one author collected and summarized the information. Three provinces have a mix of review and monitoring programs; the program in British Columbia is purely for review and education. All programs include controlled substances (narcotics, barbiturates and psychostimulants); however, other substances are differentially included among the programs: anabolic steroids are included in Saskatchewan and Nova Scotia; and cannabinoids are included in British Columbia and Nova Scotia. Access to the database is available to pharmacists in all provinces. Physicians need consent from patients in British Columbia, and only professionals registered with the program can access the database in Alberta. The definition of inappropriate prescribing and dispensing is not uniform. Double doctoring, double pharmacy and high-volume dispensing are considered to be red flags in all programs. There is variability among Canadian provinces in managing prescription monitoring/review programs.

  1. Overview of recent results from the ATLAS experiment

    CERN Document Server

    Grabowska-Bold, Iwona; The ATLAS collaboration

    2017-01-01

    The heavy-ion program in the ATLAS experiment at the LHC originated as an extensive program to probe and characterize the hot, dense matter created in relativistic lead-lead collisions. In recent years, the program has also broadened to a detailed study of collective behavior in smaller systems. In particular, the techniques used to study larger systems are also applied to proton-proton and proton-lead collisions over a wide range of particle multiplicities, to try and understand the early-time dynamics which lead to similar flow-like features in all of the systems. Another recent development is a program studying ultra-peripheral collisions, which provide gamma-gamma and photonuclear processes over a wide range of CM energy, to probe the nuclear wavefunction. This talk presents the most recent results from the ATLAS experiment based on Run 1 and Run 2 data, including measurements of collectivity over a wide range of collision systems, potential nPDF modifications — using electroweak bosons, inclusive jets,...

  2. Monitoring multiple species: Estimating state variables and exploring the efficacy of a monitoring program

    Science.gov (United States)

    Mattfeldt, S.D.; Bailey, L.L.; Grant, E.H.C.

    2009-01-01

    Monitoring programs have the potential to identify population declines and differentiate among the possible cause(s) of these declines. Recent criticisms regarding the design of monitoring programs have highlighted a failure to clearly state objectives and to address detectability and spatial sampling issues. Here, we incorporate these criticisms to design an efficient monitoring program whose goals are to determine environmental factors which influence the current distribution and measure change in distributions over time for a suite of amphibians. In designing the study we (1) specified a priori factors that may relate to occupancy, extinction, and colonization probabilities and (2) used the data collected (incorporating detectability) to address our scientific questions and adjust our sampling protocols. Our results highlight the role of wetland hydroperiod and other local covariates in the probability of amphibian occupancy. There was a change in overall occupancy probabilities for most species over the first three years of monitoring. Most colonization and extinction estimates were constant over time (years) and space (among wetlands), with one notable exception: local extinction probabilities for Rana clamitans were lower for wetlands with longer hydroperiods. We used information from the target system to generate scenarios of population change and gauge the ability of the current sampling to meet monitoring goals. Our results highlight the limitations of the current sampling design, emphasizing the need for long-term efforts, with periodic re-evaluation of the program in a framework that can inform management decisions.
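
    To illustrate detection-corrected occupancy estimation in miniature (a single-season sketch with invented detection histories; the study itself fits richer multi-season models with colonization and extinction terms):

      # Sketch: maximum-likelihood estimates of occupancy (psi) and detection (p)
      # from detection histories, correcting for imperfect detection.  This is a
      # minimal single-season model with invented data, not the study's analysis.
      import numpy as np
      from scipy.optimize import minimize

      # Invented detection histories: rows = wetlands, columns = repeat visits.
      histories = np.array([
          [1, 0, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0],
          [0, 0, 0], [1, 0, 0], [0, 0, 0], [1, 1, 0],
      ])

      def neg_log_lik(params):
          psi, p = 1 / (1 + np.exp(-params))        # logit -> probability
          k = histories.shape[1]
          detections = histories.sum(axis=1)
          lik = np.where(
              detections > 0,
              psi * p**detections * (1 - p)**(k - detections),
              psi * (1 - p)**k + (1 - psi),
          )
          return -np.sum(np.log(lik))

      res = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
      psi_hat, p_hat = 1 / (1 + np.exp(-res.x))
      print(f"occupancy = {psi_hat:.2f}, detection = {p_hat:.2f}")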

  3. Forward Detectors in ATLAS: ALFA, ZDC and LUCID

    CERN Document Server

    Fabbri, L; The ATLAS collaboration

    2009-01-01

    In order to determine the experimental cross sections for the observed physics processes, an estimation of the absolute luminosity is needed. In fact, a careful study of "well known" processes will be one of the first steps of the LHC experiments, as it can provide possible signatures of new physics, which consist of deviations with respect to the Standard Model (SM) predictions. The methodologies for luminosity monitoring and total cross section estimation at the LHC will be reviewed in this talk along with the dedicated detectors of the ATLAS experiment. ATLAS will make extensive use of the detectors in the forward region, each one with a different task: LUCID (LUminosity measurement using Cherenkov Integrating Detector) is a system of 40 (2 x 20) Cherenkov tubes, surrounding the beam pipe at about 17 m from the interaction region. It will be able to monitor the collision-by-collision luminosity by detecting and counting the number of charged particles coming from the impact point. ALFA (Absolute Luminosi...

  4. Ecological Monitoring and Compliance Program Fiscal Year 2003 Report

    Energy Technology Data Exchange (ETDEWEB)

    Bechtel Nevada

    2003-12-01

    The Ecological Monitoring and Compliance program, funded through the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office, monitors the ecosystem of the Nevada Test Site and ensures compliance with laws and regulations pertaining to Nevada Test Site biota. This report summarizes the program's activities conducted by Bechtel Nevada during fiscal year 2003.

  5. Active Sites Environmental Monitoring Program. FY 1993: Annual report

    International Nuclear Information System (INIS)

    Morrissey, C.M.; Ashwood, T.L.; Hicks, D.S.; Marsh, J.D.

    1994-08-01

    This report continues a series of annual and semiannual reports that present the results of the Active Sites Environmental Monitoring Program (ASEMP) monitoring activities. The report details monitoring data for fiscal year (FY) 1993 and is divided into three major areas: SWSA 6 [including tumulus pads, Interim Waste Management Facility (IWMF), and other sites], the low-level Liquid-Waste Solidification Project (LWSP), and TRU-waste storage facilities in SWSA 5 N. The detailed monitoring methodology is described in the second revision of the ASEMP program plan. This report also presents a summary of the methodology used to gather data for each major area along with the results obtained during FY 1993

  6. Monitoring activities review of the Radiological Environmental Surveillance Program

    International Nuclear Information System (INIS)

    Ritter, P.D.

    1992-03-01

    The 1992 Monitoring Activities Review (MAR) is directed at the Radiological Environmental Surveillance Program (RESP) activities at the Radioactive Waste Management Complex (RWMC) of the Idaho National Engineering Laboratory (INEL). MAR panelists studied RESP documents and discussed their concerns with Environmental Monitoring Unit (EMU) staff and other panel members. These concerns were subsequently consolidated into a collection of recommendations with supporting discussions. Recommendations focus on specific monitoring activities, as well as the overall program. The MAR report also contains pertinent comments that should not require further action.

  7. Operational status of the uranium beam upgrade of the ATLAS accelerator

    International Nuclear Information System (INIS)

    Pardo, R.C.; Bollinger, L.M.; Nolen, J.A.

    1993-01-01

    The Positive-Ion Injector (PII) for ATLAS is complete. First beams from the new injector have been accelerated and used for experiments at ATLAS. The PII consists of an ECR ion source on a 350-kV platform and a low-velocity superconducting linac. The first acceleration of uranium for the experimental program has demonstrated that the design goals of the project have been met. Since the summer of 1992, the new injector has been used for the research program approximately 50% of the time. Longitudinal beam quality from the new injector has been measured to be significantly better than that of comparable beams from the tandem injector. Changes to the mix of resonators in the main ATLAS accelerator to better match the velocity profile of heavy beams such as uranium are nearly complete, and uranium energies up to 6.45 MeV per nucleon have been achieved. The operating experience of the new ATLAS facility will be discussed with emphasis on the measured beam quality as well as the achieved beam energies and currents.

  8. An IPMI-compliant control system for the ATLAS TileCal Phase II Upgrade PreProcessor module

    CERN Document Server

    Zuccarello, Pedro Diego; The ATLAS collaboration

    2016-01-01

    TileCal is the Tile hadronic calorimeter of the ATLAS experiment at the LHC. The LHC upgrade program, currently under development, will culminate in the High Luminosity LHC (HL-LHC), which is expected to increase the nominal LHC instantaneous luminosity by about a factor of five. The readout electronics of the Tile calorimeter are being redesigned, introducing a new read-out strategy in order to adapt the detector to the new HL-LHC parameters. The data generated inside the detector at every bunch crossing will be transmitted to the PreProcessor (PPR) boards before any event selection is applied. The PPRs will be located at off-detector sites. The PPR will be responsible for providing preprocessed trigger information to the first level of the ATLAS trigger (L1). Overall, it will represent the interface between the data acquisition, trigger and control systems and the on-detector electronics. The PPR, being an important part of the readout system, needs to be remotely accessed and monitored to prevent failures or, in cas...

  9. Control and Data Acquisition System of the ATLAS Facility

    International Nuclear Information System (INIS)

    Choi, Ki-Yong; Kwon, Tae-Soon; Cho, Seok; Park, Hyun-Sik; Baek, Won-Pil; Kim, Jung-Taek

    2007-02-01

    This report describes the control and data acquisition system of an integral effect test facility, the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation) facility, which has recently been constructed at KAERI (Korea Atomic Energy Research Institute). The control and data acquisition system of the ATLAS is built on a hybrid distributed control system (DCS) from RTP Corp. The ARIDES system, provided by BNF Technology Inc. and running on a Linux platform, is used as the control software. The I/O signals consist of 1995 channels and are processed at 10 Hz. The Human-Machine Interface (HMI) consists of 43 processing windows, classified according to fluid system. All control devices can be controlled by manual, auto, sequence, group, and table control methods. The monitoring system can display real-time trends or historical data of the selected I/O signals on LCD monitors in graphical form. The data logging system can be started or stopped by the operator, and the logging frequency can be set to 0.5, 1, 2, or 10 Hz. The fluid systems of the ATLAS facility range from the primary system to auxiliary systems. Each fluid system has a control similarity to the prototype plant, APR1400/OPR1000.

  10. Monitoring and evaluation of green public procurement programs

    Energy Technology Data Exchange (ETDEWEB)

    Adell, Aure [Ecoinstitut, Barcelona (Spain); Schaefer, Bettina [Ecoinstitut, Barcelona (Spain); Ravi, Kavita [US Department of Energy, Washington, DC (United States); Corry, Jenny [Collaborative Labeling and Appliance Standards Program (United States)

    2013-10-15

    Effective procurement policies can help governments save considerable amounts of money while also reducing energy consumption. Additionally, private sector companies which purchase large numbers of energy-consuming devices can benefit from procurement policies that minimize life-cycle energy costs. Both public and private procurement programs offer opportunities to generate market-transforming demand for energy efficient appliances and lighting fixtures. In recent years, several governments have implemented policies to procure energy efficient products and services. When deploying these policies, efforts have focused on developing resources for implementation (guidelines, energy efficiency specifications for tenders, life cycle costing tools, training, etc.) rather than defining monitoring systems to track progress against the set objectives. Implementation resources are necessary to make effective policies; however, developing Monitoring and Evaluation (M and E) mechanisms are critical to ensure that the policies are effective. The purpose of this article is to provide policy makers and procurement officials with a preliminary map of existing approaches and key components to monitor Energy Efficient Procurement (EEP) programs in order to contribute to the improvement of their own systems. Case studies are used throughout the paper to illustrate promising approaches to improve the M and E of EEP programs, from the definition of the system or data collection to complementary instruments to improve both the monitoring response and program results.

  11. 14 CFR 152.319 - Monitoring and reporting of program performance.

    Science.gov (United States)

    2010-01-01

    ... performance. 152.319 Section 152.319 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRPORTS AIRPORT AID PROGRAM Accounting and Reporting Requirements § 152.319 Monitoring and reporting of program performance. (a) The sponsor or planning agency shall monitor performance...

  12. Anti-Atlas Mountains, Morocco

    Science.gov (United States)

    2003-01-01

    The Anti-Atlas Mountains of Morocco formed as a result of the collision of the African and Eurasian tectonic plates about 80 million years ago. This collision destroyed the Tethys Ocean; the limestone, sandstone, claystone, and gypsum layers that formed the ocean bed were folded and crumpled to create the Atlas and Anti-Atlas Mountains. In this ASTER image, short wavelength infrared bands are combined to dramatically highlight the different rock types, and illustrate the complex folding. The yellowish, orange and green areas are limestones, sandstones and gypsum; the dark blue and green areas are underlying granitic rocks. The ability to map geology using ASTER data is enhanced by the multiple short wavelength infrared bands, that are sensitive to differences in rock mineralogy. This image was acquired on June 13, 2001 by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite. With its 14 spectral bands from the visible to the thermal infrared wavelength region, and its high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER images Earth to map and monitor the changing surface of our planet. ASTER is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of Economy, Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products. The broad spectral coverage and high spectral resolution of ASTER will provide scientists in numerous disciplines with critical information for surface mapping, and monitoring of dynamic conditions and temporal change. Example applications are: monitoring glacial advances and retreats; monitoring potentially active volcanoes; identifying crop stress; determining cloud morphology and physical properties; wetlands evaluation; thermal pollution monitoring; coral reef degradation; surface temperature mapping of soils and
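
    False-colour composites of this kind are typically built by contrast-stretching three bands and stacking them as RGB channels; the sketch below does this with synthetic arrays (the band choice and percentile stretch are illustrative assumptions, not the exact combination used for this image).

        # Minimal false-colour composite sketch: stack three stretched bands as RGB.
        import numpy as np

        rng = np.random.default_rng(0)
        band4, band6, band8 = (rng.random((256, 256)) for _ in range(3))  # synthetic SWIR bands

        def stretch(band, lo_pct=2, hi_pct=98):
            # Percentile contrast stretch into the [0, 1] display range.
            lo, hi = np.percentile(band, [lo_pct, hi_pct])
            return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

        rgb = np.dstack([stretch(band4), stretch(band6), stretch(band8)])  # shape (256, 256, 3)
        print(rgb.shape, rgb.min(), rgb.max())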

  13. A synthesis of evaluation monitoring projects by the forest health monitoring program (1998-2007)

    Science.gov (United States)

    William A. Bechtold; Michael J. Bohne; Barbara L. Conkling; Dana L. Friedman

    2012-01-01

    The national Forest Health Monitoring Program of the Forest Service, U.S. Department of Agriculture, has funded over 200 Evaluation Monitoring projects. Evaluation Monitoring is designed to verify and define the extent of deterioration in forest ecosystems where potential problems have been identified. This report is a synthesis of results from over 150 Evaluation...

  14. Streamlined Calibration of the ATLAS Muon Spectrometer Precision Chambers

    CERN Document Server

    Levin, DS; The ATLAS collaboration; Dai, T; Diehl, EB; Ferretti, C; Hindes, JM; Zhou, B

    2009-01-01

    The ATLAS Muon Spectrometer comprises nearly 1200 optically monitored drift tube chambers (MDTs) containing 354,000 aluminum drift tubes. The chambers are configured in barrel and endcap regions. The momentum resolution required for the LHC physics reach (dp/p = 3% and 10% at 100 GeV and 1 TeV) demands rigorous MDT drift tube calibration with frequent updates. These calibrations (RT functions) convert the measured drift times to drift radii and are a critical component of the spectrometer performance. They are sensitive to the MDT gas composition: Ar 93%, CO2 7% at 3 bar, flowing through the detector at a rate of 100,000 l hr−1. We report on the generation and application of Universal RT calibrations derived from an inline gas system monitor chamber. Results from ATLAS cosmic ray commissioning data are included. These Universal RTs are intended for muon track reconstruction in the LHC startup phase.
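
    At its core, an RT function is a monotonic mapping from measured drift time to drift radius that can be applied by interpolating a calibration table; the sketch below uses made-up calibration points, not an actual ATLAS RT function.

        # Minimal RT-function sketch: convert drift times to drift radii by
        # interpolating a calibration table (values below are illustrative only).
        import numpy as np

        # Calibration table: drift time [ns] -> drift radius [mm]
        t_cal = np.array([0.0, 100.0, 250.0, 450.0, 700.0])
        r_cal = np.array([0.0, 3.0, 7.0, 11.0, 14.6])

        def drift_radius(t_drift_ns):
            """Interpolate the RT table; values outside the range are clamped to the endpoints."""
            return np.interp(t_drift_ns, t_cal, r_cal)

        print(drift_radius(np.array([50.0, 300.0, 800.0])))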

  15. Advances in ATLAS@Home towards a major ATLAS computing resource

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2018-01-01

    The volunteer computing project ATLAS@Home has been providing a stable computing resource for the ATLAS experiment since 2013. It has recently undergone some significant developments and as a result has become one of the largest resources contributing to ATLAS computing, by expanding its scope beyond traditional volunteers and into exploitation of idle computing power in ATLAS data centres. Removing the need for virtualization on Linux and instead using container technology has significantly lowered the entry barrier for data centre participation, and in this paper we describe the implementation and results of this change. We also present other recent changes and improvements in the project. In early 2017 the ATLAS@Home project was merged into a combined LHC@Home platform, providing a unified gateway to all CERN-related volunteer computing projects. The ATLAS Event Service shifts data processing from file-level to event-level and we describe how ATLAS@Home was incorporated into this new paradigm. The finishing...

  16. Individual monitoring program for occupational exposures to radionuclides by inhalation

    International Nuclear Information System (INIS)

    Piechowski, J.; Menoux, B.

    1985-01-01

    Individual monitoring of exposure to radioactive products is carried out when there is a risk of significant internal contamination. In its publications 26 and 35 the International Commission on Radiological Protection has given recommendations on the monitoring programs. Besides, the metabolic models developed in publication 30 have allowed to establish retention and excretion functions for some radionuclides after intake by inhalation in the adult man. These have been published in the report CEA-R--5266. Considering these data and taking into account the practical problems that occur in the course of surveillance of workers, programs of individual monitoring for contamination by inhalation are proposed. These programs for routine and special monitoring have been developed for the most common radionuclides involved in the nuclear industry [fr

  17. ATLAS TDAQ System Administration: an overview and evolution

    CERN Document Server

    LEE, CJ; The ATLAS collaboration; BOGDANCHIKOV, A; BRASOLIN, F; CONTESCU, AC; DARLEA, G-L; KOROL, A; SCANNICCHIO, DA; TWOMEY, M; VALSAN, ML

    2013-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The system processes the direct data readout from ~100 million channels on the detector through multiple trigger levels, selecting interesting events for analysis with a factor of $10^{7}$ reduction on the data rate with a latency of less than a few seconds. Most of the functionality is implemented on ~3000 servers composing the online farm. Due to the critical functionality of the system a sophisticated computing environment is maintained, covering the online farm and ATLAS control rooms, as well as a number of development and testing labs. The specificity of the system required the development of dedicated applications (e.g. ConfDB, BWM) for system configuration and maintenance; in parallel other Open Source tools (Puppet and Quattor) are used to centrally configure the operating systems. The health monitoring of the TDAQ s...

  18. ATLAS TDAQ System Administration: an overview and evolution

    CERN Document Server

    LEE, CJ; The ATLAS collaboration; BOGDANCHIKOV, A; BRASOLIN, F; CONTESCU, AC; DARLEA, GL; KOROL, A; SCANNICCHIO, DA; TWOMEY, M; VALSAN, ML

    2013-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The system processes the direct data readout from ~100 million channels on the detector through three trigger levels, selecting interesting events for analysis with a factor of 10^7 reduction on the data rate with a latency of less than a few seconds. Most of the functionality is implemented on ~3000 servers composing the online farm. Due to the critical functionality of the system a sophisticated computing environment is maintained, covering the online farm and ATLAS control rooms, as well as a number of development and testing labs. The specificity of the system required the development of dedicated applications (e.g. ConfDB, BWM) for system configuration and maintenance; in parallel other Open Source tools (Puppet and Quattor) are used to centrally configure the operating systems. The health monitoring of the TDAQ system h...

  19. Reliability Engineering Analysis of ATLAS Data Reprocessing Campaigns

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration; Karpenko, D

    2013-01-01

    During three years of LHC data taking, the ATLAS collaboration completed three petascale data reprocessing campaigns on the Grid, with up to 2 PB of data being reprocessed every year. In reprocessing on the Grid, failures can occur for a variety of reasons, while Grid heterogeneity makes failures hard to diagnose and repair quickly. As a result, Big Data processing on the Grid must tolerate a continuous stream of failures, errors and faults. While ATLAS fault-tolerance mechanisms improve the reliability of Big Data processing on the Grid, their benefits come at a cost and introduce delays, making performance prediction difficult. Reliability Engineering provides a framework for fundamental understanding of Big Data processing on the Grid, which is not a desirable enhancement but a necessary requirement. In ATLAS, cost monitoring and performance prediction became critical for the success of the reprocessing campaigns conducted in preparation for the major physics conferences. In addition, our Reliability...

  1. Operating Experiences of a Loss of Voltage Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun-Chan [Korea Hydro and Nuclear Power Co. Ltd., Daejeon (Korea, Republic of)

    2015-10-15

    Loss of voltage (LOV) events continue to occur due to inadequate work management and random human errors. On February 26, 2015, regulators analyzed the root causes of LOV events and presented the results to the nuclear industry. Currently, KHNP uses a risk monitoring program, named 'LOV Monitor', for LOV prevention during pilot plant outages. This review introduces operating experience with LOV Monitor, based on the evaluation of a real event. Operating experience with LOV Monitor at the pilot plants confirmed that the program can detect and reduce LOV risks arising from scheduling errors, such as the simultaneous maintenance of energized and de-energized trains, by considering the physical conditions of the power circuit breakers. However, a maintenance culture that heeds the risk-monitoring results must be strengthened for the application of LOV Monitor to outages to have a substantial effect.
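
    A highly simplified sketch of the kind of scheduling check such a risk monitor might perform is shown below: it flags overlapping maintenance windows that touch both power trains at the same time (all work-order names, times and the data model are hypothetical, not taken from LOV Monitor).

        # Simplified loss-of-voltage scheduling check (hypothetical outage data).
        # Flags overlapping maintenance windows that touch both power trains at once.
        from datetime import datetime

        work_orders = [
            {"id": "WO-101", "train": "A", "start": datetime(2015, 10, 1, 8), "end": datetime(2015, 10, 1, 18)},
            {"id": "WO-102", "train": "B", "start": datetime(2015, 10, 1, 10), "end": datetime(2015, 10, 1, 14)},
        ]

        def overlaps(a, b):
            return a["start"] < b["end"] and b["start"] < a["end"]

        conflicts = [(a["id"], b["id"])
                     for i, a in enumerate(work_orders)
                     for b in work_orders[i + 1:]
                     if a["train"] != b["train"] and overlaps(a, b)]

        for pair in conflicts:
            print("potential loss-of-voltage window:", pair)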

  2. ATLAS Distributed Computing: Experience and Evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centers around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics program including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2014 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  3. The ATLAS Detector Control System

    International Nuclear Information System (INIS)

    Lantzsch, K; Braun, H; Hirschbuehl, D; Kersten, S; Arfaoui, S; Franz, S; Gutzwiller, O; Schlenker, S; Tsarouchas, C A; Mindur, B; Hartert, J; Zimmermann, S; Talyshev, A; Oliveira Damazio, D; Poblaguev, A; Martin, T; Thompson, P D; Caforio, D; Sbarra, C; Hoffmann, D

    2012-01-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub detectors as well as the common experimental infrastructure are controlled and monitored by the Detector Control System (DCS) using a highly distributed system of 140 server machines running the industrial SCADA product PVSS. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, manage the communication with external systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS data acquisition system. Different databases are used to store the online parameters of the experiment, replicate a subset used for physics reconstruction, and store the configuration parameters of the systems. This contribution describes the computing architecture and software tools to handle this complex and highly interconnected control system.

  4. The ATLAS Detector Control System

    Science.gov (United States)

    Lantzsch, K.; Arfaoui, S.; Franz, S.; Gutzwiller, O.; Schlenker, S.; Tsarouchas, C. A.; Mindur, B.; Hartert, J.; Zimmermann, S.; Talyshev, A.; Oliveira Damazio, D.; Poblaguev, A.; Braun, H.; Hirschbuehl, D.; Kersten, S.; Martin, T.; Thompson, P. D.; Caforio, D.; Sbarra, C.; Hoffmann, D.; Nemecek, S.; Robichaud-Veronneau, A.; Wynne, B.; Banas, E.; Hajduk, Z.; Olszowska, J.; Stanecka, E.; Bindi, M.; Polini, A.; Deliyergiyev, M.; Mandic, I.; Ertel, E.; Marques Vinagre, F.; Ribeiro, G.; Santos, H. F.; Barillari, T.; Habring, J.; Huber, J.; Arabidze, G.; Boterenbrood, H.; Hart, R.; Iakovidis, G.; Karakostas, K.; Leontsinis, S.; Mountricha, E.; Ntekas, K.; Filimonov, V.; Khomutnikov, V.; Kovalenko, S.; Grassi, V.; Mitrevski, J.; Phillips, P.; Chekulaev, S.; D'Auria, S.; Nagai, K.; Tartarelli, G. F.; Aielli, G.; Marchese, F.; Lafarguette, P.; Brenner, R.

    2012-12-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub detectors as well as the common experimental infrastructure are controlled and monitored by the Detector Control System (DCS) using a highly distributed system of 140 server machines running the industrial SCADA product PVSS. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, manage the communication with external systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS data acquisition system. Different databases are used to store the online parameters of the experiment, replicate a subset used for physics reconstruction, and store the configuration parameters of the systems. This contribution describes the computing architecture and software tools to handle this complex and highly interconnected control system.

  5. Liquid Effluent Monitoring Program at the Pacific Northwest Laboratory

    International Nuclear Information System (INIS)

    Ballinger, M.Y.

    1995-05-01

    Pacific Northwest Laboratory (PNL) is conducting a program to monitor the waste water from PNL-operated research and development facilities on the Hanford Site. The purpose of the program is to collect data to assess administrative controls and to determine whether discharges to the process sewer meet sewer criteria. Samples have been collected on a regular basis from the major PNL facilities on the Hanford Site since March 1994. A broad range of analyses has been performed to determine the primary constituents in the liquid effluent. The sampling program is briefly summarized in the paper. Continuous monitoring of pH, conductivity, and flow also provides data on the liquid effluent streams. In addition to sampling and monitoring, the program is evaluating the dynamics of the waste stream with dye studies and is evaluating the use of newer technologies for potential deployment in future sampling/monitoring efforts. Information collected to date has been valuable in determining sources of constituents that may be higher than the Waste Acceptance Criteria (WAC) for the Treated Effluent Disposal Facility (TEDF). This facility treats the waste streams before discharge to the Columbia River

  6. Overview of national bird population monitoring programs and databases

    Science.gov (United States)

    Gregory S. Butcher; Bruce Peterjohn; C. John Ralph

    1993-01-01

    A number of programs have been set up to monitor populations of nongame migratory birds. We review these programs and their purposes and provide information on obtaining data or results from these programs. In addition, we review recommendations for improving these programs.

  7. The effect of morphometric atlas selection on multi-atlas-based automatic brachial plexus segmentation

    International Nuclear Information System (INIS)

    Van de Velde, Joris; Wouters, Johan; Vercauteren, Tom; De Gersem, Werner; Achten, Eric; De Neve, Wilfried; Van Hoof, Tom

    2015-01-01

    The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based BP autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in the shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases was taken. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and Inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, the average similarity indices of the morphometrically selected atlas group were significantly higher than those of the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy.
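
    For reference, the Dice and Jaccard indices used above have simple set-based definitions that can be computed directly on binary masks; a minimal sketch follows (the Inclusion index is computed here as the overlap divided by the gold-standard volume, which is an assumed reading of that metric, not necessarily the definition used in the paper).

        # Dice, Jaccard and (assumed) Inclusion index on binary masks with NumPy.
        import numpy as np

        auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:4] = True   # automatic segmentation
        gold = np.zeros((4, 4), dtype=bool); gold[1:3, 0:3] = True   # gold-standard segmentation

        inter = np.logical_and(auto, gold).sum()
        union = np.logical_or(auto, gold).sum()

        dsc = 2.0 * inter / (auto.sum() + gold.sum())   # Dice similarity coefficient
        ji = inter / union                              # Jaccard index
        ini = inter / gold.sum()                        # Inclusion index (assumed definition)
        print(f"DSC={dsc:.3f}  JI={ji:.3f}  INI={ini:.3f}")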

  8. Atlas – a data warehouse for integrative bioinformatics

    Directory of Open Access Journals (Sweden)

    Yuen Macaire MS

    2005-02-01

    Background We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. Description The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First
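
    As an illustration of the kind of relational access such a warehouse provides, the sketch below runs a join over a toy in-memory database; the table and column names are entirely hypothetical and do not reflect the actual Atlas schema or its C++/Java/Perl APIs.

        # Illustrative query against an Atlas-style warehouse (schema and table
        # names are hypothetical; the real Atlas APIs are C++/Java/Perl libraries).
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE gene (gene_id INTEGER PRIMARY KEY, symbol TEXT);
            CREATE TABLE interaction (gene_a INTEGER, gene_b INTEGER, source TEXT);
            INSERT INTO gene VALUES (1, 'TP53'), (2, 'MDM2');
            INSERT INTO interaction VALUES (1, 2, 'BIND');
        """)

        rows = conn.execute("""
            SELECT ga.symbol, gb.symbol, i.source
            FROM interaction i
            JOIN gene ga ON ga.gene_id = i.gene_a
            JOIN gene gb ON gb.gene_id = i.gene_b
            WHERE i.source = 'BIND'
        """).fetchall()
        print(rows)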

  9. IT Infrastructure Design and Implementation Considerations for the ATLAS TDAQ System

    CERN Document Server

    Dobson, M; The ATLAS collaboration; Caramarcu, C; Dumitru, I; Valsan, L; Darlea, G L; Bujor, F; Bogdanchikov, A G; Korol, A A; Zaytsev, A S; Ballestrero, S

    2013-01-01

    This paper gives a thorough overview of the ATLAS TDAQ SysAdmin group activities which deals with administration of the TDAQ computing environment supporting Front End detector hardware, Data Flow, Event Filter and other subsystems of the ATLAS detector operating on the LHC accelerator at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, a high performance centralized storage system, about 50 multi-screen user interface systems installed in the control rooms and various hardware and critical service monitoring machines. In the final configuration, the online computer farm will be capable of hosting tens of thousands applications running simultaneously. The ATLAS TDAQ computing environment is now serving more than 3000 users subdivided into approximately 300 categories in correspondence with their roles in the system. The access and role management system is custom built on top of an LDAP schema. The engineering infrastructure of the ATLAS ...

  10. Networks in ATLAS

    CERN Document Server

    Mc Kee, Shawn Patrick; The ATLAS collaboration

    2016-01-01

    Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure, many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network-related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp up their need for similar networking it becomes less clear that this situation will continue unchanged. What this means for ATLAS in particular needs to be understood. ATLAS has evolved its computing model since the LHC started based upon its experience with using globally distributed resources. The most significant theme of those changes has been increased reliance upon, and use of, its networks....

  12. The data collection component of the Hanford Meteorology Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    Glantz, C.S.; Islam, M.M.

    1988-09-01

    An intensive program of meteorological monitoring is in place at the US Department of Energy's Hanford Site. The Hanford Meteorology Monitoring Program involves the measurement, observation, and storage of various meteorological data; continuous monitoring of regional weather conditions by a staff of professional meteorologists; and around-the-clock forecasting of weather conditions for the Hanford Site. The objective of this report is to document the data collection component of the program. In this report, each meteorological monitoring site is discussed in detail. Each site's location and instrumentation are described and photographs are presented. The methods for processing and communicating data to the Hanford Meteorology Station are also discussed. Finally, the procedures followed to maintain and calibrate these instruments are presented. 2 refs., 83 figs., 15 tabs.

  13. Environmental monitoring program of CDTN

    International Nuclear Information System (INIS)

    Ferreira, E.G.

    1985-09-01

    The environmental monitoring program of CDTN aims to carry out a survey that makes it possible to verify that the radioactive wastes released by CDTN comply with the basic principles of radioprotection, to evaluate the environmental impact, to verify the adequacy of the procedures used for effluent control, and to evaluate the maximum radiation doses that members of the public may receive yearly. (C.M.) [pt

  14. Oak Ridge Y-12 Plant Biological Monitoring and Abatement Program Plan

    Energy Technology Data Exchange (ETDEWEB)

    Adams, S.M.; Brandt, C.C.; Christensen, S.W.; Greeley, M.S.JR.; Hill, W.R.; Peterson, M.J.; Ryon, M.G.; Smith, J.G.; Southworth, G.R.; Stewart, A.J.

    2000-09-01

    The revised Biological Monitoring and Abatement Program (BMAP) for East Fork Poplar Creek (EFPC) at the Oak Ridge Y-12 Plant, as described, will be conducted as required by the National Pollutant Discharge Elimination System permit issued for the Y-12 Plant on April 28, 1995 and became effective July 1, 1995. The basic approach to biological monitoring used in this program was developed by the staff in the Environmental Science Division (ESD) at the Oak Ridge National Laboratory (ORNL) at the request of the Y-12 Plant. The revision to the BMAP plan is based on results of biological monitoring conducted during the period of 1985 to present. Details of the specific procedures used in the current routine monitoring program are provided; experimental designs for future studies are described in less detail. The overall strategy used in developing this plan was, and continues to be, to use the results obtained from each task to define the scope of future monitoring efforts. Such efforts may require more intensive sampling than initially proposed in some areas (e.g., additional bioaccumulation monitoring if results indicate unexpectedly high PCBs or Hg) or a reduction in sampling intensity in others (e.g., reduction in the number of sampling sites when no impact is still observed). The program scope will be re-evaluated annually. By using the results of previous monitoring efforts to define the current program and to guide us in the development of future studies, an effective integrated monitoring program has been developed to assess the impacts of Y-12 Plant operations (past and present) on the biota of EFPC and to document the ecological effects of remedial actions.

  15. Integrated environmental monitoring program at the Hanford Site

    International Nuclear Information System (INIS)

    Jaquish, R.E.

    1990-08-01

    The US Department of Energy's Hanford Site, north of Richland, Washington, has a mission of defense production, waste management, environmental restoration, advanced reactor design, and research and development. Environmental programs at Hanford are conducted by Pacific Northwest Laboratory (PNL) and the Westinghouse Hanford Company (WHC). The WHC environmental programs include the compliance and surveillance activities associated with site operations and waste management. The PNL environmental programs address the site-wide and off-site areas. They include environmental surveillance and the associated support activities, such as dose calculations, and also the monitoring of environmental conditions to comply with federal and state environmental regulations on wildlife and cultural resources. These are called "independent environmental programs" in that they are conducted completely separately from site operations. The Environmental Surveillance and Oversight Program consists of the following projects: surface environmental surveillance; ground-water surveillance; wildlife resources monitoring; cultural resources; dose overview; radiation standards and calibrations; meteorological and climatological services; emergency preparedness

  16. A recommended program of tritium monitoring research and development

    International Nuclear Information System (INIS)

    Nickerson, S.B.; Gerdingh, R.F.; Penfold, K.

    1982-10-01

    This report presents recommendations for programs of research and development in tritium monitoring instrumentation. These recommendations, if implemented, will offer Canadian industry the opportunity to develop marketable instruments. The major recommendations are to assist in the development and promotion of two Chalk River Nuclear Laboratories' monitors and an Ontario Hydro monitor, and to support research and development of a surface monitor

  17. The Savannah River Site's Groundwater Monitoring Program. Fourth quarter 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-17

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted by the Environmental Protection Department's Environmental Monitoring Section (EPD/EMS) during the fourth quarter of 1992. It includes the analytical data, field data, data review, quality control, and other documentation for this program; provides a record of the program's activities; and serves as an official document of the analytical results.

  18. ATLAS Outreach Highlights

    CERN Document Server

    Cheatham, Susan; The ATLAS collaboration

    2016-01-01

    The ATLAS outreach team is very active, promoting particle physics to a broad range of audiences including physicists, general public, policy makers, students and teachers, and media. A selection of current outreach activities and new projects will be presented. Recent highlights include the new ATLAS public website and ATLAS Open Data, the very recent public release of 1 fb-1 of ATLAS data.

  19. Review of four major environmental effects monitoring programs in the oil sands region

    International Nuclear Information System (INIS)

    Lott, E.O.; Jones, R.K.

    2010-10-01

    The lack of knowledge about current environmental effects monitoring programs for the mineable oil sands region results in low public confidence in environmental health monitoring and reporting for oil sands operations. In 2010, the Oil Sands Research and Information Network (OSRIN) supervised a study reviewing the major environmental effects monitoring programs that are underway in the Regional Municipality of Wood Buffalo. Four main environmental effects monitoring and reporting organizations active in the oil sands area were engaged to describe their programs for this study: Alberta Biodiversity Monitoring Institute (ABMI), Cumulative Environmental Management Association (CEMA), Regional Aquatic Monitoring Program (RAMP), Wood Buffalo Environmental Association (WBEA). These organizations have specific roles in providing information, data and understanding of ecosystem effects. A one-page visual summary of environmental effects monitoring in the oil sands area was produced from the information received from these organizations, and detailed fact sheets were presented for each of the programs. The study report also presents seven other environmental monitoring initiatives or organizations, such as the Alberta Environment and Environment Canada environmental effects monitoring programs. The main observation that emerged from the review was the lack of detailed understanding shown by the stakeholders regarding the monitoring activities performed in the oil sands area. There is a lack of communication about the different programs conducted in the region. The study also pointed out that no effort was made to cross-link the various programs to ensure that every concern related to the environmental effects of oil sands operations was addressed. A better understanding of environmental effects and an improvement in public confidence in the data and its interpretation would probably be observed with the establishment of a

  20. Supervision of the ATLAS High Level Trigger System

    CERN Document Server

    Wheeler, S.; Meessen, C.; Qian, Z.; Touchard, F.; Negri, France A.; Zobernig, H.; CHEP 2003 Computing in High Energy Physics; Negri, France A.

    2003-01-01

    The ATLAS High Level Trigger (HLT) system provides software-based event selection after the initial LVL1 hardware trigger. It is composed of two stages, the LVL2 trigger and the Event Filter. The HLT is implemented as software tasks running on large processor farms. An essential part of the HLT is the supervision system, which is responsible for configuring, coordinating, controlling and monitoring the many hundreds of processes running in the HLT. A prototype implementation of the supervision system, using tools from the ATLAS Online Software system is presented. Results from scalability tests are also presented where the supervision system was shown to be capable of controlling over 1000 HLT processes running on 230 nodes.

  1. CERN Open Days 2013, Point 1 - ATLAS: ATLAS Experiment

    CERN Multimedia

    CERN Photolab

    2013-01-01

    Stand description: The ATLAS Experiment at CERN is one of the largest and most complex scientific endeavours ever assembled. The detector, located at collision point 1 of the LHC, is designed to explore the fundamental components of nature and to study the forces that shape our universe. The past year’s discovery of a Higgs boson is one of the most important scientific achievements of our time, yet this is only one of many key goals of ATLAS. During a brief break in their journey, some of the 3000-member ATLAS collaboration will be taking time to share the excitement of this exploration with you. On surface no restricted access  The exhibit at Point 1 will give visitors a chance to meet these modern-day explorers and to learn from them how answers to the most fundamental questions of mankind are being sought. Activities will include a visit to the ATLAS detector, located 80m below ground; watching the prize-winning ATLAS movie in the ATLAS cinema; seeing real particle tracks in a cloud chamber and discussi...

  2. Reliability engineering analysis of ATLAS data reprocessing campaigns

    International Nuclear Information System (INIS)

    Vaniachine, A; Golubkov, D; Karpenko, D

    2014-01-01

    During three years of LHC data taking, the ATLAS collaboration completed three petascale data reprocessing campaigns on the Grid, with up to 2 PB of data being reprocessed every year. In reprocessing on the Grid, failures can occur for a variety of reasons, while Grid heterogeneity makes failures hard to diagnose and repair quickly. As a result, Big Data processing on the Grid must tolerate a continuous stream of failures, errors and faults. While ATLAS fault-tolerance mechanisms improve the reliability of Big Data processing on the Grid, their benefits come at a cost and introduce delays, making performance prediction difficult. Reliability Engineering provides a framework for fundamental understanding of Big Data processing on the Grid, which is not a desirable enhancement but a necessary requirement. In ATLAS, cost monitoring and performance prediction became critical for the success of the reprocessing campaigns conducted in preparation for the major physics conferences. In addition, our Reliability Engineering approach supported continuous improvements in data reprocessing throughput during LHC data taking. The throughput doubled in 2011 vs. 2010 reprocessing, then quadrupled in 2012 vs. 2011 reprocessing. We present the Reliability Engineering analysis of ATLAS data reprocessing campaigns providing the foundation needed to scale up the Big Data processing technologies beyond the petascale.
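
    One simple way to see why fault tolerance shows up as delays: if each attempt of a job fails independently with probability f and failed jobs are retried until they succeed, the expected number of attempts is 1/(1-f), which inflates wall-clock and CPU cost accordingly. A toy calculation (with hypothetical failure rates, not ATLAS measurements) is shown below.

        # Toy reliability calculation: how retry-based fault tolerance inflates cost.
        def expected_attempts(failure_prob):
            # Geometric distribution: mean number of attempts until the first success.
            return 1.0 / (1.0 - failure_prob)

        for f in (0.05, 0.15, 0.30):            # hypothetical per-attempt failure rates
            attempts = expected_attempts(f)
            print(f"failure prob {f:.2f} -> {attempts:.2f} attempts on average "
                  f"({(attempts - 1) * 100:.0f}% extra CPU time)")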

  3. Process monitoring using a Quality and Technical Surveillance Program

    International Nuclear Information System (INIS)

    Rafferty, C.A.

    1995-01-01

    The purpose of process monitoring using a Quality and Technical Surveillance Program was to help ensure that manufactured clad vent sets fully met the technical and quality requirements established by the manufacturer and the customer, and that line and program management were immediately alerted if any aspect of the manufacturing activities drifted out of acceptable limits. The Quality and Technical Surveillance Program provided a planned, scheduled approach to monitoring key processes and documentation, illuminating potential problem areas early enough to permit timely corrective actions to reverse negative trends that, if left uncorrected, could have resulted in deficient hardware. Significant schedule and cost impacts were eliminated.

  4. Online radiation dose measurement system for ATLAS experiment

    International Nuclear Information System (INIS)

    Mandic, I.; Cindro, V.; Dolenc, I.; Gorisek, A.; Kramberger, G.; Mikuz, M.; Bronner, J.; Hartet, J.; Franz, S.

    2009-01-01

    In experiments at the Large Hadron Collider, detectors and electronics will be exposed to high fluxes of photons, charged particles and neutrons. Damage caused by the radiation will influence the performance of the detectors. It will therefore be important to continuously monitor the radiation dose in order to follow the level of degradation of detectors and electronics and to correctly predict future radiation damage. A system for online radiation monitoring using semiconductor radiation sensors at a large number of locations has been installed in the ATLAS experiment. Ionizing dose in SiO2 will be measured with RadFETs, and displacement damage in silicon, in units of 1-MeV(Si) equivalent neutron fluence, with p-i-n diodes. At the 14 monitoring locations where the highest radiation levels are expected, the fluence of thermal neutrons will be measured from the current-gain degradation of dedicated bipolar transistors. The design of the system and tests of its performance in a mixed radiation field are described in this paper. First results from this test campaign confirm that doses can be measured with sufficient sensitivity (mGy for total ionizing dose measurements, 10^9 n/cm^2 for NIEL (non-ionizing energy loss) measurements, and 10^12 n/cm^2 for thermal neutrons) and accuracy (about 20%) for use in the ATLAS detector.
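
    RadFET dosimeters are commonly read out by measuring the shift of the transistor threshold voltage and inverting a calibration curve, often parameterized as a power law; the sketch below illustrates such an inversion with made-up calibration constants and is not the ATLAS calibration procedure.

        # Generic RadFET read-out sketch: invert an assumed power-law calibration
        # delta_V = a * dose**b to estimate total ionizing dose (constants made up).
        a, b = 0.05, 0.9            # hypothetical calibration constants

        def dose_from_shift(delta_v_volts):
            # Dose in Gy under this assumed calibration.
            return (delta_v_volts / a) ** (1.0 / b)

        for dv in (0.001, 0.01, 0.1):   # measured threshold-voltage shifts [V]
            print(f"dV = {dv:.3f} V  ->  dose ~ {dose_from_shift(dv):.3f} Gy")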

  5. 24 CFR 266.115 - Program monitoring and evaluation.

    Science.gov (United States)

    2010-04-01

    ... AUTHORITIES HOUSING FINANCE AGENCY RISK-SHARING PROGRAM FOR INSURED AFFORDABLE MULTIFAMILY PROJECT LOANS Housing Finance Agency Requirements § 266.115 Program monitoring and evaluation. (a) HFA certifications... under this part, basic underwriting and closing information must be submitted in a format specified by...

  6. Ontario hydro's aqueous discharge monitoring program

    International Nuclear Information System (INIS)

    Mehdi, S.H.; Booth, M.R.; Massey, R.; Herrmann, O.

    1992-01-01

    The Province of Ontario has legislated a comprehensive monitoring program for waterborne trace contaminants called MISA - Municipal Industrial Strategy for Abatement. The electric power sector regulation applies to all generating stations (Thermal, Nuclear, Hydraulic). The program commenced in June, 1990. The current phase of the regulation requires the operators of the plants to measure the detailed composition of the direct discharges to water for a one year period. Samples are to be taken from about 350 identified streams at frequencies varying from continuous and daily to quarterly. The data from this program will be used to determine the scope of the ongoing monitoring program and control. This paper discusses the preparation and planning, commissioning, training and early operations phase of the MISA program. In response, the central Analytical Laboratory and Environmental staff worked to develop a sampling and analytical approach which uses the plant laboratories, the central analytical laboratory and a variety of external laboratories. The approach considered analytical frequency, sample stability, presence of radioactivity, suitability of staff, laboratory qualifications, need for long term internal capabilities, availability of equipment, difficulty of analysis, relationship to other work and problems, capital and operating costs. The complexity of the sampling program required the development of a computer based schedule to ensure that all required samples were taken as required with phase shifts between major sampling events at different plants to prevent swamping the capability of the central or external laboratories. New equipment has been purchased and installed at each plant to collect 24 hour composite samples. Analytical equipment has been purchased for each plant for analysis of perishable analytes or of samples requiring daily or thrice weekly analysis. Training programs and surveys have been implemented to assure production of valid data

  7. Technical basis and evaluation criteria for an air sampling/monitoring program

    International Nuclear Information System (INIS)

    Gregory, D.C.; Bryan, W.L.; Falter, K.G.

    1993-01-01

    Air sampling and monitoring programs at DOE facilities need to be reviewed in light of revised requirements and guidance found in, for example, DOE Order 5480.6 (RadCon Manual). Accordingly, the Oak Ridge National Laboratory (ORNL) air monitoring program is being revised and placed on a sound technical basis. A draft technical basis document has been written to establish placement criteria for instruments and to guide the "retrospective sampling or real-time monitoring" decision. Facility evaluations are being used to document air sampling/monitoring needs, and instruments are being evaluated in light of these needs. The steps used to develop this program and the technical basis for instrument placement are described.

  8. The ATLAS Trigger algorithms upgrade and performance in Run 2

    CERN Document Server

    Bernius, Catrin; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a center-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the center-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. At the Level-1 trigger the undertaken impr...

  9. ATLAS distributed computing operation shift teams experience during the discovery year and beginning of the long shutdown 1

    International Nuclear Information System (INIS)

    Sedov, Alexey; Girolamo, Alessandro Di; Negri, Guidone; Sakamoto, Hiroshi; Schovancová, Jaroslava; Smirnov, Iouri; Vartapetian, Armen; Yu, Jaehoon

    2014-01-01

    ATLAS Distributed Computing Operation Shifts evolve to meet new requirements. New monitoring tools as well as operational changes lead to modifications in organization of shifts. In this paper we describe the structure of shifts, the roles of different shifts in ATLAS computing grid operation, the influence of a Higgs-like particle discovery on shift operation, the achievements in monitoring and automation that allowed extra focus on the experiment priority tasks, and the influence of the Long Shutdown 1 and operational changes related to the no beam period.

  10. AFRRI TRIGA Reactor water quality monitoring program

    International Nuclear Information System (INIS)

    Moore, Mark; George, Robert; Spence, Harry; Nguyen, John

    1992-01-01

    AFRRI has started a water quality monitoring program to provide baseline data for early detection of tank leaks. This program revealed problems with the growth of algae and bacteria in the pool as a result of contamination with nitrogenous matter. Steps have been taken to reduce the nitrogen levels and to kill and remove algae and bacteria from the reactor pool. (author)

  11. Distributed analysis functional testing using GangaRobot in the ATLAS experiment

    Science.gov (United States)

    Legger, Federica; ATLAS Collaboration

    2011-12-01

    Automated distributed analysis tests are necessary to ensure smooth operations of the ATLAS grid resources. The HammerCloud framework allows for easy definition, submission and monitoring of grid test applications. Both functional and stress test applications can be defined in HammerCloud. Stress tests are large-scale tests meant to verify the behaviour of sites under heavy load. Functional tests are light user applications running at each site with high frequency, to ensure that the site functionalities are available at all times. Success or failure rates of these test jobs are individually monitored. Test definitions and results are stored in a database and made available to users and site administrators through a web interface. In this work we present the recent developments of the GangaRobot framework. GangaRobot monitors the outcome of functional tests, creates a blacklist of sites failing the tests, and exports the results to the ATLAS Site Status Board (SSB) and to the Service Availability Monitor (SAM), providing on the one hand a fast way to identify systematic or temporary site failures, and on the other hand allowing for an effective distribution of the workload on the available resources.
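
    A minimal sketch of the blacklisting step described above, assuming a simple failure-rate cut and invented site names; GangaRobot's actual selection criteria and its interfaces to the SSB and SAM are not reproduced here.

```python
from collections import defaultdict

# Hypothetical functional-test outcomes as (site, passed) pairs; in the real
# system these would be read from the GangaRobot/HammerCloud test results.
RESULTS = [
    ("SITE_A", True), ("SITE_A", True), ("SITE_A", False),
    ("SITE_B", False), ("SITE_B", False), ("SITE_B", False),
    ("SITE_C", True), ("SITE_C", True),
]

FAILURE_THRESHOLD = 0.5  # assumed cut, for illustration only


def build_blacklist(outcomes, threshold):
    """Return [(site, failure_rate)] for sites whose failure rate exceeds the threshold."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for site, ok in outcomes:
        total[site] += 1
        passed[site] += int(ok)
    blacklist = []
    for site in sorted(total):
        failure_rate = 1.0 - passed[site] / total[site]
        if failure_rate > threshold:
            blacklist.append((site, failure_rate))
    return blacklist


if __name__ == "__main__":
    for site, rate in build_blacklist(RESULTS, FAILURE_THRESHOLD):
        print(f"{site}: failure rate {rate:.0%} -> blacklisted")
```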

  12. Mechanical behavior of the ATLAS B0 model coil

    CERN Document Server

    Foussat, A; Acerbi, E; Alessandria, F; Berthier, R; Broggi, F; Daël, A; Dudarev, A; Mayri, C; Miele, P; Reytier, M; Rossi, L; Sorbi, M; Sun, Z; ten Kate, H H J; Vanenkov, I; Volpini, G

    2002-01-01

    The ATLAS B0 model coil has been developed and constructed to verify the design parameters and the manufacture techniques of the Barrel Toroid coils (BT) that are under construction for the ATLAS Detector. Essential for successful operation is the mechanical behavior of the superconducting coil and its support structure. In the ATLAS magnet test facility, a magnetic mirror is used to reproduce in the model coil the electromagnetic forces of the BT coils when assembled in the final Barrel Toroid magnet system. The model coil is extensively equipped with mechanical instrumentation to monitor stresses and force levels as well as contraction during cool-down and excitation up to nominal current. The installed set-up of strain gauges, position sensors and capacitive force transducers is presented. Moreover, the first mechanical results in terms of expected main stress, strain and deformation values are presented, based on a detailed mechanical analysis of the design.

  13. EnviroAtlas - Rare Ecosystems in the Conterminous United States

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset identifies rare ecosystems using base landcover data from the USGS GAP Analysis Program (Version 2, 2011) combined with landscape ecology...

  14. Exotics searches in ATLAS

    CERN Document Server

    Wang, Renjie; The ATLAS collaboration

    2017-01-01

    Many theories beyond the Standard Model predict new physics accessible by the LHC. The ATLAS experiment has a rigorous, ongoing search program aimed at finding indications of new physics, using state-of-the-art analysis techniques. This talk reports on new results obtained using the pp collision data sample collected in 2015 and 2016 at the LHC with a centre-of-mass energy of 13 TeV.

  15. Measurement and monitoring technologies are important SITE program component

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    An ongoing component of the Superfund Innovative Technology Evaluation (SITE) Program, managed by the US EPA at its Hazardous Waste Engineering Research Laboratory in Cincinnati, is the development and demonstration of new and innovative measurement and monitoring technologies that will be applicable to Superfund site characterization. There are four important roles for monitoring and measurement technologies at Superfund sites: (1) to assess the extent of contamination at a site, (2) to supply data and information to determine impacts to human health and the environment, (3) to supply data to select the appropriate remedial action, and (4) to monitor the success or effectiveness of the selected remedy. The Environmental Monitoring Systems Laboratory in Las Vegas, Nevada (EMSL-LV) has been supporting the development of improved measurement and monitoring techniques in conjunction with the SITE Program, with a focus on two areas: immunoassay for toxic substances and fiber-optic sensing for in-situ analysis at Superfund sites.

  16. ATLAS Thesis Award 2017

    CERN Multimedia

    Anthony, Katarina

    2018-01-01

    Winners of the ATLAS Thesis Award were presented with certificates and glass cubes during a ceremony on 22 February 2018. They are pictured here with Karl Jakobs (ATLAS Spokesperson), Max Klein (ATLAS Collaboration Board Chair) and Katsuo Tokushuku (ATLAS Collaboration Board Deputy Chair).

  17. The Stockpile Monitor Program

    International Nuclear Information System (INIS)

    Buntain, G.A.; Fletcher, M.; Rabie, R.

    1994-07-01

    Recent political changes have led to drastic reductions in the number of nuclear warheads in stockpile, as well as increased expectations for warhead service lives. In order to support and maintain a shrinking and aging nuclear stockpile, weapon scientists and engineers need detailed information describing the environments experienced by weapons in the field. Hence, the Stockpile Monitor Program was initiated in 1991 to develop a comprehensive and accurate database of temperature and humidity conditions experienced by nuclear warheads both in storage and on alert.

  18. ATLAS

    CERN Multimedia

    Akhnazarov, V; Canepa, A; Bremer, J; Burckhart, H; Cattai, A; Voss, R; Hervas, L; Kaplon, J; Nessi, M; Werner, P; Ten kate, H; Tyrvainen, H; Vandelli, W; Krasznahorkay, A; Gray, H; Alvarez gonzalez, B; Eifert, T F; Rolando, G; Oide, H; Barak, L; Glatzer, J; Backhaus, M; Schaefer, D M; Maciejewski, J P; Milic, A; Jin, S; Von torne, E; Limbach, C; Medinnis, M J; Gregor, I; Levonian, S; Schmitt, S; Waananen, A; Monnier, E; Muanza, S G; Pralavorio, P; Talby, M; Tiouchichine, E; Tocut, V M; Rybkin, G; Wang, S; Lacour, D; Laforge, B; Ocariz, J H; Bertoli, W; Malaescu, B; Sbarra, C; Yamamoto, A; Sasaki, O; Koriki, T; Hara, K; Da silva gomes, A; Carvalho maneira, J; Marcalo da palma, A; Chekulaev, S; Tikhomirov, V; Snesarev, A; Buzykaev, A; Maslennikov, A; Peleganchuk, S; Sukharev, A; Kaplan, B E; Swiatlowski, M J; Nef, P D; Schnoor, U; Oakham, G F; Ueno, R; Orr, R S; Abouzeid, O; Haug, S; Peng, H; Kus, V; Vitek, M; Temming, K K; Dang, N P; Meier, K; Schultz-coulon, H; Geisler, M P; Sander, H; Schaefer, U; Ellinghaus, F; Rieke, S; Nussbaumer, A; Liu, Y; Richter, R; Kortner, S; Fernandez-bosman, M; Ullan comes, M; Espinal curull, J; Chiriotti alvarez, S; Caubet serrabou, M; Valladolid gallego, E; Kaci, M; Carrasco vela, N; Lancon, E C; Besson, N E; Gautard, V; Bracinik, J; Bartsch, V C; Potter, C J; Lester, C G; Moeller, V A; Rosten, J; Crooks, D; Mathieson, K; Houston, S C; Wright, M; Jones, T W; Harris, O B; Byatt, T J; Dobson, E; Hodgson, P; Hodgkinson, M C; Dris, M; Karakostas, K; Ntekas, K; Oren, D; Duchovni, E; Etzion, E; Oren, Y; Ferrer, L M; Testa, M; Doria, A; Merola, L; Sekhniaidze, G; Giordano, R; Ricciardi, S; Milazzo, A; Falciano, S; De pedis, D; Dionisi, C; Veneziano, S; Cardarelli, R; Verzegnassi, C; Soualah, R; Ochi, A; Ohshima, T; Kishiki, S; Linde, F L; Vreeswijk, M; Werneke, P; Muijs, A; Vankov, P H; Jansweijer, P P M; Dale, O; Lund, E; Bruckman de renstrom, P; Dabrowski, W; Adamek, J D; Wolters, H; Micu, L; Pantea, D; Tudorache, V; Mjoernmark, J; Klimek, P J; Ferrari, A; Abdinov, O; Akhoundov, A; Hashimov, R; Shelkov, G; Khubua, J; Ladygin, E; Lazarev, A; Glagolev, V; Dedovich, D; Lykasov, G; Zhemchugov, A; Zolnikov, Y; Ryabenko, M; Sivoklokov, S; Vasilyev, I; Shalimov, A; Lobanov, M; Paramoshkina, E; Mosidze, M; Bingul, A; Nodulman, L J; Guarino, V J; Yoshida, R; Drake, G R; Calafiura, P; Haber, C; Quarrie, D R; Alonso, J R; Anderson, C; Evans, H; Lammers, S W; Baubock, M; Anderson, K; Petti, R; Suhr, C A; Linnemann, J T; Richards, R A; Tollefson, K A; Holzbauer, J L; Stoker, D P; Pier, S; Nelson, A J; Isakov, V; Martin, A J; Adelman, J A; Paganini, M; Gutierrez, P; Snow, J M; Pearson, B L; Cleland, W E; Savinov, V; Wong, W; Goodson, J J; Li, H; Lacey, R A; Gordeev, A; Gordon, H; Lanni, F; Nevski, P; Rescia, S; Kierstead, J A; Liu, Z; Yu, W W H; Bensinger, J; Hashemi, K S; Bogavac, D; Cindro, V; Hoeferkamp, M R; Coelli, S; Iodice, M; Piegaia, R N; Alonso, F; Wahlberg, H P; Barberio, E L; Limosani, A; Rodd, N L; Jennens, D T; Hill, E C; Pospisil, S; Smolek, K; Schaile, D A; Rauscher, F G; Adomeit, S; Mattig, P M; Wahlen, H; Volkmer, F; Calvente lopez, S; Sanchis peris, E J; Pallin, D; Podlyski, F; Says, L; Boumediene, D E; Scott, W; Phillips, P W; Greenall, A; Turner, P; Gwilliam, C B; Kluge, T; Wrona, B; Sellers, G J; Millward, G; Adragna, P; Hartin, A; Alpigiani, C; Piccaro, E; Bret cano, M; Hughes jones, R E; Mercer, D; Oh, A; Chavda, V S; Carminati, L; Cavasinni, V; Fedin, O; Patrichev, S; Ryabov, Y; Nesterov, S; Grebenyuk, O; Sasso, J; Mahmood, H; Polsdofer, E; Dai, T; 
Ferretti, C; Liu, H; Hegazy, K H; Benjamin, D P; Zobernig, G; Ban, J; Brooijmans, G H; Keener, P; Williams, H H; Le geyt, B C; Hines, E J; Fadeyev, V; Schumm, B A; Law, A T; Kuhl, A D; Neubauer, M S; Shang, R; Gagliardi, G; Calabro, D; Conta, C; Zinna, M; Jones, G; Li, J; Stradling, A R; Hadavand, H K; Mcguigan, P; Chiu, P; Baldelomar, E; Stroynowski, R A; Kehoe, R L; De groot, N; Timmermans, C; Lach-heb, F; Addy, T N; Nakano, I; Moreno lopez, D; Grosse-knetter, J; Tyson, B; Rude, G D; Tafirout, R; Benoit, P; Danielsson, H O; Elsing, M; Fassnacht, P; Froidevaux, D; Ganis, G; Gorini, B; Lasseur, C; Lehmann miotto, G; Kollar, D; Aleksa, M; Sfyrla, A; Duehrssen-debling, K; Fressard-batraneanu, S; Van der ster, D C; Bortolin, C; Schumacher, J; Mentink, M; Geich-gimbel, C; Yau wong, K H; Lafaye, R; Crepe-renaudin, S; Albrand, S; Hoffmann, D; Pangaud, P; Meessen, C; Hrivnac, J; Vernay, E; Perus, A; Henrot versille, S L; Le dortz, O; Derue, F; Piccinini, M; Polini, A; Terada, S; Arai, Y; Ikeno, M; Fujii, H; Nagano, K; Ukegawa, F; Aguilar saavedra, J A; Conde muino, P; Castro, N F; Eremin, V; Kopytine, M; Sulin, V; Tsukerman, I; Korol, A; Nemethy, P; Bartoldus, R; Glatte, A; Chelsky, S; Van nieuwkoop, J; Bellerive, A; Sinervo, J K; Battaglia, A; Barbier, G J; Pohl, M; Rosselet, L; Alexandre, G B; Prokoshin, F; Pezoa rivera, R A; Batkova, L; Kladiva, E; Stastny, J; Kubes, T; Vidlakova, Z; Esch, H; Homann, M; Herten, L G; Zimmermann, S U; Pfeifer, B; Stenzel, H; Andrei, G V; Wessels, M; Buescher, V; Kleinknecht, K; Fiedler, F M; Schroeder, C D; Fernandez, E; Mir martinez, L; Vorwerk, V; Bernabeu verdu, J; Salt, J; Civera navarrete, J V; Bernard, R; Berriaud, C P; Chevalier, L P; Hubbard, R; Schune, P; Nikolopoulos, K; Batley, J R; Brochu, F M; Phillips, A W; Teixeira-dias, P J; Rose, M B D; Buttar, C; Buckley, A G; Nurse, E L; Larner, A B; Boddy, C; Henderson, J; Costanzo, D; Tarem, S; Maccarrone, G; Laurelli, P F; Alviggi, M; Chiaramonte, R; Izzo, V; Palumbo, V; Fraternali, M; Crosetti, G; Marchese, F; Yamaguchi, Y; Hessey, N P; Mechnich, J M; Liebig, W; Kastanas, K A; Sjursen, T B; Zalieckas, J; Cameron, D G; Banka, P; Kowalewska, A B; Dwuznik, M; Mindur, B; Boldea, V; Hedberg, V; Smirnova, O; Sellden, B; Allahverdiyev, T; Gornushkin, Y; Koultchitski, I; Tokmenin, V; Chizhov, M; Gongadze, A; Khramov, E; Sadykov, R; Krasnoslobodtsev, I; Smirnova, L; Kramarenko, V; Minaenko, A; Zenin, O; Beddall, A J; Ozcan, E V; Hou, S; Wang, S; Moyse, E; Willocq, S; Chekanov, S; Le compte, T J; Love, J R; Ciocio, A; Hinchliffe, I; Tsulaia, V; Gomez, A; Luehring, F; Zieminska, D; Huth, J E; Gonski, J L; Oreglia, M; Tang, F; Shochet, M J; Costin, T; Mcleod, A; Uzunyan, S; Martin, S P; Pope, B G; Schwienhorst, R H; Brau, J E; Ptacek, E S; Milburn, R H; Sabancilar, E; Lauer, R; Saleem, M; Mohamed meera lebbai, M R; Lou, X; Reeves, K B; Rijssenbeek, M; Novakova, P N; Rahm, D; Steinberg, P A; Wenaus, T J; Paige, F; Ye, S; Kotcher, J R; Assamagan, K A; Oliveira damazio, D; Maeno, T; Henry, A; Dushkin, A; Costa, G; Meroni, C; Resconi, S; Lari, T; Biglietti, M; Lohse, T; Gonzalez silva, M L; Monticelli, F G; Saavedra, A F; Patel, N D; Ciodaro xavier, T; Asevedo nepomuceno, A; Lefebvre, M; Albert, J E; Kubik, P; Faltova, J; Turecek, D; Solc, J; Schaile, O; Ebke, J; Losel, P J; Zeitnitz, C; Sturm, P D; Barreiro alonso, F; Modesto alapont, P; Soret medel, J; Garzon alama, E J; Gee, C N; Mccubbin, N A; Sankey, D; Emeliyanov, D; Dewhurst, A L; Houlden, M A; Klein, M; Burdin, S; Lehan, A K; Eisenhandler, E; Lloyd, S; Traynor, D 
P; Ibbotson, M; Marshall, R; Pater, J; Freestone, J; Masik, J; Haughton, I; Manousakis katsikakis, A; Sampsonidis, D; Krepouri, A; Roda, C; Sarri, F; Fukunaga, C; Nadtochiy, A; Kara, S O; Timm, S; Alam, S M; Rashid, T; Goldfarb, S; Espahbodi, S; Marley, D E; Rau, A W; Dos anjos, A R; Haque, S; Grau, N C; Havener, L B; Thomson, E J; Newcomer, F M; Hansl-kozanecki, G; Deberg, H A; Takeshita, T; Goggi, V; Ennis, J S; Olness, F I; Kama, S; Ordonez sanz, G; Koetsveld, F; Elamri, M; Mansoor-ul-islam, S; Lemmer, B; Kawamura, G; Bindi, M; Schulte, S; Kugel, A; Kretz, M P; Kurchaninov, L; Blanchot, G; Chromek-burckhart, D; Di girolamo, B; Francis, D; Gianotti, F; Nordberg, M Y; Pernegger, H; Roe, S; Boyd, J; Wilkens, H G; Pauly, T; Fabre, C; Tricoli, A; Bertet, D; Ruiz martinez, M A; Arnaez, O L; Lenzi, B; Boveia, A J; Gillberg, D I; Davies, J M; Zimmermann, R; Uhlenbrock, M; Kraus, J K; Narayan, R T; John, A; Dam, M; Padilla aranda, C; Bellachia, F; Le flour chollet, F M; Jezequel, S; Dumont dayot, N; Fede, E; Mathieu, M; Gensolen, F D; Alio, L; Arnault, C; Bouchel, M; Ducorps, A; Kado, M M; Lounis, A; Zhang, Z P; De vivie de regie, J; Beau, T; Bruni, A; Bruni, G; Grafstrom, P; Romano, M; Lasagni manghi, F; Massa, L; Shaw, K; Ikegami, Y; Tsuno, S; Kawanishi, Y; Benincasa, G; Blagov, M; Fedorchuk, R; Shatalov, P; Romaniouk, A; Belotskiy, K; Timoshenko, S; Hooft van huysduynen, L; Lewis, G H; Wittgen, M M; Mader, W F; Rudolph, C J; Gumpert, C; Mamuzic, J; Rudolph, G; Schmid, P; Corriveau, F; Belanger-champagne, C; Yarkoni, S; Leroy, C; Koffas, T; Harack, B D; Weber, M S; Beck, H; Leger, A; Gonzalez sevilla, S; Zhu, Y; Gao, J; Zhang, X; Blazek, T; Rames, J; Sicho, P; Kouba, T; Sluka, T; Lysak, R; Ristic, B; Kompatscher, A E; Von radziewski, H; Groll, M; Meyer, C P; Oberlack, H; Stonjek, S M; Cortiana, G; Werthenbach, U; Ibragimov, I; Czirr, H S; Cavalli-sforza, M; Puigdengoles olive, C; Tallada crespi, P; Marti i garcia, S; Gonzalez de la hoz, S; Guyot, C; Meyer, J; Schoeffel, L O; Garvey, J; Hawkes, C; Hillier, S J; Staley, R J; Salvatore, P F; Santoyo castillo, I; Carter, J; Yusuff, I B; Barlow, N R; Berry, T S; Savage, G; Wraight, K G; Steele, G E; Hughes, G; Walder, J W; Love, P A; Crone, G J; Waugh, B M; Boeser, S; Sarkar, A M; Holmes, A; Massey, R; Pinder, A; Nicholson, R; Korolkova, E; Katsoufis, I; Maltezos, S; Tsipolitis, G; Leontsinis, S; Levinson, L J; Shoa, M; Abramowicz, H E; Bella, G; Gershon, A; Urkovsky, E; Taiblum, N; Gatti, C; Della pietra, M; Lanza, A; Negri, A; Flaminio, V; Lacava, F; Petrolo, E; Pontecorvo, L; Rosati, S; Zanello, L; Pasqualucci, E; Di ciaccio, A; Giordani, M; Yamazaki, Y; Jinno, T; Nomachi, M; De jong, P J; Ferrari, P; Homma, J; Van der graaf, H; Igonkina, O B; Stugu, B S; Buanes, T; Pedersen, M; Turala, M; Olszewski, A J; Koperny, S Z; Onofre, A; Castro nunes fiolhais, M; Alexa, C; Cuciuc, C M; Akesson, T P A; Hellman, S L; Milstead, D A; Bondyakov, A; Pushnova, V; Budagov, Y; Minashvili, I; Romanov, V; Sniatkov, V; Tskhadadze, E; Kalinovskaya, L; Shalyugin, A; Tavkhelidze, A; Rumyantsev, L; Karpov, S; Soloshenko, A; Vostrikov, A; Borissov, E; Solodkov, A; Vorob'ev, A; Sidorov, S; Malyaev, V; Lee, S; Grudzinski, J J; Virzi, J S; Vahsen, S E; Lys, J; Penwell, J W; Yan, Z; Bernard, C S; Barreiro guimaraes da costa, J P; Oliver, J N; Merritt, F S; Brubaker, E M; Kapliy, A; Kim, J; Zutshi, V V; Burghgrave, B O; Abolins, M A; Arabidze, G; Caughron, S A; Frey, R E; Radloff, P T; Schernau, M; Murillo garcia, R; Porter, R A; Mccormick, C A; Karn, P J; Sliwa, K J; Demers 
konezny, S M; Strauss, M G; Mueller, J A; Izen, J M; Klimentov, A; Lynn, D; Polychronakos, V; Radeka, V; Sondericker, J I I I; Bathe, S; Duffin, S; Chen, H; De castro faria salgado, P E; Kersevan, B P; Lacker, H M; Schulz, H; Kubota, T; Tan, K G; Yabsley, B D; Nunes de moura junior, N; Pinfold, J; Soluk, R A; Ouellette, E A; Leitner, R; Sykora, T; Solar, M; Sartisohn, G; Hirschbuehl, D; Huning, D; Fischer, J; Terron cuadrado, J; Glasman kuguel, C B; Lacasta llacer, C; Lopez-amengual, J; Calvet, D; Chevaleyre, J; Daudon, F; Montarou, G; Guicheney, C; Calvet, S P J; Tyndel, M; Dervan, P J; Maxfield, S J; Hayward, H S; Beck, G; Cox, B; Da via, C; Paschalias, P; Manolopoulou, M; Ragusa, F; Cimino, D; Ezzi, M; Fiuza de barros, N F; Yildiz, H; Ciftci, A K; Turkoz, S; Zain, S B; Tegenfeldt, F; Chapman, J W; Panikashvili, N; Bocci, A; Altheimer, A D; Martin, F F; Fratina, S; Jackson, B D; Grillo, A A; Seiden, A; Watts, G T; Mangiameli, S; Johns, K A; O'grady, F T; Errede, D R; Darbo, G; Ferretto parodi, A; Leahu, M C; Farbin, A; Ye, J; Liu, T; Wijnen, T A; Naito, D; Takashima, R; Sandoval usme, C E; Zinonos, Z; Moreno llacer, M; Agricola, J B; Mcgovern, S A; Sakurai, Y; Trigger, I M; Qing, D; De silva, A S; Butin, F; Dell'acqua, A; Hawkings, R J; Lamanna, M; Mapelli, L; Passardi, G; Rembser, C; Tremblet, L; Andreazza, W; Dobos, D A; Koblitz, B; Bianco, M; Dimitrov, G V; Schlenker, S; Armbruster, A J; Rammensee, M C; Romao rodrigues, L F; Peters, K; Pozo astigarraga, M E; Yi, Y; Desch, K K; Huegging, F G; Muller, K K; Stillings, J A; Schaetzel, S; Xella, S; Hansen, J D; Colas, J; Daguin, G; Wingerter, I; Ionescu, G D; Ledroit, F; Lucotte, A; Clement, B E; Stark, J; Clemens, J; Djama, F; Knoops, E; Coadou, Y; Vigeolas-choury, E; Feligioni, L; Iconomidou-fayard, L; Imbert, P; Schaffer, A C; Nikolic, I; Trincaz-duvoid, S; Warin, P; Camard, A F; Ridel, M; Pires, S; Giacobbe, B; Spighi, R; Villa, M; Negrini, M; Sato, K; Gavrilenko, I; Akimov, A; Khovanskiy, V; Talyshev, A; Voronkov, A; Hakobyan, H; Mallik, U; Shibata, A; Konoplich, R; Barklow, T L; Koi, T; Straessner, A; Stelzer, B; Robertson, S H; Vachon, B; Stoebe, M; Keyes, R A; Wang, K; Billoud, T R V; Strickland, V; Batygov, M; Krieger, P; Palacino caviedes, G D; Gay, C W; Jiang, Y; Han, L; Liu, M; Zenis, T; Lokajicek, M; Staroba, P; Tasevsky, M; Popule, J; Svatos, M; Seifert, F; Landgraf, U; Lai, S T; Schmitt, K H; Achenbach, R; Schuh, N; Kiesling, C; Macchiolo, A; Nisius, R; Schacht, P; Von der schmitt, J G; Kortner, O; Atlay, N B; Segura sole, E; Grinstein, S; Neissner, C; Bruckner, D M; Oliver garcia, E; Boonekamp, M; Perrin, P; Gaillot, F M; Wilson, J A; Thomas, J P; Thompson, P D; Palmer, J D; Falk, I E; Chavez barajas, C A; Sutton, M R; Robinson, D; Kaneti, S A; Wu, T; Robson, A; Shaw, C; Buzatu, A; Qin, G; Jones, R; Bouhova-thacker, E V; Viehhauser, G; Weidberg, A R; Gilbert, L; Johansson, P D C; Orphanides, M; Vlachos, S; Behar harpaz, S; Papish, O; Lellouch, D J H; Turgeman, D; Benary, O; La rotonda, L; Vena, R; Tarasio, A; Marzano, F; Gabrielli, A; Di stante, L; Liberti, B; Aielli, G; Oda, S; Nozaki, M; Takeda, H; Hayakawa, T; Miyazaki, K; Maeda, J; Sugimoto, T; Pettersson, N E; Bentvelsen, S; Groenstege, H L; Lipniacka, A; Vahabi, M; Ould-saada, F; Chwastowski, J J; Hajduk, Z; Kaczmarska, A; Olszowska, J B; Trzupek, A; Staszewski, R P; Palka, M; Constantinescu, S; Jarlskog, G; Lundberg, B L A; Pearce, M; Ellert, M F; Bannikov, A; Fechtchenko, A; Iambourenko, V; Kukhtin, V; Pozdniakov, V; Topilin, N; Vorozhtsov, S; Khassanov, A; 
Fliaguine, V; Kharchenko, D; Nikolaev, K; Kotenov, K; Kozhin, A; Zenin, A; Ivashin, A; Golubkov, D; Beddall, A; Su, D; Dallapiccola, C J; Cranshaw, J M; Price, L; Stanek, R W; Gieraltowski, G; Zhang, J; Gilchriese, M; Shapiro, M; Ahlen, S; Morii, M; Taylor, F E; Miller, R J; Phillips, F H; Torrence, E C; Wheeler, S J; Benedict, B H; Napier, A; Hamilton, S F; Petrescu, T A; Boyd, G R J; Jayasinghe, A L; Smith, J M; Mc carthy, R L; Adams, D L; Le vine, M J; Zhao, X; Patwa, A M; Baker, M; Kirsch, L; Krstic, J; Simic, L; Filipcic, A; Seidel, S C; Cantore-cavalli, D; Baroncelli, A; Kind, O M; Scarcella, M J; Maidantchik, C L L; Seixas, J; Balabram filho, L E; Vorobel, V; Spousta, M; Strachota, P; Vokac, P; Slavicek, T; Bergmann, B L; Biebel, O; Kersten, S; Srinivasan, M; Trefzger, T; Vazeille, F; Insa, C; Kirk, J; Middleton, R; Burke, S; Klein, U; Morris, J D; Ellis, K V; Millward, L R; Giokaris, N; Ioannou, P; Angelidakis, S; Bouzakis, K; Andreazza, A; Perini, L; Chtcheguelski, V; Spiridenkov, E; Yilmaz, M; Kaya, U; Ernst, J; Mahmood, A; Saland, J; Kutnink, T; Holler, J; Kagan, H P; Wang, C; Pan, Y; Xu, N; Ji, H; Willis, W J; Tuts, P M; Litke, A; Wilder, M; Rothberg, J; Twomey, M S; Rizatdinova, F; Loch, P; Rutherfoord, J P; Varnes, E W; Barberis, D; Osculati-becchi, B; Brandt, A G; Turvey, A J; Benchekroun, D; Nagasaka, Y; Thanakornworakij, T; Quadt, A; Nadal serrano, J; Magradze, E; Nackenhorst, O; Musheghyan, H; Kareem, M; Chytka, L; Perez codina, E; Stelzer-chilton, O; Brunel, B; Henriques correia, A M; Dittus, F; Hatch, M; Haug, F; Hauschild, M; Huhtinen, M; Lichard, P; Schuh-erhard, S; Spigo, G; Avolio, G; Tsarouchas, C; Ahmad, I; Backes, M P; Barisits, M; Gadatsch, S; Cerv, M; Sicoe, A D; Nattamai sekar, L P; Fazio, D; Shan, L; Sun, X; Gaycken, G F; Hemperek, T; Petersen, T C; Alonso diaz, A; Moynot, M; Werlen, M; Hryn'ova, T; Gallin-martel, M; Wu, M; Touchard, F; Menouni, M; Fougeron, D; Le guirriec, E; Chollet, J C; Veillet, J; Barrillon, P; Prat, S; Krasny, M W; Roos, L; Boudarham, G; Lefebvre, G; Boscherini, D; Valentinetti, S; Acharya, B S; Miglioranzi, S; Kanzaki, J; Unno, Y; Yasu, Y; Iwasaki, H; Tokushuku, K; Maio, A; Rodrigues fernandes, B J; Pinto figueiredo raimundo ribeiro, N M; Bot, A; Shmeleva, A; Zaidan, R; Djilkibaev, R; Mincer, A I; Salnikov, A; Aracena, I A; Schwartzman, A G; Silverstein, D J; Fulsom, B G; Anulli, F; Kuhn, D; White, M J; Vetterli, M J; Stockton, M C; Mantifel, R L; Azuelos, G; Shoaleh saadi, D; Savard, P; Clark, A; Ferrere, D; Gaumer, O P; Diaz gutierrez, M A; Liu, Y; Dubnickova, A; Sykora, I; Strizenec, P; Weichert, J; Zitek, K; Naumann, T; Goessling, C; Klingenberg, R; Jakobs, K; Rurikova, Z; Werner, M W; Arnold, H R; Buscher, D; Hanke, P; Stamen, R; Dietzsch, T A; Kiryunin, A; Salihagic, D; Buchholz, P; Pacheco pages, A; Sushkov, S; Porto fernandez, M D C; Cruz josa, R; Vos, M A; Schwindling, J; Ponsot, P; Charignon, C; Kivernyk, O; Goodrick, M J; Hill, J C; Green, B J; Quarman, C V; Bates, R L; Allwood-spiers, S E; Quilty, D; Chilingarov, A; Long, R E; Barton, A E; Konstantinidis, N; Simmons, B; Davison, A R; Christodoulou, V; Wastie, R L; Gallas, E J; Cox, J; Dehchar, M; Behr, J K; Pickering, M A; Filippas, A; Panagoulias, I; Tenenbaum katan, Y D; Roth, I; Pitt, M; Citron, Z H; Benhammou, Y; Amram, N Y N; Soffer, A; Gorodeisky, R; Antonelli, M; Chiarella, V; Curatolo, M; Esposito, B; Nicoletti, G; Martini, A; Sansoni, A; Carlino, G; Del prete, T; Bini, C; Vari, R; Kuna, M; Pinamonti, M; Itoh, Y; Colijn, A P; Klous, S; Garitaonandia elejabarrieta, 
H; Rosendahl, P L; Taga, A V; Malecki, P; Malecki, P; Wolter, M W; Kowalski, T; Korcyl, G M; Caprini, M; Caprini, I; Dita, P; Olariu, A; Tudorache, A; Lytken, E; Hidvegi, A; Aliyev, M; Alexeev, G; Bardin, D; Kakurin, S; Lebedev, A; Golubykh, S; Chepurnov, V; Gostkin, M; Kolesnikov, V; Karpova, Z; Davkov, K I; Yeletskikh, I; Grishkevich, Y; Rud, V; Myagkov, A; Nikolaenko, V; Starchenko, E; Zaytsev, A; Fakhrutdinov, R; Cheine, I; Istin, S; Sahin, S; Teng, P; Chu, M L; Trilling, G H; Heinemann, B; Richoz, N; Degeorge, C; Youssef, S; Pilcher, J; Cheng, Y; Purohit, M V; Kravchenko, A; Calkins, R E; Blazey, G; Hauser, R; Koll, J D; Reinsch, A; Brost, E C; Allen, B W; Lankford, A J; Ciobotaru, M D; Slagle, K J; Haffa, B; Mann, A; Loginov, A; Cummings, J T; Loyal, J D; Skubic, P L; Boudreau, J F; Lee, B E; Redlinger, G; Wlodek, T; Carcassi, G; Sexton, K A; Yu, D; Deng, W; Metcalfe, J E; Panitkin, S; Sijacki, D; Mikuz, M; Kramberger, G; Tartarelli, G F; Farilla, A; Stanescu, C; Herrberg, R; Alconada verzini, M J; Brennan, A J; Varvell, K; Marroquim, F; Gomes, A A; Do amaral coutinho, Y; Gingrich, D; Moore, R W; Dolejsi, J; Valkar, S; Broz, J; Jindra, T; Kohout, Z; Kral, V; Mann, A W; Calfayan, P P; Langer, T; Hamacher, K; Sanny, B; Wagner, W; Flick, T; Redelbach, A R; Ke, Y; Higon-rodriguez, E; Donini, J N; Lafarguette, P; Adye, T J; Baines, J; Barnett, B; Wickens, F J; Martin, V J; Jackson, J N; Prichard, P; Kretzschmar, J; Martin, A J; Walker, C J; Potter, K M; Kourkoumelis, C; Tzamarias, S; Houiris, A G; Iliadis, D; Fanti, M; Bertolucci, F; Maleev, V; Sultanov, S; Rosenberg, E I; Krumnack, N E; Bieganek, C; Diehl, E B; Mc kee, S P; Eppig, A P; Harper, D R; Liu, C; Schwarz, T A; Mazor, B; Looper, K A; Wiedenmann, W; Huang, P; Stahlman, J M; Battaglia, M; Nielsen, J A; Zhao, T; Khanov, A; Kaushik, V S; Vichou, E; Liss, A M; Gemme, C; Morettini, P; Parodi, F; Passaggio, S; Rossi, L; Kuzhir, P; Ignatenko, A; Ferrari, R; Spairani, M; Pianori, E; Sekula, S J; Firan, A I; Cao, T; Hetherly, J W; Gouighri, M; Vassilakopoulos, V; Long, M C; Shimojima, M; Sawyer, L H; Brummett, R E; Losada, M A; Schorlemmer, A L; Mantoani, M; Bawa, H S; Mornacchi, G; Nicquevert, B; Palestini, S; Stapnes, S; Veness, R; Kotamaki, M J; Sorde, C; Iengo, P; Campana, S; Goossens, L; Zajacova, Z; Pribyl, L; Poveda torres, J; Marzin, A; Conti, G; Carrillo montoya, G D; Kroseberg, J; Gonella, L; Velz, T; Schmitt, S; Lobodzinska, E M; Lovschall-jensen, A E; Galster, G; Perrot, G; Cailles, M; Berger, N; Barnovska, Z; Delsart, P; Lleres, A; Tisserant, S; Grivaz, J; Matricon, P; Bellagamba, L; Bertin, A; Bruschi, M; De castro, S; Semprini cesari, N; Fabbri, L; Rinaldi, L; Quayle, W B; Truong, T N L; Kondo, T; Haruyama, T; Ng, C; Do valle wemans, A; Almeida veloso, F M; Konovalov, S; Ziegler, J M; Su, D; Lukas, W; Prince, S; Ortega urrego, E J; Teuscher, R J; Knecht, N; Pretzl, K; Borer, C; Gadomski, S; Koch, B; Kuleshov, S; Brooks, W K; Antos, J; Kulkova, I; Chudoba, J; Chyla, J; Tomasek, L; Bazalova, M; Messmer, I; Tobias, J; Sundermann, J E; Kuehn, S S; Kluge, E; Scharf, V L; Barillari, T; Kluth, S; Menke, S; Weigell, P; Schwegler, P; Ziolkowski, M; Casado lechuga, P M; Garcia, C; Sanchez, J; Costa mezquita, M J; Valero biot, J A; Laporte, J; Nikolaidou, R; Virchaux, M; Nguyen, V T H; Charlton, D; Harrison, K; Slater, M W; Newman, P R; Parker, A M; Ward, P; Mcgarvie, S A; Kilvington, G J; D'auria, S; O'shea, V; Mcglone, H M; Fox, H; Henderson, R; Kartvelishvili, V; Davies, B; Sherwood, P; Fraser, J T; Lancaster, M A; Tseng, J C; 
Hays, C P; Apolle, R; Dixon, S D; Parker, K A; Gazis, E; Papadopoulou, T; Panagiotopoulou, E; Karastathis, N; Hershenhorn, A D; Milov, A; Groth-jensen, J; Bilokon, H; Miscetti, S; Canale, V; Rebuzzi, D M; Capua, M; Bagnaia, P; De salvo, A; Gentile, S; Safai tehrani, F; Solfaroli camillocci, E; Sasao, N; Tsunada, K; Massaro, G; Magrath, C A; Van kesteren, Z; Beker, M G; Van den wollenberg, W; Bugge, L; Buran, T; Read, A L; Gjelsten, B K; Banas, E A; Turnau, J; Derendarz, D K; Kisielewska, D; Chesneanu, D; Rotaru, M; Maurer, J B; Wong, M L; Lund-jensen, B; Asman, B; Jon-and, K B; Silverstein, S B; Johansen, M; Alexandrov, I; Iatsounenko, I; Krumshteyn, Z; Peshekhonov, V; Rybaltchenko, K; Samoylov, V; Cheplakov, A; Kekelidze, G; Lyablin, M; Teterine, V; Bednyakov, V; Kruchonak, U; Shiyakova, M M; Demichev, M; Denisov, S P; Fenyuk, A; Djobava, T; Salukvadze, G; Cetin, S A; Brau, B P; Pais, P R; Proudfoot, J; Van gemmeren, P; Zhang, Q; Beringer, J A; Ely, R; Leggett, C; Pengg, F X; Barnett, M R; Quick, R E; Williams, S; Gardner jr, R W; Huston, J; Brock, R; Wanotayaroj, C; Unel, G N; Taffard, A C; Frate, M; Baker, K O; Tipton, P L; Hutchison, A; Walsh, B J; Norberg, S R; Su, J; Tsybyshev, D; Caballero bejar, J; Ernst, M U; Wellenstein, H; Vudragovic, D; Vidic, I; Gorelov, I V; Toms, K; Alimonti, G; Petrucci, F; Kolanoski, H; Smith, J; Jeng, G; Watson, I J; Guimaraes ferreira, F; Miranda vieira xavier, F; Araujo pereira, R; Poffenberger, P; Sopko, V; Elmsheuser, J; Wittkowski, J; Glitza, K; Gorfine, G W; Ferrer soria, A; Fuster verdu, J A; Sanchis lozano, A; Reinmuth, G; Busato, E; Haywood, S J; Mcmahon, S J; Qian, W; Villani, E G; Laycock, P J; Poll, A J; Rizvi, E S; Foster, J M; Loebinger, F; Forti, A; Plano, W G; Brown, G J A; Kordas, K; Vegni, G; Ohsugi, T; Iwata, Y; Cherkaoui el moursli, R; Sahin, M; Akyazi, E; Carlsen, A; Kanwal, B; Cochran jr, J H; Aronnax, M V; Lockner, M J; Zhou, B; Levin, D S; Weaverdyck, C J; Grom, G F; Rudge, A; Ebenstein, W L; Jia, B; Yamaoka, J; Jared, R C; Wu, S L; Banerjee, S; Lu, Q; Hughes, E W; Alkire, S P; Degenhardt, J D; Lipeles, E D; Spencer, E N; Savine, A; Cheu, E C; Lampl, W; Veatch, J R; Roberts, K; Atkinson, M J; Odino, G A; Polesello, G; Martin, T; White, A P; Stephens, R; Grinbaum sarkisyan, E; Vartapetian, A; Yu, J; Sosebee, M; Thilagar, P A; Spurlock, B; Bonde, R; Filthaut, F; Klok, P; Hoummada, A; Ouchrif, M; Pellegrini, G; Rafi tatjer, J M; Navarro, G A; Blumenschein, U; Weingarten, J C; Mueller, D; Graber, L; Gao, Y; Bode, A; Capeans garrido, M D M; Carli, T; Wells, P; Beltramello, O; Vuillermet, R; Dudarev, A; Salzburger, A; Torchiani, C I; Serfon, C L G; Sloper, J E; Duperrier, G; Lilova, P T; Knecht, M O; Lassnig, M; Anders, G; Deviveiros, P; Young, C; Sforza, F; Shaochen, C; Lu, F; Wermes, N; Wienemann, P; Schwindt, T; Hansen, P H; Hansen, J B; Pingel, A M; Massol, N; Elles, S L; Hallewell, G D; Rozanov, A; Vacavant, L; Fournier, D A; Poggioli, L; Puzo, P M; Tanaka, R; Escalier, M A; Makovec, N; Rezynkina, K; De cecco, S; Cavalleri, P G; Massa, I; Zoccoli, A; Tanaka, S; Odaka, S; Mitsui, S; Tomasio pina, J A; Santos, H F; Satsounkevitch, I; Harkusha, S; Baranov, S; Nechaeva, P; Kayumov, F; Kazanin, V; Asai, M; Mount, R P; Nelson, T K; Smith, D; Kenney, C J; Malone, C M; Kobel, M; Friedrich, F; Grohs, J P; Jais, W J; O'neil, D C; Warburton, A T; Vincter, M; Mccarthy, T G; Groer, L S; Pham, Q T; Taylor, W J; La marra, D; Perrin, E; Wu, X; Bell, W H; Delitzsch, C M; Feng, C; Zhu, C; Tokar, S; Bruncko, D; Kupco, A; Marcisovsky, M; Jakoubek, T; 
Bruneliere, R; Aktas, A; Narrias villar, D I; Tapprogge, S; Mattmann, J; Kroha, H; Crespo, J; Korolkov, I; Cavallaro, E; Cabrera urban, S; Mitsou, V; Kozanecki, W; Mansoulie, B; Pabot, Y; Etienvre, A; Bauer, F; Chevallier, F; Bouty, A R; Watkins, P; Watson, A; Faulkner, P J W; Curtis, C J; Murillo quijada, J A; Grout, Z J; Chapman, J D; Cowan, G D; George, S; Boisvert, V; Mcmahon, T R; Doyle, A T; Thompson, S A; Britton, D; Smizanska, M; Campanelli, M; Butterworth, J M; Loken, J; Renton, P; Barr, A J; Issever, C; Short, D; Crispin ortuzar, M; Tovey, D R; French, R; Rozen, Y; Alexander, G; Kreisel, A; Conventi, F; Raulo, A; Schioppa, M; Susinno, G; Tassi, E; Giagu, S; Luci, C; Nisati, A; Cobal, M; Ishikawa, A; Jinnouchi, O; Bos, K; Verkerke, W; Vermeulen, J; Van vulpen, I B; Kieft, G; Mora, K D; Olsen, F; Rohne, O M; Pajchel, K; Nilsen, J K; Wosiek, B K; Wozniak, K W; Badescu, E; Jinaru, A; Bohm, C; Johansson, E K; Sjoelin, J B R; Clement, C; Buszello, C P; Huseynova, D; Boyko, I; Popov, B; Poukhov, O; Vinogradov, V; Tsiareshka, P; Skvorodnev, N; Soldatov, A; Chuguev, A; Gushchin, V; Yazici, E; Lutz, M S; Malon, D; Vanyashin, A; Lavrijsen, W; Spieler, H; Biesiada, J L; Bahr, M; Kong, J; Tatarkhanov, M; Ogren, H; Van kooten, R J; Cwetanski, P; Butler, J M; Shank, J T; Chakraborty, D; Ermoline, I; Sinev, N; Whiteson, D O; Corso radu, A; Huang, J; Werth, M P; Kastoryano, M; Meirose da silva costa, B; Namasivayam, H; Hobbs, J D; Schamberger jr, R D; Guo, F; Potekhin, M; Popovic, D; Gorisek, A; Sokhrannyi, G; Hofsajer, I W; Mandelli, L; Ceradini, F; Graziani, E; Giorgi, F; Zur nedden, M E G; Grancagnolo, S; Volpi, M; Nunes hanninger, G; Rados, P K; Milesi, M; Cuthbert, C J; Black, C W; Fink grael, F; Fincke-keeler, M; Keeler, R; Kowalewski, R V; Berghaus, F O; Qi, M; Davidek, T; Tas, P; Jakubek, J; Duckeck, G; Walker, R; Mitterer, C A; Harenberg, T; Sandvoss, S A; Del peso, J; Llorente merino, J; Gonzalez millan, V; Irles quiles, A; Crouau, M; Gris, P L Y; Liauzu, S; Romano saez, S M; Gallop, B J; Jones, T J; Austin, N C; Morris, J; Duerdoth, I; Thompson, R J; Kelly, M P; Leisos, A; Garas, A; Pizio, C; Venda pinto, B A; Kudin, L; Qian, J; Wilson, A W; Mietlicki, D; Long, J D; Sang, Z; Arms, K E; Rahimi, A M; Moss, J J; Oh, S H; Parker, S I; Parsons, J; Cunitz, H; Vanguri, R S; Sadrozinski, H; Lockman, W S; Martinez-mc kinney, G; Goussiou, A; Jones, A; Lie, K; Hasegawa, Y; Olcese, M; Gilewsky, V; Harrison, P F; Janus, M; Spangenberg, M; De, K; Ozturk, N; Pal, A K; Darmora, S; Bullock, D J; Oviawe, O; Derkaoui, J E; Rahal, G; Sircar, A; Frey, A S; Stolte, P; Rosien, N; Zoch, K; Li, L; Schouten, D W; Catinaccio, A; Ciapetti, M; Delruelle, N; Ellis, N; Farthouat, P; Hoecker, A; Klioutchnikova, T; Macina, D; Malyukov, S; Spiwoks, R D; Unal, G P; Vandoni, G; Petersen, B A; Pommes, K; Nairz, A M; Wengler, T; Mladenov, D; Solans sanchez, C A; Lantzsch, K; Schmieden, K; Jakobsen, S; Ritsch, E; Sciuccati, A; Alves dos santos, A M; Ouyang, Q; Zhou, M; Brock, I C; Janssen, J; Katzy, J; Anders, C F; Nilsson, B S; Bazan, A; Di ciaccio, L; Yildizkaya, T; Collot, J; Malek, F; Trocme, B S; Breugnon, P; Godiot, S; Adam bourdarios, C; Coulon, J; Duflot, L; Petroff, P G; Zerwas, D; Lieuvin, M; Calderini, G; Laporte, D; Ocariz, J; Gabrielli, A; Ohska, T K; Kurochkin, Y; Kantserov, V; Vasilyeva, L; Speransky, M; Smirnov, S; Antonov, A; Bulekov, O; Tikhonov, Y; Sargsyan, L; Vardanyan, G; Budick, B; Kocian, M L; Luitz, S; Young, C C; Grenier, P J; Kelsey, M; Black, J E; Kneringer, E; Jussel, P; Horton, A J; Beaudry, J; 
Chandra, A; Ereditato, A; Topfel, C M; Mathieu, R; Bucci, F; Muenstermann, D; White, R M; He, M; Urban, J; Straka, M; Vrba, V; Schumacher, M; Parzefall, U; Mahboubi, K; Sommer, P O; Koepke, L H; Bethke, S; Moser, H; Wiesmann, M; Walkowiak, W A; Fleck, I J; Martinez-perez, M; Sanchez sanchez, C A; Jorgensen roca, S; Accion garcia, E; Sainz ruiz, C A; Valls ferrer, J A; Amoros vicente, G; Vives torrescasana, R; Ouraou, A; Formica, A; Hassani, S; Watson, M F; Cottin buracchio, G F; Bussey, P J; Saxon, D; Ferrando, J E; Collins-tooth, C L; Hall, D C; Cuhadar donszelmann, T; Dawson, I; Duxfield, R; Argyropoulos, T; Brodet, E; Livneh, R; Shougaev, K; Reinherz, E I; Guttman, N; Beretta, M M; Vilucchi, E; Aloisio, A; Patricelli, S; Caprio, M; Cevenini, F; De vecchi, C; Livan, M; Rimoldi, A; Vercesi, V; Ayad, R; Mastroberardino, A; Ciapetti, G; Luminari, L; Rescigno, M; Santonico, R; Salamon, A; Del papa, C; Kurashige, H; Homma, Y; Tomoto, M; Horii, Y; Sugaya, Y; Hanagaki, K; Bobbink, G; Kluit, P M; Koffeman, E N; Van eijk, B; Lee, H; Eigen, G; Dorholt, O; Strandlie, A; Strzempek, P B; Dita, S; Stoicea, G; Chitan, A; Leven, S S; Moa, T; Brenner, R; Ekelof, T J C; Olshevskiy, A; Roumiantsev, V; Chlachidze, G; Zimine, N; Gusakov, Y; Grigalashvili, N; Mineev, M; Potrap, I; Barashkou, A; Shoukavy, D; Shaykhatdenov, B; Pikelner, A; Gladilin, L; Ammosov, V; Abramov, A; Arik, M; Sahinsoy, M; Uysal, Z; Azizi, K; Hotinli, S C; Zhou, S; Berger, E; Blair, R; Underwood, D G; Einsweiler, K; Garcia-sciveres, M A; Siegrist, J L; Kipnis, I; Dahl, O; Holland, S; Barbaro galtieri, A; Smith, P T; Parua, N; Franklin, M; Mercurio, K M; Tong, B; Pod, E; Cole, S G; Hopkins, W H; Guest, D H; Severini, H; Marsicano, J J; Abbott, B K; Wang, Q; Lissauer, D; Ma, H; Takai, H; Rajagopalan, S; Protopopescu, S D; Snyder, S S; Undrus, A; Popescu, R N; Begel, M A; Blocker, C A; Amelung, C; Mandic, I; Macek, B; Tucker, B H; Citterio, M; Troncon, C; Orestano, D; Taccini, C; Romeo, G L; Dova, M T; Taylor, G N; Gesualdi manhaes, A; Mcpherson, R A; Sobie, R; Taylor, R P; Dolezal, Z; Kodys, P; Slovak, R; Sopko, B; Vacek, V; Sanders, M P; Hertenberger, R; Meineck, C; Becks, K; Kind, P; Sandhoff, M; Cantero garcia, J; De la torre perez, H; Castillo gimenez, V; Ros, E; Hernandez jimenez, Y; Chadelas, R; Santoni, C; Washbrook, A J; O'brien, B J; Wynne, B M; Mehta, A; Vossebeld, J H; Landon, M; Teixeira dias castanheira, M; Cerrito, L; Keates, J R; Fassouliotis, D; Chardalas, M; Manousos, A; Grachev, V; Seliverstov, D; Sedykh, E; Cakir, O; Ciftci, R; Edson, W; Prell, S A; Rosati, M; Stroman, T; Jiang, H; Neal, H A; Li, X; Gan, K K; Smith, D S; Kruse, M C; Ko, B R; Leung fook cheong, A M; Cole, B; Angerami, A R; Greene, Z S; Kroll, J I; Van berg, R P; Forbush, D A; Lubatti, H; Raisher, J; Shupe, M A; Wolin, S; Oshita, H; Gaudio, G; Das, R; Konig, A C; Croft, V A; Harvey, A; Maaroufi, F; Melo, I; Greenwood jr, Z D; Shabalina, E; Mchedlidze, G; Drechsler, E; Rieger, J K; Blackston, M; Colombo, T

    2002-01-01

    ATLAS is a general-purpose experiment for recording proton-proton collisions at the LHC. The ATLAS collaboration consists of 144 participating institutions (June 1998) with more than 1750 physicists and engineers (700 from non-Member States). The detector design has been optimized to cover the largest possible range of LHC physics: searches for Higgs bosons and alternative schemes for the spontaneous symmetry-breaking mechanism; searches for supersymmetric particles, new gauge bosons, leptoquarks, and quark and lepton compositeness indicating extensions to the Standard Model and new physics beyond it; studies of the origin of CP violation via high-precision measurements of CP-violating B-decays; high-precision measurements of the third quark family such as the top-quark mass and decay properties, rare decays of B-hadrons, spectroscopy of rare B-hadrons, and $B^0_s$-mixing. The ATLAS detector, shown in the Figure, includes an inner tracking detector inside a 2 T solenoid providing an axial...

  19. Supporting ATLAS

    CERN Multimedia

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator. The installation of the feet is scheduled to finish during January 2004 with an installation precision at the 1 mm level despite their height of 5.3 metres. The manufacture was carried out in Russia (Company Izhorskiye Zavody in St. Petersburg), as part of a Russian and JINR Dubna in-kind contribution to ATLAS. Involved in the installation is a team from IHEP-Protvino (Russia), the ATLAS technical co-ordination team at CERN, and the CERN survey team. In all, about 15 people are involved. After the feet are in place, the barrel toroid magnet and the barrel calorimeters will be installed. This will keep the ATLAS team busy for the entire year 2004.

  20. 17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

    CERN Multimedia

    Mona Schweizer

    2008-01-01

    17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

  1. The high-precision x-ray tomograph for quality control of the ATLAS MDT muon spectrometer

    CERN Document Server

    Drakoulakos, D G; Maugain, J M; Rohrbach, F; Sedykh, Yu

    1997-01-01

    For the Large Hadron Collider (LHC) of the next millennium, a large general-purpose high-energy physics experiment, the ATLAS project, is being designed by a world-wide collaboration. One of its detectors, the ATLAS muon tracking detector, the MDT project, is on the scale of a very large industrial project: the design, construction and assembly of twelve hundred large muon drift chambers aim at exceptional quality in terms of accuracy, material reliability, assembly, and monitoring. This detector, based on the concept of the very high mechanical precision required by the physics goals, will use tomography as a quality control platform. An X-ray tomograph prototype, monitored by a set of interferometers, has been developed at CERN to provide high-quality control of the MDT chambers which will be built in the collaborating institutes of the ATLAS project. First results have been obtained on MDT prototypes showing the validity of the X-ray tomograph approach for mechanical control of the detec...

  2. Monitoring ATLAS L1 CTP data from P-BEAST

    CERN Document Server

    Roggel, Jens

    2017-01-01

    The ATLAS Level-1 Central Trigger Processor combines information from the calorimeters and the muon detectors and takes a decision to accept an event based on a list of selection criteria (trigger items). Busy signals from the detectors, and dead time generated by the Central Trigger Processor, prevent the buffers from becoming full. Visualisation of these data is useful for checking the functionality of the system. My project during the CERN summer student programme was to develop an application which produces plots of relevant Central Trigger Processor data and presents the results in an appropriate format for experts and users.
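
    A minimal sketch of the kind of plot such an application could produce, assuming hypothetical trigger-item counter values; the actual P-BEAST query interface and the real CTP data layout are not reproduced here.

```python
import matplotlib
matplotlib.use("Agg")  # render to file, no display required
import matplotlib.pyplot as plt

# Hypothetical counter readings for one trigger item, sampled every 10 s.
# In the real application these values would be retrieved from P-BEAST.
timestamps = [0, 10, 20, 30, 40, 50]                 # seconds since start of run
counts = [4.1e5, 4.3e5, 4.0e5, 3.2e5, 3.5e5, 4.2e5]  # counts per sampling interval
INTERVAL = 10.0                                      # sampling interval in seconds

rates = [c / INTERVAL for c in counts]               # convert counts to a rate in Hz

plt.figure(figsize=(6, 4))
plt.plot(timestamps, rates, marker="o")
plt.xlabel("time in run [s]")
plt.ylabel("item rate [Hz]")
plt.title("L1 trigger item rate (illustrative data)")
plt.tight_layout()
plt.savefig("l1_item_rate.png")
print("wrote l1_item_rate.png")
```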

  3. Cylinder monitoring program

    Energy Technology Data Exchange (ETDEWEB)

    Alderson, J.H. [Martin Marietta Energy Systems, Inc., Paducah, KY (United States)

    1991-12-31

    Cylinders containing depleted uranium hexafluoride (UF{sub 6}) in storage at the Department of Energy (DOE) gaseous diffusion plants, managed by Martin Marietta Energy Systems, Inc., are being evaluated to determine their expected storage life. Cylinders evaluated recently have been in storage service for 30 to 40 years. In the present environment, the remaining life for these storage cylinders is estimated to be 30 years or greater. The group of cylinders involved in recent tests will continue to be monitored on a periodic basis, and other storage cylinders will be observed as a statistical sample population. The program has been extended to all types of large-capacity UF{sub 6} cylinders.

  4. gFEX, the ATLAS Calorimeter Global Feature Extractor

    CERN Document Server

    Takai, Helio; The ATLAS collaboration; Chen, Hucheng

    2015-01-01

    The global feature extractor (gFEX) is a component of the Level-1 Calorimeter trigger Phase-I upgrade for the ATLAS experiment. It is intended to identify patterns of energy associated with the hadronic decays of high momentum Higgs, W, & Z bosons, top quarks, and exotic particles in real time at the LHC crossing rate. The single processor board will be implemented as a fast reconfigurable processor based on four large FPGAs. The board will receive coarse-granularity information from all the ATLAS calorimeters on 264 optical fibers with the data transferred at the 40 MHz LHC clock frequency. The gFEX will be controlled by a single system-on-chip processor, ZYNQ, that will be used to configure FPGAs, monitor board health, and interface to external signals. Although the board is being designed specifically for the ATLAS experiment, it is sufficiently generic that it could be used for fast data processing at other HEP or NP experiments. We will present the design of the gFEX board and discuss how it is being...

  5. The Savannah River Site's groundwater monitoring program

    International Nuclear Information System (INIS)

    1991-01-01

    The Environmental Protection Department/Environmental Monitoring Section (EPD/EMS) administers the Savannah River Site's (SRS) Groundwater Monitoring Program. During third quarter 1990 (July through September) EPD/EMS conducted routine sampling of monitoring wells and drinking water locations. EPD/EMS established two sets of flagging criteria in 1986 to assist in the management of sample results. The flagging criteria do not define contamination levels; instead they aid personnel in sample scheduling, interpretation of data, and trend identification. The flagging criteria are based on detection limits, background levels in SRS groundwater, and drinking water standards. All analytical results from third quarter 1990 are listed in this report, which is distributed to all site custodians. One or more analytes exceeded Flag 2 in 87 monitoring well series. Analytes exceeded Flag 2 for the first time since 1984 in 14 monitoring well series. In addition to groundwater monitoring, EPD/EMS collected drinking water samples from SRS drinking water systems supplied by wells. The drinking water samples were analyzed for radioactive constituents.
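
    A minimal sketch of a two-level flagging check in the spirit of the scheme described above; the analytes and limits below are invented for illustration and are not the SRS criteria.

```python
# Hypothetical two-level flagging of analytical results.  The Flag 1 and Flag 2
# limits are placeholders; the real criteria are based on detection limits,
# background levels in SRS groundwater, and drinking water standards.
FLAG_LIMITS = {
    # analyte: (flag1_limit, flag2_limit) in consistent (assumed) units
    "tritium": (20.0, 80.0),
    "nitrate": (5.0, 10.0),
    "gross_alpha": (5.0, 15.0),
}


def flag_result(analyte, value):
    """Return 0, 1 or 2 depending on which flagging level the value exceeds."""
    flag1, flag2 = FLAG_LIMITS[analyte]
    if value > flag2:
        return 2
    if value > flag1:
        return 1
    return 0


if __name__ == "__main__":
    samples = [("tritium", 95.0), ("nitrate", 7.2), ("gross_alpha", 2.1)]
    for analyte, value in samples:
        print(f"{analyte}: {value} -> Flag {flag_result(analyte, value)}")
```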

  6. Heavy Flavour Production and Properties at CMS and ATLAS

    CERN Document Server

    Barton, Adam Edward; The ATLAS collaboration

    2018-01-01

    Measurements of heavy flavour properties and production are an important part of the physics program of the ATLAS and CMS experiments at the LHC. They can potentially expose physics beyond the Standard Model, constrain supersymmetry, advance hadron spectroscopy and test QCD. In the past years, the two collaborations have published results in several different fields, such as rare decays, searches for new states, CP and P violation and quarkonia polarisation. In this note, some of the most recent results from ATLAS and CMS are summarised.

  7. Heavy Flavour Production and Properties at ATLAS and CMS

    CERN Document Server

    Barton, Adam Edward; The ATLAS collaboration

    2017-01-01

    Measurements of heavy flavour properties and production are an important part of the physics program of the ATLAS and CMS experiments at the LHC. They can potentially expose physics beyond the Standard Model, constrain supersymmetry, advance hadron spectroscopy and test QCD. In the past years, the two collaborations have published results in several different fields, such as rare decays, searches for new states, CP and P violation and quarkonia polarization. In this note, some of the most recent results from ATLAS and CMS are summarized.

  8. 10 CFR 600.341 - Monitoring and reporting program and financial performance.

    Science.gov (United States)

    2010-01-01

    10 CFR 600.341 (2010), Department of Energy Assistance Regulations, Post-Award Requirements: Monitoring and reporting program and financial performance.

  9. Characterizing, managing and monitoring the networks for the ATLAS data acquisition system

    CERN Document Server

    AUTHOR|(CDS)2068860

    2007-01-01

    Particle physics studies the constituents of matter and the interactions between them. Many of the elementary particles do not exist under normal circumstances in nature. However, they can be created and detected during energetic collisions of other particles, as is done in particle accelerators. The Large Hadron Collider (LHC) being built at CERN will be the world's largest circular particle accelerator, colliding protons at energies of 14 TeV. Only a very small fraction of the interactions will give rise to interesting phenomena. The collisions produced inside the accelerator are studied using particle detectors. ATLAS is one of the detectors built around the LHC accelerator ring. During its operation, it will generate a data stream of 64 Terabytes/s. A Trigger and Data Acquisition System (TDAQ) is connected to ATLAS; its function is to acquire digitized data from the detector and apply trigger algorithms to identify the interesting events. Achieving this requires the power of over 2000 computers plus an...

  10. Fast track segment finding in the Monitored Drift Tubes of the ATLAS Muon Spectrometer using a Legendre transform algorithm

    CERN Document Server

    Ntekas, Konstantinos; The ATLAS collaboration

    2018-01-01

    The upgrade of the ATLAS first-level muon trigger for the High-Luminosity LHC foresees incorporating the precise tracking of the Monitored Drift Tubes into the current system, based on Resistive Plate Chambers and Thin Gap Chambers, to improve the accuracy of the transverse momentum measurement and to control the single-muon trigger rate by suppressing low-quality fake triggers. The core of the MDT trigger algorithm is the segment identification and reconstruction, which is performed per MDT chamber. The reconstructed segment positions and directions are then combined to extract the muon candidate's transverse momentum. A fast pattern-recognition segment-finding algorithm based on the Legendre transform is proposed for the MDT trigger, implemented in an FPGA housed on an ATCA blade.
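
    A minimal illustration of the Legendre-transform idea for drift-tube hits, with invented binning, units and hit positions; it is not the FPGA implementation used for the trigger. A straight segment tangent to a drift circle with centre (x, y) and drift radius r satisfies d = x cos(theta) + y sin(theta) +/- r, so each hit votes for two sinusoidal bands in (theta, d) space and the segment appears as the most-voted bin.

```python
import math

N_THETA, N_D = 180, 200   # assumed accumulator granularity
D_MAX = 100.0             # mm, assumed accumulator range in d


def legendre_accumulate(hits):
    """hits: list of (x, y, drift_radius) in mm. Fill the (theta, d) accumulator."""
    acc = [[0] * N_D for _ in range(N_THETA)]
    for x, y, r in hits:
        for it in range(N_THETA):
            theta = math.pi * it / N_THETA
            base = x * math.cos(theta) + y * math.sin(theta)
            for d in (base + r, base - r):  # two tangent lines per hit
                idx = int(round((d + D_MAX) / (2 * D_MAX) * N_D))
                if 0 <= idx < N_D:
                    acc[it][idx] += 1
    return acc


def best_segment(acc):
    """Return (theta, d) of the most-voted accumulator bin."""
    it, idx = max(((i, j) for i in range(N_THETA) for j in range(N_D)),
                  key=lambda ij: acc[ij[0]][ij[1]])
    return math.pi * it / N_THETA, -D_MAX + idx * (2 * D_MAX) / N_D


if __name__ == "__main__":
    # Three hits whose drift circles are tangent to the line y = x
    # (theta = 135 degrees, d = 0 in the normal-form parametrisation).
    hits = [(10.0, 12.0, math.sqrt(2.0)),
            (20.0, 19.0, math.sqrt(0.5)),
            (30.0, 31.5, 1.5 / math.sqrt(2.0))]
    theta, d = best_segment(legendre_accumulate(hits))
    print(f"best segment: theta = {math.degrees(theta):.1f} deg, d = {d:.1f} mm")
```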

  11. Yucca Mountain Biological Resources Monitoring Program; Annual report, FY91

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-01-01

    The US Department of Energy (DOE) is required by the Nuclear Waste Policy Act of 1982 (as amended in 1987) to study and characterize Yucca Mountain as a possible site for a geologic repository for high-level nuclear waste. During site characterization, the DOE will conduct a variety of geotechnical, geochemical, geological, and hydrological studies to determine the suitability of Yucca Mountain as a repository. To ensure that site characterization activities (SCA) do not adversely affect the Yucca Mountain area, an environmental program has been implemented to monitor and mitigate potential impacts and to ensure that activities comply with applicable environmental regulations. This report describes the activities and accomplishments during fiscal year 1991 (FY91) for six program areas within the Terrestrial Ecosystem component of the YMP environmental program. The six program areas are Site Characterization Activities Effects, Desert Tortoises, Habitat Reclamation, Monitoring and Mitigation, Radiological Monitoring, and Biological Support.

  12. Wind Atlas for South Africa (WASA) Observational wind atlas for 10 met. stations in Northern, Western and Eastern Cape provinces

    DEFF Research Database (Denmark)

    Mortensen, Niels Gylling; Hansen, Jens Carsten; Kelly, Mark C.

    As part of the “Wind Atlas for South Africa” project, microscale modelling has been carried out for 10 meteorological stations in Northern, Western and Eastern Cape provinces. Wind speed and direction data from the ten 60-m masts have been analysed using the Wind Atlas Analysis and Application Program (WAsP 11). The wind-climatological inputs are the observed wind climates derived from the WAsP Climate Analyst. Topographical inputs are elevation maps constructed from SRTM 3 data and roughness length maps constructed from SWBD data and Google Earth satellite imagery. Summaries are given of the data measured at the 10 masts, mainly for a 3-year reference period from October 2010 to September 2013. The main result of the microscale modelling is observational wind atlas data sets, which can be used for verification of the mesoscale modelling. In addition, the microscale modelling itself has...
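
    As an illustration of one ingredient of such an observed wind climate, the sketch below fits Weibull scale and shape parameters to a sample of wind speeds using the common moment-based approximation k ~ (sigma/mean)^(-1.086); the sample values are invented and WAsP's own sector-wise procedure is not reproduced.

```python
import math


def weibull_fit(speeds):
    """Estimate Weibull (A, k) from wind speeds in m/s via the moment approximation."""
    n = len(speeds)
    mean = sum(speeds) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in speeds) / (n - 1))
    k = (sigma / mean) ** -1.086          # empirical shape-parameter approximation
    A = mean / math.gamma(1.0 + 1.0 / k)  # scale parameter recovered from the mean
    return A, k


if __name__ == "__main__":
    # Invented 10-minute mean wind speeds in m/s.
    speeds = [6.2, 7.8, 5.1, 9.4, 8.0, 6.6, 7.1, 10.3, 4.9, 7.5]
    A, k = weibull_fit(speeds)
    print(f"Weibull scale A = {A:.2f} m/s, shape k = {k:.2f}")
```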

  13. Wind Atlas for South Africa (WASA) Observational wind atlas for 10 met. stations in Northern, Western and Eastern Cape provinces

    DEFF Research Database (Denmark)

    Mortensen, Niels Gylling; Hansen, Jens Carsten; Kelly, Mark C.

    As part of the “Wind Atlas for South Africa” project, microscale modelling has been carried out for 10 meteorological stations in Northern, Western and Eastern Cape provinces. Wind speed and direction data from the ten 60-m masts have been analysed using the Wind Atlas Analysis and Application Program (WAsP 11). The wind-climatological inputs are the observed wind climates derived from the WAsP Climate Analyst. Topographical inputs are elevation maps constructed from SRTM 3 data and roughness length maps constructed from SWBD data and Google Earth satellite imagery. Summaries are given of the data measured at the 10 masts, mainly for a 3-year reference period from October 2010 to September 2013. The main result of the microscale modelling is observational wind atlas data sets, which can be used for verification of the mesoscale modelling. In addition, the microscale modelling itself has...

  14. Constraining Dark Matter with ATLAS

    CERN Document Server

    Czodrowski, Patrick; The ATLAS collaboration

    2017-01-01

    The presence of a non-baryonic dark matter component in the Universe is inferred from the observation of its gravitational interaction. If dark matter interacts weakly with the Standard Model it would be produced at the LHC, escaping the detector and leaving a large missing transverse momentum as its signature. The ATLAS experiment has developed a broad and systematic search program for dark matter production in LHC collisions. The results of these searches on the first 13 TeV data, their interpretation, and the design and possible evolution of the search program will be presented.

  15. Heavy Ion Physics Prospects with the ATLAS Detector at the LHC

    CERN Document Server

    Grau, N

    2008-01-01

    The next great energy frontier in relativistic heavy ion collisions is quickly approaching with the completion of the Large Hadron Collider, and the ATLAS experiment is poised to make important contributions to understanding QCD matter under extreme conditions. While designed for high-pT measurements in high-energy p+p collisions, the detector is well suited to study many aspects of heavy ion collisions, from bulk phenomena to high-pT and heavy flavor physics. With its large and finely segmented electromagnetic and hadronic calorimeters, the ATLAS detector excels in measurements of photons and jets, observables of great interest at the LHC. In this talk, we highlight the performance of the ATLAS detector for Pb+Pb collisions at the LHC with special emphasis on a key feature of the ATLAS physics program: jet and direct photon measurements.

  16. Computer-aided performance monitoring program at Diablo Canyon

    International Nuclear Information System (INIS)

    Nelson, T.; Glynn, R. III; Kessler, T.C.

    1992-01-01

    This paper describes the thermal performance monitoring program at Pacific Gas & Electric Company's (PG&E's) Diablo Canyon Nuclear Power Plant. The plant performance monitoring program at Diablo Canyon uses the THERMAC performance monitoring and analysis computer software provided by Expert-EASE Systems. THERMAC is used to collect performance data from the plant process computers, condition that data to adjust for measurement errors and missing data points, evaluate cycle and component-level performance, archive the data for trend analysis and generate performance reports. The current status of the program is that, after a fair amount of "tuning" of the basic "thermal kit" models provided with the initial THERMAC installation, we have successfully baselined both units to cycle isolation test data from previous reload cycles. Over the course of the past few months, we have accumulated enough data to generate meaningful performance trends and, as a result, have been able to use THERMAC to track a condenser fouling problem that was costing enough megawatts to attract corporate-level attention. Trends from THERMAC clearly related the megawatt loss to a steadily degrading condenser cleanliness factor and verified the subsequent gain in megawatts after the condenser was cleaned. In the future, we expect to rebaseline THERMAC to a beginning-of-cycle (BOC) data set and to use the program to help track feedwater nozzle fouling.
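
    A minimal sketch of the kind of trend relation mentioned above, fitting lost output against condenser cleanliness factor with an ordinary least-squares line; the data points are invented and none of the THERMAC models are reproduced here.

```python
def linear_fit(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx


if __name__ == "__main__":
    cleanliness = [0.95, 0.92, 0.88, 0.85, 0.82]  # condenser cleanliness factor (hypothetical)
    mw_loss = [0.5, 1.8, 3.9, 5.6, 7.2]           # lost electrical output in MWe (hypothetical)
    slope, intercept = linear_fit(cleanliness, mw_loss)
    print(f"MW loss ~ {slope:.1f} * cleanliness + {intercept:.1f}")
```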

  17. Probabilistic liver atlas construction.

    Science.gov (United States)

    Dura, Esther; Domingo, Juan; Ayala, Guillermo; Marti-Bonmati, Luis; Goceri, E

    2017-01-13

    Anatomical atlases are 3D volumes or shapes representing an organ or structure of the human body. They contain either the prototypical shape of the object of interest together with other shapes representing its statistical variations (statistical atlas) or a probability map of belonging to the object (probabilistic atlas). Probabilistic atlases are mostly built with simple estimations involving only the data at each spatial location. A new method for probabilistic atlas construction that uses a generalized linear model is proposed. This method aims to improve the estimation of the probability of being covered by the liver. Furthermore, all methods to build an atlas involve prior coregistration of the available sample of shapes. The influence of the geometrical transformation adopted for registration on the quality of the final atlas has not been sufficiently investigated. The ability of an atlas to adapt to a new case is one of the most important quality criteria that should be taken into account. The presented experiments show that some methods for atlas construction are severely affected by the previous coregistration step. We show the good performance of the new approach. Furthermore, the results suggest that extremely flexible registration methods are not always beneficial, since they can reduce the variability of the atlas and hence its ability to give sensible probability values when used as an aid in segmentation of new cases.
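
    A minimal sketch of the "simple" voxel-wise estimate the abstract contrasts with its generalized-linear-model approach: after coregistration, the probability at each voxel is taken as the fraction of subjects whose binary segmentation covers it. The array shapes and random masks below are synthetic placeholders.

```python
import numpy as np


def probabilistic_atlas(masks):
    """masks: array (n_subjects, nx, ny, nz) of 0/1 segmentations, assumed coregistered.
    Returns the voxel-wise probability map of shape (nx, ny, nz)."""
    return np.asarray(masks, dtype=float).mean(axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Ten synthetic "registered segmentations" on a small grid, for illustration only.
    masks = rng.integers(0, 2, size=(10, 8, 8, 8))
    atlas = probabilistic_atlas(masks)
    print("atlas shape:", atlas.shape)
    print("probability at centre voxel:", atlas[4, 4, 4])
```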

  18. The use of safeguards data for process monitoring in the Advanced Test Line for Actinide Separations

    International Nuclear Information System (INIS)

    Barnes, J.W.; Yarbro, S.L.

    1987-01-01

    Los Alamos is constructing an integrated process monitoring/materials control and accounting (PM/MC&A) system in the Advanced Testing Line for Actinide Separations (ATLAS) at the Los Alamos Plutonium Facility. The ATLAS will test and demonstrate new methods for aqueous processing of plutonium. The ATLAS will also develop, test, and demonstrate the concepts for integrated process monitoring/materials control and accounting. We describe how this integrated PM/MC&A system will function and provide benefits to both process research and materials accounting personnel.

  19. Construction and test of sMDT chambers for the ATLAS muon spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Takasugi, Eric; Schmidt-Sommerfeld, Korbinian; Kortner, Oliver; Kroha, Hubert [Max-Planck-Institut fuer Physik, Muenchen (Germany)

    2016-07-01

    In the ATLAS muon spectrometer, Monitored Drift Tube chambers (MDTs) are used for precise tracking measurements. In order to increase the geometric acceptance and rate capability, new chambers have been designed and are under construction to be installed in ATLAS during the winter shutdown of 2016/17 of the LHC. The new chambers have a drift tube diameter of 15 mm (compared to 30 mm of the other MDTs) and are therefore called sMDT chambers. This presentation reports on the progress of chamber construction and on the results of quality assurance tests.

  20. Ageing test of the ATLAS RPCs at X5-GIF

    International Nuclear Information System (INIS)

    Aielli, G.; Alviggi, M.; Ammosov, V.

    2004-01-01

    An ageing test of three ATLAS production RPC stations is underway at X5-GIF, the CERN irradiation facility. The chamber efficiencies are monitored using cosmic rays triggered by a scintillator hodoscope. Higher statistics measurements are made when the X5 muon beam is available. We report here the measurements of the efficiency versus operating voltage at different source intensities, up to a maximum counting rate of about 700 Hz/cm². We describe the performance of the chambers during the test up to an overall ageing of 4 ATLAS equivalent years, corresponding to an integrated charge of 0.12 C/cm², including a safety factor of 5.
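
    For illustration only (not the analysis of this paper), an efficiency-versus-voltage scan of the kind described is commonly fitted with a sigmoid to extract the plateau efficiency and the knee of the curve; the scan points below are invented.

    # Illustrative sigmoid fit of an efficiency plateau; scan points are invented.
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(v, eff_max, v_half, width):
        """Plateau model: eff_max / (1 + exp(-(v - v_half) / width))."""
        return eff_max / (1.0 + np.exp(-(v - v_half) / width))

    hv  = np.array([8.8, 9.0, 9.2, 9.4, 9.6, 9.8, 10.0])       # operating voltage, kV
    eff = np.array([0.10, 0.35, 0.70, 0.88, 0.95, 0.97, 0.97])  # measured efficiency

    (eff_max, v_half, width), _ = curve_fit(sigmoid, hv, eff, p0=[1.0, 9.2, 0.1])
    v95 = v_half + width * np.log(19.0)   # voltage where the fit reaches 95% of the plateau
    print(f"plateau efficiency = {eff_max:.3f}, 95% point = {v95:.2f} kV")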

  1. Yucca Mountain biological resources monitoring program; Annual report FY92

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-02-01

    The US Department of Energy (DOE) is required by the Nuclear Waste Policy Act of 1982 (as amended in 1987) to study and characterize Yucca Mountain as a potential site for a geologic repository for high-level nuclear waste. During site characterization, the DOE will conduct a variety of geotechnical, geochemical, geological, and hydrological studies to determine the suitability of Yucca Mountain as a potential repository. To ensure that site characterization activities (SCA) do not adversely affect the environment at Yucca Mountain, an environmental program has been implemented to monitor and mitigate potential impacts and ensure activities comply with applicable environmental regulations. This report describes the activities and accomplishments of EG&G Energy Measurements, Inc. (EG&G/EM) during fiscal year 1992 (FY92) for six program areas within the Terrestrial Ecosystem component of the YMP environmental program. The six program areas are Site Characterization Effects, Desert Tortoises, Habitat Reclamation, Monitoring and Mitigation, Radiological Monitoring, and Biological Support.

  2. Monitoring underground movements

    CERN Multimedia

    Antonella Del Rosso

    2015-01-01

    On 16 September 2015 at 22:54:33 (UTC), an 8.3-magnitude earthquake struck off the coast of Chile. 11,650 km away, at CERN, a new-generation instrument – the Precision Laser Inclinometer (PLI) – recorded the extreme event. The PLI is being tested by a JINR/CERN/ATLAS team to measure the movements of underground structures and detectors. [Photo caption: The Precision Laser Inclinometer during assembly. The instrument has proven very accurate when taking measurements of the movements of underground structures at CERN.] The Precision Laser Inclinometer is an extremely sensitive device capable of monitoring ground angular oscillations in a frequency range of 0.001-1 Hz with a precision of 10^-10 rad/√Hz. The instrument is currently installed in one of the old ISR transfer tunnels (TT1) built in 1970. However, its final destination could be the ATLAS cavern, where it would measure and monitor the fine movements of the underground structures, which can affect the precise posi...

  3. The simulation for the ATLAS experiment Present status and outlook

    CERN Document Server

    Rimoldi, A; Gallas, M; Nairz, A; Boudreau, J; Tsulaia, V; Costanzo, D

    2004-01-01

    The simulation program for the ATLAS experiment is presently operational in a full OO environment. This important physics application has been successfully integrated into ATLAS's common analysis framework, ATHENA. In the last year, following a well-stated strategy of transition from a GEANT3- to a GEANT4-based simulation, a careful validation programme confirmed the reliability, performance and robustness of this new tool, as well as its consistency with the results of the previous simulation. Generation, simulation and digitization steps on different sets of full physics events were tested for performance. The same software used to simulate the full ATLAS detector is also used with testbeam configurations. Comparisons to real data in the testbeam validate both the detector description and the physics processes within each subcomponent. In this paper we present the current status of ATLAS GEANT4 simulation, describe the functionality tests performed during its validation phase, and the experience with distrib...

  4. Graphic overview system for DOE's effluent and environmental monitoring programs

    International Nuclear Information System (INIS)

    Burson, Z.G.; Elle, D.R.

    1980-03-01

    The Graphic Overview System is a compilation of photos, maps, overlays, and summary information of environmental programs and related data for each DOE site. The information consists of liquid and airborne effluent release points, on-site storage locations, monitoring locations, aerial survey results, population distributions, wind roses, and other related information. The relationships of different environmental programs are visualized through the use of colored overlays. Trends in monitoring data, effluent releases, and on-site storage data are also provided as a corollary to the graphic display of monitoring and release points. The results provide a working tool with which DOE management (headquarters and field offices) can place in proper perspective key aspects of all environmental programs and related data, and the resulting public impact of each DOE site

  5. Physics potential of ATLAS upgrades at HL-LHC

    CERN Document Server

    Testa, Marianna; The ATLAS collaboration

    2017-01-01

    The High-Luminosity Large Hadron Collider (HL-LHC) is expected to start in 2026 and to provide an integrated luminosity of 3000 fb^-1 in ten years, a factor of 10 more than what will be collected by 2023. This large dataset will allow ATLAS to perform precise measurements in the Higgs sector and improve searches for new physics at the TeV scale. The luminosity needed is L ∼ 7.5 × 10^34 cm^-2 s^-1, corresponding to ∼200 additional proton-proton pile-up interactions per bunch crossing. To face such a harsh environment, some sub-detectors of the ATLAS experiment will be upgraded or completely replaced. The performance of the new or upgraded ATLAS sub-detectors is presented, focusing in particular on the new inner tracker and a proposed high-granularity timing detector. The impact of these upgrades on crucial physics measurements of the HL-LHC program is also shown.
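
    As a back-of-the-envelope check that the quoted luminosity and pile-up are consistent, the mean number of interactions per crossing follows from mu = L·sigma_inel / f_crossing; the inelastic cross-section (~80 mb) and the number of colliding bunches (~2800) used below are assumed typical values, not figures from this contribution.

    # Rough pile-up estimate; cross-section and bunch numbers are assumed typical values.
    lumi       = 7.5e34            # cm^-2 s^-1
    sigma_inel = 80e-27            # ~80 mb expressed in cm^2
    n_bunches  = 2800              # colliding bunch pairs (assumed)
    f_rev      = 11245.0           # LHC revolution frequency, Hz

    crossing_rate = n_bunches * f_rev               # bunch crossings per second
    mu = lumi * sigma_inel / crossing_rate          # mean pp interactions per crossing
    print(f"mean pile-up per crossing ~ {mu:.0f}")  # ~190, consistent with the quoted ~200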

  6. Early operational experience with uranium beams at ATLAS

    International Nuclear Information System (INIS)

    Pardo, R.C.; Nolen, J.A.; Specht, J.R.

    1994-01-01

    The first acceleration of a uranium beam using the new ATLAS Positive Ion Injector(PII) took place on July 27, 1992. Since that first run, ATLAS and PII have completely achieved the design goals of the project and now provide high-current heavy-ion beams with energies beyond the Coulomb barrier for the research program. ATLAS routinely and reliably provides low-emittance beams of uranium and other very high-mass ions at energies in excess of 6 MeV/n with available on-target beam intensities exceeding 5 particle nA. The expectation that the beam quality for heavy beams would be significantly better than that of the tandem injector has been fully realized. The longitudinal emittance of beams from the PII is typically one-third that of similar beams from the tandem injector. In the past year ATLAS provided uranium beams for approximately 19% of the total research beam time, while beams with A≥100 were used 33% of the time. The system performance and techniques developed which made for this successful result will be discussed. Improvement projects underway will be presented and future goals described
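
    To put the quoted beam intensity in perspective, a short conversion sketch: a particle current of 5 particle-nA corresponds to roughly 3 × 10^10 ions per second, and at 6 MeV/nucleon for A = 238 that is only a few watts of beam power. The arithmetic below uses these example numbers from the abstract.

    # Conversion of particle current to ion rate and beam power (example numbers).
    e_charge = 1.602e-19                          # C per elementary charge
    particle_nA = 5.0                             # quoted on-target particle current
    ions_per_s = particle_nA * 1e-9 / e_charge    # particle nA counts ions, not charge states

    energy_per_ion_MeV = 6.0 * 238                # 6 MeV/nucleon for 238U
    beam_power_W = ions_per_s * energy_per_ion_MeV * 1.602e-13   # 1 MeV = 1.602e-13 J
    print(f"{ions_per_s:.2e} ions/s, beam power ~ {beam_power_W:.1f} W")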

  7. The Westinghouse Hanford Company Operational Environmental Monitoring Program CY-93

    International Nuclear Information System (INIS)

    Schmidt, J.W.

    1993-10-01

    The Operational Environmental Monitoring Program (OEMP) provides facility-specific environmental monitoring to protect the environment adjacent to facilities under the responsibility of Westinghouse Hanford Company (WHC) and assure compliance with WHC requirements and local, state, and federal environmental regulations. The objectives of the OEMP are to evaluate: compliance with federal (DOE, EPA), state, and internal WHC environmental radiation protection requirements and guides; performance of radioactive waste confinement systems; and trends of radioactive materials in the environment at and adjacent to nuclear facilities and waste disposal sites. This paper identifies the monitoring responsibilities and current program status for each area of responsibility

  8. ATLAS Future Plans: Upgrade and the Physics with High Luminosity

    Directory of Open Access Journals (Sweden)

    Rajagopalan S.

    2013-05-01

    The ATLAS experiment is planning a series of detector upgrades to cope with the planned increases in instantaneous luminosity and multiple interactions per crossing to maintain its physics capabilities. During the coming decade, the Large Hadron Collider will collide protons on protons at a center-of-mass energy up to 14 TeV with luminosities steadily increasing in a phased approach to over 5 × 10^34 cm^-2 s^-1. The resulting large data sets will significantly enhance the physics reach of the ATLAS detector, building on the recent discovery of the Higgs-like boson. The planned detector upgrades being designed to cope with the increasing luminosity and its impact on the ATLAS physics program will be discussed.

  9. EnviroAtlas

    Data.gov (United States)

    City and County of Durham, North Carolina — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  10. Integration of structural health monitoring solutions onto commercial aircraft via the Federal Aviation Administration structural health monitoring research program

    Science.gov (United States)

    Swindell, Paul; Doyle, Jon; Roach, Dennis

    2017-02-01

    The Federal Aviation Administration (FAA) started a research program in structural health monitoring (SHM) in 2011. The program's goal was to understand the technical gaps of implementing SHM on commercial aircraft and the potential effects on FAA regulations and guidance. The program evolved into a demonstration program consisting of a team from Sandia National Labs Airworthiness Assurance NDI Center (AANC), the Boeing Corporation, Delta Air Lines, Structural Monitoring Systems (SMS), Anodyne Electronics Manufacturing Corp (AEM) and the FAA. This paper will discuss the program from the selection of the inspection problem, the SHM system (Comparative Vacuum Monitoring-CVM) that was selected as the inspection solution and the testing completed to provide sufficient data to gain the first approved use of an SHM system for routine maintenance on commercial US aircraft.

  11. A Roadmap to Continuous Integration for ATLAS Software Development

    Science.gov (United States)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The ATLAS software infrastructure facilitates the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million lines of C++ and 1.4 million lines of Python code. The ATLAS offline code management system is a powerful, flexible framework for processing new package version requests, probing code changes in the Nightly Build System, migrating to new platforms and compilers, deploying production releases for worldwide access and supporting physicists with tools and interfaces for efficient software use. It maintains a multi-stream, parallel development environment with about 70 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, for verifying patches to existing software and for migrating to new platforms and compilers. The system's evolution is currently aimed at the adoption of modern continuous integration (CI) practices focused on building nightly releases early and often, with rigorous unit and integration testing. This paper describes the CI incorporation program for the ATLAS software infrastructure. It brings modern open-source tools such as Jenkins and GitLab into the ATLAS Nightly System, rationalizes hardware resource allocation and administrative operations, and provides improved feedback and means for developers to fix broken builds promptly. Once adopted, ATLAS CI practices will improve and accelerate innovation cycles and result in increased confidence in new software deployments. The paper reports the status of Jenkins integration with the ATLAS Nightly System as well as short- and long-term plans for the incorporation of CI practices.
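
    A deliberately generic sketch of the kind of nightly build-and-test step such a CI system automates; this is not the ATLAS Nightly System or its actual configuration, and the commands and paths below are placeholders.

    # Generic nightly build-and-test step; commands and paths are placeholders.
    import datetime
    import subprocess
    import sys

    def run(step_name, cmd):
        """Run one CI step and fail fast, so broken builds are flagged promptly."""
        print(f"[{datetime.datetime.now():%Y-%m-%d %H:%M}] {step_name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"{step_name} FAILED (exit code {result.returncode})")
            sys.exit(result.returncode)

    run("configure",  ["cmake", "-S", "source", "-B", "build"])
    run("build",      ["cmake", "--build", "build", "-j", "8"])
    run("unit tests", ["ctest", "--test-dir", "build", "--output-on-failure"])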

  12. Online radiation dose measurement system for ATLAS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Mandic, I.; Cindro, V.; Dolenc, I.; Gorisek, A.; Kramberger, G. [Jozef Stefan Institute, Jamova 39, Ljubljana (Slovenia)]; Mikuz, M. [Jozef Stefan Institute, Jamova 39, Ljubljana (Slovenia); Faculty of Mathematics and Physics, University of Ljubljana (Slovenia)]; Bronner, J.; Hartet, J. [Physikalisches Institut, Universitat Freiburg, Hermann-Herder-Str. 3, Freiburg (Germany)]; Franz, S. [CERN, Geneva (Switzerland)]

    2009-07-01

    In experiments at the Large Hadron Collider, detectors and electronics will be exposed to high fluxes of photons, charged particles and neutrons. Damage caused by the radiation will influence the performance of the detectors. It will therefore be important to continuously monitor the radiation dose in order to follow the level of degradation of detectors and electronics and to correctly predict future radiation damage. A system for online radiation monitoring using semiconductor radiation sensors at a large number of locations has been installed in the ATLAS experiment. The ionizing dose in SiO2 will be measured with RadFETs, and the displacement damage in silicon, in units of 1-MeV(Si) equivalent neutron fluence, with p-i-n diodes. At the 14 monitoring locations where the highest radiation levels are expected, the fluence of thermal neutrons will be measured from the current gain degradation in dedicated bipolar transistors. The design of the system and tests of its performance in a mixed radiation field are described in this paper. First results from this test campaign confirm that doses can be measured with sufficient sensitivity (mGy for total ionizing dose measurements, 10^9 n/cm² for NIEL (non-ionizing energy loss) measurements, 10^12 n/cm² for thermal neutrons) and accuracy (about 20%) for usage in the ATLAS detector.
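
    As an illustration of how a p-i-n diode reading translates into an equivalent fluence (not the ATLAS calibration itself), the standard relation ΔI = α·Φ_eq·V links the increase of the leakage current in the depleted silicon volume to the 1-MeV(Si) neutron-equivalent fluence; the damage constant and the diode parameters below are assumed, typical values.

    # Illustrative NIEL estimate from a p-i-n diode; all numerical values are assumed.
    alpha   = 4e-17              # A/cm, typical current-related damage constant near 20 C
    volume  = 0.1 * 0.1 * 0.03   # cm^3, assumed depleted volume of a small diode
    delta_I = 1.2e-7             # A, assumed measured increase of leakage current

    phi_eq = delta_I / (alpha * volume)   # 1-MeV(Si) neutron-equivalent fluence
    print(f"equivalent fluence ~ {phi_eq:.1e} n/cm^2")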

  13. Ontario Hydro's environmental monitoring program for HV [high voltage] transmission line projects

    International Nuclear Information System (INIS)

    Braekevelt, P.N.

    1991-01-01

    Responsible monitoring and control of environmental impacts is key to obtaining future needed approvals for new high voltage (HV) transmission line projects. Ontario Hydro's environmental monitoring program was developed as a highly structured, self-imposed monitoring system to relieve government agencies of the responsibility of developing a similar external program. The goal was to be self-policing. The historical development, program structure, standards, priority ratings, documentation, communication and computerization of the program are described. The most effective way to minimize environmental impacts is to avoid sensitive features at the route selection stage, well before any construction takes place. The environmental monitoring program is based on the following blueprint: each crew member is responsible for environmental protection; environmental problems are to be resolved at the lowest level possible; potential concerns should be resolved before they become problems; known problems should be dealt with quickly to minimize impacts; team members should work cooperatively; and formal and regular communication is emphasized.

  14. Analysis Facility infrastructure (TIER3) for ATLAS High Energy physics experiment

    International Nuclear Information System (INIS)

    Gonzalez de la Hoz, S.; March, L.; Ros, E.; Sanchez, J.; Amoros, G.; Fassi, F.; Fernandez, A.; Kaci, M.; Lamas, A.; Salt, J.

    2007-01-01

    The ATLAS project has been asked to define the scope and role of Tier-3 resources (facilities or centres) within the existing ATLAS computing model, activities and facilities. This document attempts to address these questions by describing Tier-3 resources generally, and their relationship to the ATLAS Software and Computing Project. Originally the tiered computing model came out of MONARC (see http://monarc.web.cern.ch/MONARC/) work and was predicated upon the network being a scarce resource. In this model the tiered hierarchy ranged from the Tier-0 (CERN) down to the desktop or workstation (Tier-3). The focus on defining the roles of each tiered component has evolved, with the initial emphasis on the Tier-0 (CERN) and Tier-1 (national centres) definition and roles. The various LHC projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2s (regional centres) as part of their projects. Tier-3s, on the other hand, have (implicitly and sometimes explicitly) been defined as whatever an institution could construct to support their physics goals using institutional and otherwise leveraged resources, and therefore have not been considered to be part of the official ATLAS Research Program computing resources nor under their control, meaning there is no formal MOU process to designate sites as Tier-3s and no formal control of the program over the Tier-3 resources. Tier-3s are the responsibility of individual institutions to define, fund, deploy and support. However, having noted this, we must also recognize that Tier-3s must exist and will have implications for how our computing model should support ATLAS physicists. Tier-3 users will want to access data and simulations and will want to enable their Tier-3 resources to support their analysis and simulation work. Tier-3s are an important resource for physicists to analyze LHC (Large Hadron Collider) data. This document will define how Tier-3s should best interact with the ATLAS computing model, detail the

  15. The community environmental monitoring program: a model for stakeholder involvement in environmental monitoring

    International Nuclear Information System (INIS)

    Hartwell, William T.; Shafer, David S.

    2007-01-01

    Since 1981, the Community Environmental Monitoring Program (CEMP) has involved stakeholders directly in its daily operation and data collection, as well as in dissemination of information on radiological surveillance in communities surrounding the Nevada Test Site (NTS), the primary location where the United States (US) conducted nuclear testing until 1992. The CEMP is funded by the US Department of Energy's National Nuclear Security Administration, and is administered by the Desert Research Institute (DRI) of the Nevada System of Higher Education. The CEMP provides training workshops for stakeholders involved in the program, and educational outreach to address public concerns about health risk and environmental impacts from past and ongoing NTS activities. The network includes 29 monitoring stations located across an approximately 160,000 km² area of Nevada, Utah and California in the southwestern US. The principal radiological instruments are pressurized ion chambers for measuring gamma radiation, and particulate air samplers, primarily for alpha/beta detection. Stations also employ a full suite of meteorological instruments, allowing for improved interpretation of the effects of meteorological events on background radiation levels. Station sensors are wired to state-of-the-art data-loggers that are capable of several weeks of on-site data storage, and that work in tandem with a communications system that integrates DSL and wireless internet, land line and cellular phone, and satellite technologies for data transfer. Data are managed through a platform maintained by the Western Regional Climate Center (WRCC) that DRI operates for the U.S. National Oceanic and Atmospheric Administration. The WRCC platform allows for near real-time upload and display of current monitoring information in tabular and graphical formats on a public web site. Archival data for each station are also available on-line, providing the ability to perform trending analyses or calculate site

  16. Dear ATLAS colleagues,

    CERN Multimedia

    PH Department

    2008-01-01

    We are collecting old pairs of glasses to take out to Mali, where they can be re-used by people there. The price for a pair of glasses can often exceed 3 months salary, so they are prohibitively expensive for many people. If you have any old spectacles you can donate, please put them in the special box in the ATLAS secretariat, bldg.40-4-D01 before the Christmas closure on 19 December so we can take them with us when we leave for Africa at the end of the month. (more details in ATLAS e-news edition of 29 September 2008: http://atlas-service-enews.web.cern.ch/atlas-service-enews/news/news_mali.php) many thanks! Katharine Leney co-driver of the ATLAS car on the Charity Run to Mali

  17. Encoding atlases by randomized classification forests for efficient multi-atlas label propagation.

    Science.gov (United States)

    Zikic, D; Glocker, B; Criminisi, A

    2014-12-01

    We propose a method for multi-atlas label propagation (MALP) based on encoding the individual atlases by randomized classification forests. Most current approaches perform a non-linear registration between all atlases and the target image, followed by a sophisticated fusion scheme. While these approaches can achieve high accuracy, in general they do so at high computational cost. This might negatively affect the scalability to large databases and experimentation. To tackle this issue, we propose to use a small and deep classification forest to encode each atlas individually in reference to an aligned probabilistic atlas, resulting in an Atlas Forest (AF). Our classifier-based encoding differs from current MALP approaches, which represent each point in the atlas either directly as a single image/label value pair, or by a set of corresponding patches. At test time, each AF produces one probabilistic label estimate, and their fusion is done by averaging. Our scheme performs only one registration per target image, achieves good results with a simple fusion scheme, and allows for efficient experimentation. In contrast to standard forest schemes, in which each tree would be trained on all atlases, our approach retains the advantages of the standard MALP framework. The target-specific selection of atlases remains possible, and incorporation of new scans is straightforward without retraining. The evaluation on four different databases shows accuracy within the range of the state of the art at a significantly lower running time. Copyright © 2014 Elsevier B.V. All rights reserved.
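
    A toy sketch of the atlas-forest idea under strong simplifications (2D synthetic images, intensity plus aligned coordinates as features, one small forest per atlas, fusion by simple averaging); it is not the authors' implementation.

    # Toy atlas-forest sketch; images and labels are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    shape = (32, 32)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]

    def make_case():
        """Aligned toy case: an image and the binary label map of a central blob."""
        labels = (((xx - 16) ** 2 + (yy - 16) ** 2) < 64).astype(int)
        image = labels + 0.3 * rng.standard_normal(shape)
        return image, labels

    def features(image):
        """Per-pixel features: intensity plus the (aligned) pixel coordinates."""
        return np.column_stack([image.ravel(), xx.ravel(), yy.ravel()])

    # Encode each atlas by its own small classification forest ("Atlas Forest").
    atlas_forests = []
    for _ in range(5):
        image, labels = make_case()
        forest = RandomForestClassifier(n_estimators=8, random_state=0)
        atlas_forests.append(forest.fit(features(image), labels.ravel()))

    # Label a new target: every forest predicts, and fusion is a simple average.
    target_image, target_truth = make_case()
    prob = np.mean([f.predict_proba(features(target_image))[:, 1] for f in atlas_forests], axis=0)
    prediction = (prob.reshape(shape) > 0.5).astype(int)
    print("pixel agreement with ground truth:", (prediction == target_truth).mean())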

  18. 32 CFR 34.41 - Monitoring and reporting program and financial performance.

    Science.gov (United States)

    2010-07-01

    32 CFR 34.41 (Title 32, National Defense; Department of Defense, Office of the Secretary of Defense; Post-award Requirements, Reports and Records), § 34.41 "Monitoring and reporting program and financial performance," as revised 2010-07-01.

  19. Upgrades of the ATLAS Muon Spectrometer with sMDT Chambers

    CERN Document Server

    Ferretti, Claudio; The ATLAS collaboration

    2015-01-01

    With half the drift-tube diameter of the Monitored Drift Tube (MDT) chambers of the ATLAS muon spectrometer and otherwise unchanged operating parameters, small-diameter Muon Drift Tube (sMDT) chambers provide an order of magnitude higher rate capability and can be installed in detector regions where MDT chambers do not fit. The chamber assembly time has been reduced by a factor of seven to one working day and the sense wire positioning accuracy improved by a factor of two to better than ten microns. Two sMDT chambers have been installed in ATLAS in 2014 to improve the momentum resolution in the barrel part of the spectrometer. The construction of an additional twelve chambers covering the feet regions of the ATLAS detector has started. It will be followed by the replacement of the MDT chambers at the ends of the barrel inner layer by sMDTs, improving the performance at the high expected background rates and providing space for additional RPC trigger chambers.

  20. Upgrades of the ATLAS Muon Spectrometer with sMDT Chambers

    CERN Document Server

    Ferretti, C

    2016-01-01

    With half the drift-tube diameter of the Monitored Drift Tube (MDT) chambers of the ATLAS muon spectrometer and otherwise unchanged operating parameters, small-diameter Muon Drift Tube (sMDT) chambers provide an order of magnitude higher rate capability and can be installed in detector regions where MDT chambers do not fit. The chamber assembly time has been reduced by a factor of seven to one working day and the sense wire positioning accuracy improved by a factor of two to better than ten microns. Two sMDT chambers have been installed in ATLAS in 2014 to improve the momentum resolution in the barrel part of the spectrometer. The construction of an additional twelve chambers covering the feet regions of the ATLAS detector has started. It will be followed by the replacement of the MDT chambers at the ends of the barrel inner layer by sMDTs, improving the performance at the high expected background rates and providing space for additional RPC trigger chambers.

  1. Operation of the Upgraded ATLAS Level-1 Central Trigger System

    CERN Document Server

    Glatzer, Julian Maximilian Volker; The ATLAS collaboration

    2015-01-01

    The ATLAS Level-1 Central Trigger (L1CT) system is a central part of ATLAS data-taking and has undergone a major upgrade for Run 2 of the LHC, in order to cope with the expected increase of instantaneous luminosity of a factor of 2 with respect to Run 1. The upgraded hardware offers more flexibility in the trigger decisions thanks to double the number of trigger inputs and usable trigger channels. It also provides an interface to the new topological trigger system. Operationally - particularly useful for commissioning, calibration and test runs - it allows concurrent running of up to 3 different sub-detector combinations. In this contribution, we give an overview of the operational software framework of the L1CT system with particular emphasis on the configuration, controls and monitoring aspects. The software framework allows a consistent configuration with respect to the ATLAS experiment and the LHC machine, upstream and downstream trigger processors, and the data acquisition. Trigger and dead-time rates are m...

  2. The Savannah River Site's Groundwater Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    1992-08-03

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted during the first quarter of 1992. It includes the analytical data, field data, data review, quality control, and other documentation for this program; provides a record of the program's activities; and serves as an official document of the analytical results.

  3. Silicon strip detectors for the ATLAS upgrade

    CERN Document Server

    Gonzalez Sevilla, S; The ATLAS collaboration

    2011-01-01

    The Large Hadron Collider at CERN will extend its current physics program by increasing the peak luminosity by one order of magnitude. For ATLAS, one of the two general-purpose experiments at the LHC, an upgrade scenario will imply the complete replacement of its internal tracker due to the harsh conditions in terms of particle rates and radiation doses. New radiation-hard prototype n-in-p silicon sensors have been produced for the short-strip region of the future ATLAS tracker. The sensors have been irradiated up to the fluences expected at the high-luminosity LHC. This paper summarizes recent results on the performance of the irradiated n-in-p detectors.

  4. Establishing monitoring programs for travel time reliability.

    Science.gov (United States)

    2014-01-01

    Within the second Strategic Highway Research Program (SHRP 2), Project L02 focused on creating a suite of methods by which transportation agencies could monitor and evaluate travel time reliability. Creation of the methods also produced an improved u...

  5. Active sites environmental monitoring program FY 1997 annual report

    International Nuclear Information System (INIS)

    Morrissey, C.M.; Marshall, D.S.; Cunningham, G.R.

    1998-03-01

    This report summarizes the activities conducted by the Active Sites Environmental Monitoring Program (ASEMP) from October 1996 through September 1997. The purpose of the program is to provide early detection and performance monitoring at active low-level waste (LLW) disposal sites in Solid Waste Storage Area (SWSA) 6 and transuranic (TRU) waste storage sites in SWSA 5 North. This report continues a series of annual and semiannual reports that present the results of ASEMP monitoring activities. This report details monitoring results for fiscal year (FY) 1997 from (1) SWSA 6, including the Interim Waste Management Facility (IWMF) and the Hillcut Disposal Test Facility (HDTF), and (2) the TRU-waste storage areas in SWSA 5 North. This report presents a summary of the methodology used to gather data for each major area along with the FY 1997 results. Figures referenced in the text are found in Appendix A and data tables are presented in Appendix B.

  6. Oak Ridge Y-12 Plant biological monitoring and abatement program (BMAP) plan

    Energy Technology Data Exchange (ETDEWEB)

    Adams, S.M.; Brandt, C.C.; Cicerone, D.S. [and others]

    1998-02-01

    The proposed Biological Monitoring and Abatement Program (BMAP) for East Fork Poplar Creek (EFPC) at the Oak Ridge Y-12 Plant, as described, will be conducted for the duration of the National Pollutant Discharge Elimination System permit issued for the Y-12 Plant on April 28, 1995, and which became effective July 1, 1995. The basic approach to biological monitoring used in this program was developed by the staff in the Environmental Sciences Division at the Oak Ridge National Laboratory at the request of Y-12 Plant personnel. The proposed BMAP plan is based on results of biological monitoring conducted since 1985. Details of the specific procedures used in the current routine monitoring program are provided, but experimental designs for future studies are described in less detail. The overall strategy used in developing this plan was, and continues to be, to use the results obtained from each task to define the scope of future monitoring efforts. Such efforts may require more intensive sampling than initially proposed in some areas or a reduction in sampling intensity in others. By using the results of previous monitoring efforts to define the current program and to guide them in the development of future studies, an effective integrated monitoring program has been developed to assess the impacts of the Y-12 Plant operation on the biota of EFPC and to document the ecological effects of remedial actions.

  7. Oak Ridge Y-12 Plant biological monitoring and abatement program (BMAP) plan

    International Nuclear Information System (INIS)

    Adams, S.M.; Brandt, C.C.; Cicerone, D.S.

    1998-02-01

    The proposed Biological Monitoring and Abatement Program (BMAP) for East Fork Poplar Creek (EFPC) at the Oak Ridge Y-12 Plant, as described, will be conducted for the duration of the National Pollutant Discharge Elimination System permit issued for the Y-12 Plant on April 28, 1995, and which became effective July 1, 1995. The basic approach to biological monitoring used in this program was developed by the staff in the Environmental Sciences Division at the Oak Ridge National Laboratory at the request of Y-12 Plant personnel. The proposed BMAP plan is based on results of biological monitoring conducted since 1985. Details of the specific procedures used in the current routine monitoring program are provided, but experimental designs for future studies are described in less detail. The overall strategy used in developing this plan was, and continues to be, to use the results obtained from each task to define the scope of future monitoring efforts. Such efforts may require more intensive sampling than initially proposed in some areas or a reduction in sampling intensity in others. By using the results of previous monitoring efforts to define the current program and to guide them in the development of future studies, an effective integrated monitoring program has been developed to assess the impacts of the Y-12 Plant operation on the biota of EFPC and to document the ecological effects of remedial actions

  8. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S

    2005-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: ATLAS Software Week Plenary, 6-10 December 2004; North American ATLAS Physics Workshop (Tucson), 20-21 December 2004 (17 talks); Physics Analysis Tools Tutorial (Tucson), 19 December 2004; Full Chain Tutorial, 21 September 2004; ATLAS Plenary Sessions, 17-18 February 2005 (17 talks). Coming soon: ATLAS Tutorial on Electroweak Physics, 14 February 2005; Software Workshop, 21-22 February 2005. Browse WLAP for all ATLAS lectures.

  9. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    Science.gov (United States)

    Ritsch, E.; Atlas Collaboration

    2014-06-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently accounts for the largest share of the computing resources used by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large-scale Monte Carlo production in the ATLAS experiment for Run 2 and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  10. The Community Environmental Monitoring Program in the 21st Century: The Evolution of a Monitoring Network

    International Nuclear Information System (INIS)

    Hartwell, W.T.; Tappen, J.; Karr, L.

    2007-01-01

    This paper focuses on the evolution of the various operational aspects of the Community Environmental Monitoring Program (CEMP) network following the transfer of program administration from the U.S. Environmental Protection Agency (EPA) to the Desert Research Institute (DRI) of the Nevada System of Higher Education in 1999-2000. The CEMP consists of a network of 29 fixed radiation and weather monitoring stations located in Nevada, Utah, and California. Its mission is to involve stakeholders directly in monitoring for airborne radiological releases to the off site environment as a result of past or ongoing activities on the Nevada Test Site (NTS) and to make data as transparent and accessible to the general public as feasible. At its inception in 1981, the CEMP was a cooperative project of the U.S. Department of Energy (DOE), DRI, and EPA. In 1999-2000, technical administration of the CEMP transitioned from EPA to DRI. Concurrent with and subsequent to this transition, station and program operations underwent significant enhancements that furthered the mission of the program. These enhancements included the addition of a full suite of meteorological instrumentation, state-of-the-art electronic data collectors, on-site displays, and communications hardware. A public website was developed. Finally, the DRI developed a mobile monitoring station that can be operated entirely on solar power in conjunction with a deep-cell battery, and includes all meteorological sensors and a pressurized ion chamber for detecting background gamma radiation. Final station configurations have resulted in the creation of a platform that is well suited for use as an in-field multi-environment test-bed for prototype environmental sensors and in interfacing with other scientific and educational programs. Recent and near-future collaborators have included federal, state, and local agencies in both the government and private sectors. The CEMP also serves as a model for other programs wishing to

  11. Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.

    Science.gov (United States)

    Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku

    2017-07-01

    Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%. Copyright © 2017 Elsevier B.V. All rights reserved.
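
    For reference, the two overlap scores quoted above can be computed from a predicted and a ground-truth binary mask as in the short sketch below; the toy masks are invented.

    # Jaccard index and Dice coefficient on toy binary masks.
    import numpy as np

    def jaccard_and_dice(pred, truth):
        """Return (Jaccard, Dice) overlap scores for two boolean masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        jaccard = intersection / union
        dice = 2.0 * intersection / (pred.sum() + truth.sum())
        return jaccard, dice

    pred  = np.zeros((10, 10), dtype=bool); pred[2:7, 2:7] = True
    truth = np.zeros((10, 10), dtype=bool); truth[3:8, 3:8] = True
    j, d = jaccard_and_dice(pred, truth)
    print(f"Jaccard = {j:.3f}, Dice = {d:.3f}")   # note Dice = 2J / (1 + J)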

  12. ATLAS people can run!

    CERN Multimedia

    Claudia Marcelloni de Oliveira; Pauline Gagnon

    It must be all the training we are getting every day, running around trying to get everything ready for the start of the LHC next year. This year, the ATLAS runners were in fine form and came in force. Nine ATLAS teams signed up for the 37th Annual CERN Relay Race with six runners per team. Under a blasting sun on Wednesday 23rd May 2007, each team covered the distances of 1000m, 800m, 800m, 500m, 500m and 300m taking the runners around the whole Meyrin site, hills included. A small reception took place in the ATLAS secretariat a week later to award the ATLAS Cup to the best ATLAS team. For the details on this complex calculation which takes into account the age of each runner, their gender and the color of their shoes, see the July 2006 issue of ATLAS e-news. The ATLAS Running Athena Team, the only all-women team enrolled this year, won the much coveted ATLAS Cup for the second year in a row. In fact, they are so good that Peter Schmid and Patrick Fassnacht are wondering about reducing the women's bonus in...

  13. Evaluation of nuclear power plant environmental impact prediction, based on monitoring programs. Summary and recommendations

    International Nuclear Information System (INIS)

    Gore, K.L.; Thomas, J.M.; Kannberg, L.D.; Watson, D.G.

    1977-02-01

    An evaluation of the effectiveness of non-radiological environmental monitoring programs is presented. The monitoring programs for the Monticello, Haddam Neck, and Millstone Nuclear Generating Plants are discussed. Recommendations for improvements in monitoring programs are presented.

  14. Load monitoring program: Status and results report. Volume 1: Summary

    International Nuclear Information System (INIS)

    1994-06-01

    British Columbia Hydro conducts a monitoring program to provide information on customer needs and values for planning; to measure customer response, energy savings impacts, and load shape impacts due to changes in rate level, rate restructuring, and Power Smart programs; to estimate end-use consumption and load shapes by customer class; and to provide load information for distribution and system load studies. To achieve these objectives, the monitoring program tracks the characteristics and energy use patterns of a sample of BC Hydro residential, commercial, and industrial customers over a period of several years. The entire sample will be surveyed periodically to obtain information on changes in building characteristics, equipment stocks, and energy-use behavior and attitudes. A report is provided on the status of monitoring program activities and some results obtained in 1993/94. For the residential sector, the results include typical load profiles, end-user demographics, and extent of electric space heating and water heating. In the commercial sector, customers were divided into two main groups. The large-building group was relatively well organized in terms of energy needs and participated in Power Smart programs. The small-building group was relatively energy-inefficient and relatively unaware of Power Smart programs. 43 figs., 15 tabs

  15. A framework for evaluating and designing citizen science programs for natural resources monitoring.

    Science.gov (United States)

    Chase, Sarah K; Levine, Arielle

    2016-06-01

    We present a framework of resource characteristics critical to the design and assessment of citizen science programs that monitor natural resources. To develop the framework we reviewed 52 citizen science programs that monitored a wide range of resources and provided insights into what resource characteristics are most conducive to developing citizen science programs and how resource characteristics may constrain the use or growth of these programs. We focused on 4 types of resource characteristics: biophysical and geographical, management and monitoring, public awareness and knowledge, and social and cultural characteristics. We applied the framework to 2 programs, the Tucson (U.S.A.) Bird Count and the Maui (U.S.A.) Great Whale Count. We found that resource characteristics such as accessibility, diverse institutional involvement in resource management, and social or cultural importance of the resource affected program endurance and success. However, the relative influence of each characteristic was in turn affected by goals of the citizen science programs. Although the goals of public engagement and education sometimes complemented the goal of collecting reliable data, in many cases trade-offs must be made between these 2 goals. Program goals and priorities ultimately dictate the design of citizen science programs, but for a program to endure and successfully meet its goals, program managers must consider the diverse ways that the nature of the resource being monitored influences public participation in monitoring. © 2016 Society for Conservation Biology.

  16. The Savannah River Site's groundwater monitoring program

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-18

    This report summarizes the Savannah River Site (SRS) groundwater monitoring program conducted by EPD/EMS in the first quarter of 1991. It includes the analytical data, field data, data review, quality control, and other documentation for this program; provides a record of the program's activities and rationale; and serves as an official document of the analytical results.

  17. Recent ATLAS Articles on WLAP

    CERN Multimedia

    J. Herr

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: ATLAS Physics Workshop, 6-11 June 2005; June 2005 ATLAS Week Plenary Session. Browse WLAP for all ATLAS lectures.

  18. Process monitoring using a quality and technical surveillance program

    International Nuclear Information System (INIS)

    Rafferty, C.A.

    1995-01-01

    The purpose of process monitoring using a quality and technical surveillance program was to help ensure that manufactured clad vent sets fully met technical and quality requirements established by the manufacturer and the customer and that line and program management were immediately alerted if any aspect of the manufacturing activities drifted out of acceptable limits. The quality and technical surveillance program provided a planned, scheduled approach to monitor key processes and documentation and certification systems to prevent noncompliances or any manufacturing discrepancies. These surveillances illuminated potential problem areas early enough to permit timely corrective actions to reverse negative trends that, if left uncorrected, could have resulted in deficient hardware. Significant schedule and cost impacts were eliminated. copyright 1995 American Institute of Physics

  19. The ATLAS Data Acquisition and High Level Trigger system

    International Nuclear Information System (INIS)

    2016-01-01

    This paper describes the data acquisition and high level trigger system of the ATLAS experiment at the Large Hadron Collider at CERN, as deployed during Run 1. Data flow as well as control, configuration and monitoring aspects are addressed. An overview of the functionality of the system and of its performance is presented and design choices are discussed.

  20. Operational Experience and Performance with the ATLAS Pixel detector

    CERN Document Server

    Yang, Hongtao; The ATLAS collaboration

    2018-01-01

    In this presentation, I will discuss the operation of the ATLAS Pixel Detector during Run 2 proton-proton data-taking at √s = 13 TeV in 2017. The topics to be covered include: 1) the bandwidth issue and how it is mitigated through a readout upgrade and threshold adjustments; 2) the auto-corrective actions; and 3) monitoring of radiation effects.