WorldWideScience

Sample records for highly reliable trigger

  1. Reliability model analysis and primary experimental evaluation of laser triggered pulse trigger

    International Nuclear Information System (INIS)

    Chen Debiao; Yang Xinglin; Li Yuan; Li Jin

    2012-01-01

    A high-performance pulse trigger can enhance the performance and stability of the PPS. It is necessary to evaluate the reliability of the LTGS pulse trigger, so we established a reliability analysis model of this pulse trigger based on the CARMES software; the resulting reliability evaluation accords with the statistical results. (authors)
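
    As an illustration of the kind of reliability-block calculation such an analysis rests on, the sketch below evaluates a series model with exponentially distributed component lifetimes. The component names and failure rates are hypothetical assumptions for illustration, not values from the paper, and CARMES itself is a commercial tool whose internals are not shown here.

    ```python
    # Minimal series reliability-block model of a pulse trigger.
    # Failure rates (per hour) are illustrative assumptions only.
    import math

    failure_rates = {           # lambda_i, failures per hour (hypothetical)
        "laser diode": 2e-6,
        "spark gap":   5e-6,
        "HV supply":   1e-6,
    }

    def series_reliability(rates, hours):
        """R(t) = prod_i exp(-lambda_i * t) for components in series."""
        return math.exp(-sum(rates.values()) * hours)

    print(f"R(1000 h) = {series_reliability(failure_rates, 1000):.4f}")
    ```

    In a series model a single component failure fails the whole trigger, so the system failure rate is simply the sum of the component rates.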

  2. Reliable on-line storage in the ALICE High-Level Trigger

    Energy Technology Data Exchange (ETDEWEB)

    Kalcher, Sebastian; Lindenstruth, Volker [Kirchhoff Institute of Physics, University of Heidelberg (Germany)]

    2009-07-01

    The on-line disk capacity within large computing clusters such as the one used in the ALICE High-Level Trigger (HLT) is often left unused due to the inherent unreliability of the disks involved. With currently available hard drive capacities, the total on-line capacity can be significant compared to the storage requirements of present high energy physics experiments. In this talk we report on ClusterRAID, a reliable, distributed mass storage system which makes it possible to harness the (often unused) disk capacities of large cluster installations. The key paradigm of this system is to transform the local hard drive into a reliable device. It provides adjustable fault tolerance by utilizing sophisticated error-correcting codes. To reduce the cost of coding and decoding operations, the use of modern graphics processing units as co-processors has been investigated. The utilization of low-overhead, high-performance communication networks has also been examined. A prototype setup of the system exists within the HLT with 90 TB gross capacity.
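
    The simplest member of the error-correcting-code family underlying such systems is single-parity XOR coding (RAID-5 style): one parity block allows any single lost data block to be rebuilt. The sketch below is illustrative only; ClusterRAID itself uses stronger codes with adjustable fault tolerance, and the block contents here are made up.

    ```python
    # Single-parity XOR erasure coding: the simplest relative of the
    # error-correcting codes used by distributed reliable storage.

    def make_parity(blocks):
        """XOR all equal-sized data blocks together into one parity block."""
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                parity[i] ^= byte
        return bytes(parity)

    def recover(surviving_blocks, parity):
        """Rebuild the single missing block from the survivors plus parity."""
        return make_parity(list(surviving_blocks) + [parity])

    data = [b"node-one", b"node-two", b"node-tre"]   # equal-sized blocks
    p = make_parity(data)
    lost = data[1]
    rebuilt = recover([data[0], data[2]], p)
    print(rebuilt == lost)   # parity restores the lost block
    ```

    Because XOR is its own inverse, XOR-ing the survivors with the parity block cancels every block except the missing one.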

  3. Reliability of physical examination for diagnosis of myofascial trigger points: a systematic review of the literature.

    Science.gov (United States)

    Lucas, Nicholas; Macaskill, Petra; Irwig, Les; Moran, Robert; Bogduk, Nikolai

    2009-01-01

    Trigger points are promoted as an important cause of musculoskeletal pain. There is no accepted reference standard for the diagnosis of trigger points, and data on the reliability of physical examination for trigger points are conflicting. To systematically review the literature on the reliability of physical examination for the diagnosis of trigger points. MEDLINE, EMBASE, and other sources were searched for articles reporting the reliability of physical examination for trigger points. Included studies were evaluated for their quality and applicability, and reliability estimates were extracted and reported. Nine studies were eligible for inclusion. None satisfied all quality and applicability criteria. No study specifically reported reliability for the identification of the location of active trigger points in the muscles of symptomatic participants. Reliability estimates varied widely for each diagnostic sign, for each muscle, and across each study. Reliability estimates were generally higher for subjective signs such as tenderness (kappa range, 0.22 to 1.00) and pain reproduction (kappa range, 0.57 to 1.00), and lower for objective signs such as the taut band (kappa range, -0.08 to 0.75) and local twitch response (kappa range, -0.05 to 0.57). No study to date has reported the reliability of trigger point diagnosis according to the currently proposed criteria. On the basis of the limited number of studies available, and significant problems with their design, reporting, statistical integrity, and clinical applicability, physical examination cannot currently be recommended as a reliable test for the diagnosis of trigger points. The reliability of trigger point diagnosis needs to be further investigated with studies of high quality that use current diagnostic criteria in clinically relevant patients.
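
    The kappa values quoted above measure inter-examiner agreement corrected for chance. As a generic illustration of the statistic (not code from the review), the sketch below computes Cohen's kappa for two examiners' binary findings on hypothetical participants.

    ```python
    # Cohen's kappa: chance-corrected agreement between two raters.
    # Ratings below are hypothetical, for illustration only.

    def cohens_kappa(rater_a, rater_b):
        """kappa = (p_observed - p_chance) / (1 - p_chance)."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        labels = set(rater_a) | set(rater_b)
        p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        p_chance = sum(
            (rater_a.count(label) / n) * (rater_b.count(label) / n)
            for label in labels
        )
        return (p_observed - p_chance) / (1 - p_chance)

    # Hypothetical taut-band findings ("+" = present) for ten participants:
    a = ["+", "+", "-", "-", "+", "-", "+", "-", "-", "+"]
    b = ["+", "-", "-", "-", "+", "-", "+", "-", "+", "+"]
    print(round(cohens_kappa(a, b), 2))   # 0.6: moderate agreement
    ```

    A kappa of 0 means agreement no better than chance, 1 means perfect agreement, and negative values (as in some taut-band studies above) mean agreement worse than chance.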

  4. Study on application of a high-speed trigger-type SFCL (TSFCL) for interconnection of power systems with different reliabilities

    International Nuclear Information System (INIS)

    Kim, Hye Ji; Yoon, Yong Tae

    2016-01-01

    Highlights: • Application of TSFCL to interconnect systems with different reliabilities is proposed. • TSFCL protects a grid by preventing detrimental effects from being delivered through the interconnection line. • A high-speed TSFCL with high impedance for transmission systems is required to be developed. - Abstract: Interconnection of power systems is one effective way to improve power supply reliability. However, differences in the reliability of the individual power systems pose a major obstacle to stable interconnection, since after interconnection a high-reliability system is affected by frequent faults on the low-reliability side. Several power system interconnection methods, such as the back-to-back method and the installation of either transformers or series reactors, have been investigated to counteract the damage caused by faults in neighboring systems. However, these methods are uneconomical and require complex operational management plans. In this work, a high-speed trigger-type superconducting fault current limiter (TSFCL) with a large impedance is proposed as a solution to maintain reliability and power quality when a high-reliability power system is interconnected with a low-reliability power system. Through analysis of the reliability index for the numerical examples obtained from a PSCAD/EMTDC simulator, a high-speed TSFCL with a large impedance is confirmed to be effective for the interconnection between power systems with different reliabilities.
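
    The operating principle of a trigger-type limiter can be sketched with a toy steady-state model: once the measured current exceeds a trigger threshold, a large limiting impedance is switched into the line, capping the fault current seen by the neighboring system. All numbers below are illustrative assumptions, not values from the PSCAD/EMTDC study.

    ```python
    # Toy model of a trigger-type fault current limiter (TSFCL).
    # System parameters are hypothetical, for illustration only.

    V_SOURCE = 154e3 / 3**0.5      # phase voltage, V (assumed 154 kV system)
    Z_LINE = 4.0                   # normal-path impedance, ohm
    Z_LIMIT = 40.0                 # TSFCL limiting impedance, ohm
    I_TRIGGER = 5e3                # trigger threshold, A

    def line_current(z_fault, sfcl_active):
        """Steady-state current with or without the limiter inserted."""
        z_total = Z_LINE + z_fault + (Z_LIMIT if sfcl_active else 0.0)
        return V_SOURCE / z_total

    # Fault near the interconnection point (fault impedance ~ 1 ohm):
    i_fault = line_current(1.0, sfcl_active=False)
    sfcl_trips = i_fault > I_TRIGGER
    i_limited = line_current(1.0, sfcl_active=sfcl_trips)

    print(f"prospective fault current: {i_fault / 1e3:.1f} kA")
    print(f"limited fault current    : {i_limited / 1e3:.1f} kA")
    ```

    Under normal load the limiter stays out of circuit and adds no impedance; it is only the fault-level current that triggers insertion, which is why a large limiting impedance does not degrade normal operation.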

  5. A high-voltage triggered pseudospark discharge experiment

    International Nuclear Information System (INIS)

    Ramaswamy, K.; Destler, W.W.; Rodgers, J.

    1996-01-01

    The design and execution of a pulsed high-voltage (350–400 keV) triggered pseudospark discharge experiment is reported. Experimental studies were carried out to obtain an optimal design for stable and reliable pseudospark operation in a high-voltage regime (≳350 kV). Experiments were performed to determine the most suitable fill gas for electron-beam formation. The pseudospark discharge is initiated by a trigger mechanism involving a flashover between the trigger electrode and hollow cathode housing. Experimental results characterizing the electron-beam energy using the range-energy method are reported. Source size imaging was carried out using an x-ray pinhole camera and a novel technique using Mylar as a witness plate. It was experimentally determined that strong pinching occurred later in time and was associated with the lower-energy electrons. copyright 1996 American Institute of Physics

  6. High reliability low jitter 80 kV pulse generator

    International Nuclear Information System (INIS)

    Savage, Mark Edward; Stoltzfus, Brian Scott

    2009-01-01

    Switching can be considered to be the essence of pulsed power. Time-accurate switch/trigger systems with low inductance are useful in many applications. This article describes a unique switch geometry coupled with a low-inductance capacitive energy store. The system provides a fast-rising high voltage pulse into a low impedance load. It can be challenging to generate high voltage (more than 50 kilovolts) into impedances less than 10 Ω from a low voltage control signal with a fast rise time and high temporal accuracy. The required power amplification is large, and is usually accomplished with multiple stages. The multiple stages can adversely affect the temporal accuracy and the reliability of the system. In the present application, a highly reliable and low jitter trigger generator was required for the Z pulsed-power facility [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, and J. R. Woodworth, 2007 IEEE Pulsed Power Conference, Albuquerque, NM (IEEE, Piscataway, NJ, 2007), p. 979]. The large investment in each Z experiment demands low prefire probability and low jitter simultaneously. The system described here is based on a 100 kV DC-charged high-pressure spark gap, triggered with an ultraviolet laser. The system uses a single optical path for simultaneously triggering two parallel switches, allowing lower inductance and electrode erosion with a simple optical system. Performance of the system includes 6 ns output rise time into 5.6 Ω, 550 ps one-sigma jitter measured from the 5 V trigger to the high voltage output, and misfire probability less than 10^-4. The design of the system and some key measurements will be shown in the paper. We will discuss the design goals related to high reliability and low jitter.
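
    The "one-sigma jitter" quoted above is the standard deviation of the trigger-to-output delay over repeated shots. The sketch below shows the calculation on hypothetical delay measurements; the numbers are not data from the Z trigger generator.

    ```python
    # One-sigma jitter: standard deviation of trigger-to-output delay
    # across repeated shots. Delay values are hypothetical examples.
    import statistics

    delays_ns = [152.10, 152.65, 151.48, 152.02,
                 151.91, 152.33, 151.77, 152.18]

    mean_delay = statistics.mean(delays_ns)          # systematic delay
    jitter_1sigma = statistics.pstdev(delays_ns)     # shot-to-shot spread

    print(f"mean delay    : {mean_delay:.2f} ns")
    print(f"1-sigma jitter: {jitter_1sigma * 1000:.0f} ps")
    ```

    The mean delay is a fixed offset that can be timed out of an experiment; it is the spread around that mean, the jitter, that limits synchronization accuracy.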

  7. Sub-nanosecond jitter, repetitive impulse generators for high reliability applications

    International Nuclear Information System (INIS)

    Krausse, G.J.; Sarjeant, W.J.

    1981-01-01

    Low jitter, high reliability impulse generator development has recently become increasingly important for nuclear physics and weapons applications. The research and development of very low jitter (< 30 ps), multikilovolt generators for high reliability, minimum maintenance trigger applications, utilizing a new class of high-pressure tetrode thyratrons now commercially available, are described. The overall system design philosophy is described, followed by a detailed analysis of the subsystem component elements. A multi-variable experimental analysis of this new tetrode thyratron was undertaken, in a low-inductance configuration, as a function of externally available parameters. For specific thyratron trigger conditions, rise times of 18 ns into 6.0-Ω loads were achieved at jitters as low as 24 ps. Using this database, an integrated trigger generator system with a solid-state front-end is described in some detail. The generator was developed to serve as the Master Trigger Generator for a large neutrino detector installation at the Los Alamos Meson Physics Facility.

  8. Commissioning of the CMS High-Level Trigger with Cosmic Rays

    CERN Document Server

    Chatrchyan, S; Sirunyan, A M; Adam, W; Arnold, B; Bergauer, H; Bergauer, T; Dragicevic, M; Eichberger, M; Erö, J; Friedl, M; Frühwirth, R; Ghete, V M; Hammer, J; Hänsel, S; Hoch, M; Hörmann, N; Hrubec, J; Jeitler, M; Kasieczka, G; Kastner, K; Krammer, M; Liko, D; Magrans de Abril, I; Mikulec, I; Mittermayr, F; Neuherz, B; Oberegger, M; Padrta, M; Pernicka, M; Rohringer, H; Schmid, S; Schöfbeck, R; Schreiner, T; Stark, R; Steininger, H; Strauss, J; Taurok, A; Teischinger, F; Themel, T; Uhl, D; Wagner, P; Waltenberger, W; Walzel, G; Widl, E; Wulz, C E; Chekhovsky, V; Dvornikov, O; Emeliantchik, I; Litomin, A; Makarenko, V; Marfin, I; Mossolov, V; Shumeiko, N; Solin, A; Stefanovitch, R; Suarez Gonzalez, J; Tikhonov, A; Fedorov, A; Karneyeu, A; Korzhik, M; Panov, V; Zuyeuski, R; Kuchinsky, P; Beaumont, W; Benucci, L; Cardaci, M; De Wolf, E A; Delmeire, E; Druzhkin, D; Hashemi, M; Janssen, X; Maes, T; Mucibello, L; Ochesanu, S; Rougny, R; Selvaggi, M; Van Haevermaet, H; Van Mechelen, P; Van Remortel, N; Adler, V; Beauceron, S; Blyweert, S; D'Hondt, J; De Weirdt, S; Devroede, O; Heyninck, J; Kalogeropoulos, A; Maes, J; Maes, M; Mozer, M U; Tavernier, S; Van Doninck, W; Van Mulders, P; Villella, I; Bouhali, O; Chabert, E C; Charaf, O; Clerbaux, B; De Lentdecker, G; Dero, V; Elgammal, S; Gay, A P R; Hammad, G H; Marage, P E; Rugovac, S; Vander Velde, C; Vanlaer, P; Wickens, J; Grunewald, M; Klein, B; Marinov, A; Ryckbosch, D; Thyssen, F; Tytgat, M; Vanelderen, L; Verwilligen, P; Basegmez, S; Bruno, G; Caudron, J; Delaere, C; Demin, P; Favart, D; Giammanco, A; Grégoire, G; Lemaitre, V; Militaru, O; Ovyn, S; Piotrzkowski, K; Quertenmont, L; Schul, N; Beliy, N; Daubie, E; Alves, G A; Pol, M E; Souza, M H G; Carvalho, W; De Jesus Damiao, D; De Oliveira Martins, C; Fonseca De Souza, S; Mundim, L; Oguri, V; Santoro, A; Silva Do Amaral, S M; Sznajder, A; Fernandez Perez Tomei, T R; Ferreira Dias, M A; Gregores, E M; Novaes, S F; Abadjiev, K; Anguelov, T; Damgov, J; Darmenov, 
N; Dimitrov, L; Genchev, V; Iaydjiev, P; Piperov, S; Stoykova, S; Sultanov, G; Trayanov, R; Vankov, I; Dimitrov, A; Dyulendarova, M; Kozhuharov, V; Litov, L; Marinova, E; Mateev, M; Pavlov, B; Petkov, P; Toteva, Z; Chen, G M; Chen, H S; Guan, W; Jiang, C H; Liang, D; Liu, B; Meng, X; Tao, J; Wang, J; Wang, Z; Xue, Z; Zhang, Z; Ban, Y; Cai, J; Ge, Y; Guo, S; Hu, Z; Mao, Y; Qian, S J; Teng, H; Zhu, B; Avila, C; Baquero Ruiz, M; Carrillo Montoya, C A; Gomez, A; Gomez Moreno, B; Ocampo Rios, A A; Osorio Oliveros, A F; Reyes Romero, D; Sanabria, J C; Godinovic, N; Lelas, K; Plestina, R; Polic, D; Puljak, I; Antunovic, Z; Dzelalija, M; Brigljevic, V; Duric, S; Kadija, K; Morovic, S; Fereos, R; Galanti, M; Mousa, J; Papadakis, A; Ptochos, F; Razis, P A; Tsiakkouri, D; Zinonos, Z; Hektor, A; Kadastik, M; Kannike, K; Müntel, M; Raidal, M; Rebane, L; Anttila, E; Czellar, S; Härkönen, J; Heikkinen, A; Karimäki, V; Kinnunen, R; Klem, J; Kortelainen, M J; Lampén, T; Lassila-Perini, K; Lehti, S; Lindén, T; Luukka, P; Mäenpää, T; Nysten, J; Tuominen, E; Tuominiemi, J; Ungaro, D; Wendland, L; Banzuzi, K; Korpela, A; Tuuva, T; Nedelec, P; Sillou, D; Besancon, M; Chipaux, R; Dejardin, M; Denegri, D; Descamps, J; Fabbro, B; Faure, J L; Ferri, F; Ganjour, S; Gentit, F X; Givernaud, A; Gras, P; Hamel de Monchenault, G; Jarry, P; Lemaire, M C; Locci, E; Malcles, J; Marionneau, M; Millischer, L; Rander, J; Rosowsky, A; Rousseau, D; Titov, M; Verrecchia, P; Baffioni, S; Bianchini, L; Bluj, M; Busson, P; Charlot, C; Dobrzynski, L; Granier de Cassagnac, R; Haguenauer, M; Miné, P; Paganini, P; Sirois, Y; Thiebaux, C; Zabi, A; Agram, J L; Besson, A; Bloch, D; Bodin, D; Brom, J M; Conte, E; Drouhin, F; Fontaine, J C; Gelé, D; Goerlach, U; Gross, L; Juillot, P; Le Bihan, A C; Patois, Y; Speck, J; Van Hove, P; Baty, C; Bedjidian, M; Blaha, J; Boudoul, G; Brun, H; Chanon, N; Chierici, R; Contardo, D; Depasse, P; Dupasquier, T; El Mamouni, H; Fassi, F; Fay, J; Gascon, S; Ille, B; Kurca, T; Le 
Grand, T; Lethuillier, M; Lumb, N; Mirabito, L; Perries, S; Vander Donckt, M; Verdier, P; Djaoshvili, N; Roinishvili, N; Roinishvili, V; Amaglobeli, N; Adolphi, R; Anagnostou, G; Brauer, R; Braunschweig, W; Edelhoff, M; Esser, H; Feld, L; Karpinski, W; Khomich, A; Klein, K; Mohr, N; Ostaptchouk, A; Pandoulas, D; Pierschel, G; Raupach, F; Schael, S; Schultz von Dratzig, A; Schwering, G; Sprenger, D; Thomas, M; Weber, M; Wittmer, B; Wlochal, M; Actis, O; Altenhöfer, G; Bender, W; Biallass, P; Erdmann, M; Fetchenhauer, G; Frangenheim, J; Hebbeker, T; Hilgers, G; Hinzmann, A; Hoepfner, K; Hof, C; Kirsch, M; Klimkovich, T; Kreuzer, P; Lanske, D; Merschmeyer, M; Meyer, A; Philipps, B; Pieta, H; Reithler, H; Schmitz, S A; Sonnenschein, L; Sowa, M; Steggemann, J; Szczesny, H; Teyssier, D; Zeidler, C; Bontenackels, M; Davids, M; Duda, M; Flügge, G; Geenen, H; Giffels, M; Haj Ahmad, W; Hermanns, T; Heydhausen, D; Kalinin, S; Kress, T; Linn, A; Nowack, A; Perchalla, L; Poettgens, M; Pooth, O; Sauerland, P; Stahl, A; Tornier, D; Zoeller, M H; Aldaya Martin, M; Behrens, U; Borras, K; Campbell, A; Castro, E; Dammann, D; Eckerlin, G; Flossdorf, A; Flucke, G; Geiser, A; Hatton, D; Hauk, J; Jung, H; Kasemann, M; Katkov, I; Kleinwort, C; Kluge, H; Knutsson, A; Kuznetsova, E; Lange, W; Lohmann, W; Mankel, R; Marienfeld, M; Meyer, A B; Miglioranzi, S; Mnich, J; Ohlerich, M; Olzem, J; Parenti, A; Rosemann, C; Schmidt, R; Schoerner-Sadenius, T; Volyanskyy, D; Wissing, C; Zeuner, W D; Autermann, C; Bechtel, F; Draeger, J; Eckstein, D; Gebbert, U; Kaschube, K; Kaussen, G; Klanner, R; Mura, B; Naumann-Emme, S; Nowak, F; Pein, U; Sander, C; Schleper, P; Schum, T; Stadie, H; Steinbrück, G; Thomsen, J; Wolf, R; Bauer, J; Blüm, P; Buege, V; Cakir, A; Chwalek, T; De Boer, W; Dierlamm, A; Dirkes, G; Feindt, M; Felzmann, U; Frey, M; Furgeri, A; Gruschke, J; Hackstein, C; Hartmann, F; Heier, S; Heinrich, M; Held, H; Hirschbuehl, D; Hoffmann, K H; Honc, S; Jung, C; Kuhr, T; Liamsuwan, T; Martschei, 
D; Mueller, S; Müller, Th; Neuland, M B; Niegel, M; Oberst, O; Oehler, A; Ott, J; Peiffer, T; Piparo, D; Quast, G; Rabbertz, K; Ratnikov, F; Ratnikova, N; Renz, M; Saout, C; Sartisohn, G; Scheurer, A; Schieferdecker, P; Schilling, F P; Schott, G; Simonis, H J; Stober, F M; Sturm, P; Troendle, D; Trunov, A; Wagner, W; Wagner-Kuhr, J; Zeise, M; Zhukov, V; Ziebarth, E B; Daskalakis, G; Geralis, T; Karafasoulis, K; Kyriakis, A; Loukas, D; Markou, A; Markou, C; Mavrommatis, C; Petrakou, E; Zachariadou, A; Gouskos, L; Katsas, P; Panagiotou, A; Evangelou, I; Kokkas, P; Manthos, N; Papadopoulos, I; Patras, V; Triantis, F A; Bencze, G; Boldizsar, L; Debreczeni, G; Hajdu, C; Hernath, S; Hidas, P; Horvath, D; Krajczar, K; Laszlo, A; Patay, G; Sikler, F; Toth, N; Vesztergombi, G; Beni, N; Christian, G; Imrek, J; Molnar, J; Novak, D; Palinkas, J; Szekely, G; Szillasi, Z; Tokesi, K; Veszpremi, V; Kapusi, A; Marian, G; Raics, P; Szabo, Z; Trocsanyi, Z L; Ujvari, B; Zilizi, G; Bansal, S; Bawa, H S; Beri, S B; Bhatnagar, V; Jindal, M; Kaur, M; Kaur, R; Kohli, J M; Mehta, M Z; Nishu, N; Saini, L K; Sharma, A; Singh, A; Singh, J B; Singh, S P; Ahuja, S; Arora, S; Bhattacharya, S; Chauhan, S; Choudhary, B C; Gupta, P; Jain, S; Jha, M; Kumar, A; Ranjan, K; Shivpuri, R K; Srivastava, A K; Choudhury, R K; Dutta, D; Kailas, S; Kataria, S K; Mohanty, A K; Pant, L M; Shukla, P; Topkar, A; Aziz, T; Guchait, M; Gurtu, A; Maity, M; Majumder, D; Majumder, G; Mazumdar, K; Nayak, A; Saha, A; Sudhakar, K; Banerjee, S; Dugad, S; Mondal, N K; Arfaei, H; Bakhshiansohi, H; Fahim, A; Jafari, A; Mohammadi Najafabadi, M; Moshaii, A; Paktinat Mehdiabadi, S; Rouhani, S; Safarzadeh, B; Zeinali, M; Felcini, M; Abbrescia, M; Barbone, L; Chiumarulo, F; Clemente, A; Colaleo, A; Creanza, D; Cuscela, G; De Filippis, N; De Palma, M; De Robertis, G; Donvito, G; Fedele, F; Fiore, L; Franco, M; Iaselli, G; Lacalamita, N; Loddo, F; Lusito, L; Maggi, G; Maggi, M; Manna, N; Marangelli, B; My, S; Natali, S; Nuzzo, S; 
Papagni, G; Piccolomo, S; Pierro, G A; Pinto, C; Pompili, A; Pugliese, G; Rajan, R; Ranieri, A; Romano, F; Roselli, G; Selvaggi, G; Shinde, Y; Silvestris, L; Tupputi, S; Zito, G; Abbiendi, G; Bacchi, W; Benvenuti, A C; Boldini, M; Bonacorsi, D; Braibant-Giacomelli, S; Cafaro, V D; Caiazza, S S; Capiluppi, P; Castro, A; Cavallo, F R; Codispoti, G; Cuffiani, M; D'Antone, I; Dallavalle, G M; Fabbri, F; Fanfani, A; Fasanella, D; Giacomelli, P; Giordano, V; Giunta, M; Grandi, C; Guerzoni, M; Marcellini, S; Masetti, G; Montanari, A; Navarria, F L; Odorici, F; Pellegrini, G; Perrotta, A; Rossi, A M; Rovelli, T; Siroli, G; Torromeo, G; Travaglini, R; Albergo, S; Costa, S; Potenza, R; Tricomi, A; Tuve, C; Barbagli, G; Broccolo, G; Ciulli, V; Civinini, C; D'Alessandro, R; Focardi, E; Frosali, S; Gallo, E; Genta, C; Landi, G; Lenzi, P; Meschini, M; Paoletti, S; Sguazzoni, G; Tropiano, A; Benussi, L; Bertani, M; Bianco, S; Colafranceschi, S; Colonna, D; Fabbri, F; Giardoni, M; Passamonti, L; Piccolo, D; Pierluigi, D; Ponzio, B; Russo, A; Fabbricatore, P; Musenich, R; Benaglia, A; Calloni, M; Cerati, G B; D'Angelo, P; De Guio, F; Farina, F M; Ghezzi, A; Govoni, P; Malberti, M; Malvezzi, S; Martelli, A; Menasce, D; Miccio, V; Moroni, L; Negri, P; Paganoni, M; Pedrini, D; Pullia, A; Ragazzi, S; Redaelli, N; Sala, S; Salerno, R; Tabarelli de Fatis, T; Tancini, V; Taroni, S; Buontempo, S; Cavallo, N; Cimmino, A; De Gruttola, M; Fabozzi, F; Iorio, A O M; Lista, L; Lomidze, D; Noli, P; Paolucci, P; Sciacca, C; Azzi, P; Bacchetta, N; Barcellan, L; Bellan, P; Bellato, M; Benettoni, M; Biasotto, M; Bisello, D; Borsato, E; Branca, A; Carlin, R; Castellani, L; Checchia, P; Conti, E; Dal Corso, F; De Mattia, M; Dorigo, T; Dosselli, U; Fanzago, F; Gasparini, F; Gasparini, U; Giubilato, P; Gonella, F; Gresele, A; Gulmini, M; Kaminskiy, A; Lacaprara, S; Lazzizzera, I; Margoni, M; Maron, G; Mattiazzo, S; Mazzucato, M; Meneghelli, M; Meneguzzo, A T; Michelotto, M; Montecassiano, F; Nespolo, M; 
Passaseo, M; Pegoraro, M; Perrozzi, L; Pozzobon, N; Ronchese, P; Simonetto, F; Toniolo, N; Torassa, E; Tosi, M; Triossi, A; Vanini, S; Ventura, S; Zotto, P; Zumerle, G; Baesso, P; Berzano, U; Bricola, S; Necchi, M M; Pagano, D; Ratti, S P; Riccardi, C; Torre, P; Vicini, A; Vitulo, P; Viviani, C; Aisa, D; Aisa, S; Babucci, E; Biasini, M; Bilei, G M; Caponeri, B; Checcucci, B; Dinu, N; Fanò, L; Farnesini, L; Lariccia, P; Lucaroni, A; Mantovani, G; Nappi, A; Piluso, A; Postolache, V; Santocchia, A; Servoli, L; Tonoiu, D; Vedaee, A; Volpe, R; Azzurri, P; Bagliesi, G; Bernardini, J; Berretta, L; Boccali, T; Bocci, A; Borrello, L; Bosi, F; Calzolari, F; Castaldi, R; Dell'Orso, R; Fiori, F; Foà, L; Gennai, S; Giassi, A; Kraan, A; Ligabue, F; Lomtadze, T; Mariani, F; Martini, L; Massa, M; Messineo, A; Moggi, A; Palla, F; Palmonari, F; Petragnani, G; Petrucciani, G; Raffaelli, F; Sarkar, S; Segneri, G; Serban, A T; Spagnolo, P; Tenchini, R; Tolaini, S; Tonelli, G; Venturi, A; Verdini, P G; Baccaro, S; Barone, L; Bartoloni, A; Cavallari, F; Dafinei, I; Del Re, D; Di Marco, E; Diemoz, M; Franci, D; Longo, E; Organtini, G; Palma, A; Pandolfi, F; Paramatti, R; Pellegrino, F; Rahatlou, S; Rovelli, C; Alampi, G; Amapane, N; Arcidiacono, R; Argiro, S; Arneodo, M; Biino, C; Borgia, M A; Botta, C; Cartiglia, N; Castello, R; Cerminara, G; Costa, M; Dattola, D; Dellacasa, G; Demaria, N; Dughera, G; Dumitrache, F; Graziano, A; Mariotti, C; Marone, M; Maselli, S; Migliore, E; Mila, G; Monaco, V; Musich, M; Nervo, M; Obertino, M M; Oggero, S; Panero, R; Pastrone, N; Pelliccioni, M; Romero, A; Ruspa, M; Sacchi, R; Solano, A; Staiano, A; Trapani, P P; Trocino, D; Vilela Pereira, A; Visca, L; Zampieri, A; Ambroglini, F; Belforte, S; Cossutti, F; Della Ricca, G; Gobbo, B; Penzo, A; Chang, S; Chung, J; Kim, D H; Kim, G N; Kong, D J; Park, H; Son, D C; Bahk, S Y; Song, S; Jung, S Y; Hong, B; Kim, H; Kim, J H; Lee, K S; Moon, D H; Park, S K; Rhee, H B; Sim, K S; Kim, J; Choi, M; Hahn, G; Park, 
I C; Choi, S; Choi, Y; Goh, J; Jeong, H; Kim, T J; Lee, J; Lee, S; Janulis, M; Martisiute, D; Petrov, P; Sabonis, T; Castilla Valdez, H; Sánchez Hernández, A; Carrillo Moreno, S; Morelos Pineda, A; Allfrey, P; Gray, R N C; Krofcheck, D; Bernardino Rodrigues, N; Butler, P H; Signal, T; Williams, J C; Ahmad, M; Ahmed, I; Ahmed, W; Asghar, M I; Awan, M I M; Hoorani, H R; Hussain, I; Khan, W A; Khurshid, T; Muhammad, S; Qazi, S; Shahzad, H; Cwiok, M; Dabrowski, R; Dominik, W; Doroba, K; Konecki, M; Krolikowski, J; Pozniak, K; Romaniuk, Ryszard; Zabolotny, W; Zych, P; Frueboes, T; Gokieli, R; Goscilo, L; Górski, M; Kazana, M; Nawrocki, K; Szleper, M; Wrochna, G; Zalewski, P; Almeida, N; Antunes Pedro, L; Bargassa, P; David, A; Faccioli, P; Ferreira Parracho, P G; Freitas Ferreira, M; Gallinaro, M; Guerra Jordao, M; Martins, P; Mini, G; Musella, P; Pela, J; Raposo, L; Ribeiro, P Q; Sampaio, S; Seixas, J; Silva, J; Silva, P; Soares, D; Sousa, M; Varela, J; Wöhri, H K; Altsybeev, I; Belotelov, I; Bunin, P; Ershov, Y; Filozova, I; Finger, M; Finger, M., Jr.; Golunov, A; Golutvin, I; Gorbounov, N; Kalagin, V; Kamenev, A; Karjavin, V; Konoplyanikov, V; Korenkov, V; Kozlov, G; Kurenkov, A; Lanev, A; Makankin, A; Mitsyn, V V; Moisenz, P; Nikonov, E; Oleynik, D; Palichik, V; Perelygin, V; Petrosyan, A; Semenov, R; Shmatov, S; Smirnov, V; Smolin, D; Tikhonenko, E; Vasil'ev, S; Vishnevskiy, A; Volodko, A; Zarubin, A; Zhiltsov, V; Bondar, N; Chtchipounov, L; Denisov, A; Gavrikov, Y; Gavrilov, G; Golovtsov, V; Ivanov, Y; Kim, V; Kozlov, V; Levchenko, P; Obrant, G; Orishchin, E; Petrunin, A; Shcheglov, Y; Shchetkovskiy, A; Sknar, V; Smirnov, I; Sulimov, V; Tarakanov, V; Uvarov, L; Vavilov, S; Velichko, G; Volkov, S; Vorobyev, A; Andreev, Yu; Anisimov, A; Antipov, P; Dermenev, A; Gninenko, S; Golubev, N; Kirsanov, M; Krasnikov, N; Matveev, V; Pashenkov, A; Postoev, V E; Solovey, A; Toropin, A; Troitsky, S; Baud, A; Epshteyn, V; Gavrilov, V; Ilina, N; Kaftanov, V; Kolosov, V; Kossov, 
M; Krokhotin, A; Kuleshov, S; Oulianov, A; Safronov, G; Semenov, S; Shreyber, I; Stolin, V; Vlasov, E; Zhokin, A; Boos, E; Dubinin, M; Dudko, L; Ershov, A; Gribushin, A; Klyukhin, V; Kodolova, O; Lokhtin, I; Petrushanko, S; Sarycheva, L; Savrin, V; Snigirev, A; Vardanyan, I; Dremin, I; Kirakosyan, M; Konovalova, N; Rusakov, S V; Vinogradov, A; Akimenko, S; Artamonov, A; Azhgirey, I; Bitioukov, S; Burtovoy, V; Grishin, V; Kachanov, V; Konstantinov, D; Krychkine, V; Levine, A; Lobov, I; Lukanin, V; Mel'nik, Y; Petrov, V; Ryutin, R; Slabospitsky, S; Sobol, A; Sytine, A; Tourtchanovitch, L; Troshin, S; Tyurin, N; Uzunian, A; Volkov, A; Adzic, P; Djordjevic, M; Jovanovic, D; Krpic, D; Maletic, D; Puzovic, J; Smiljkovic, N; Aguilar-Benitez, M; Alberdi, J; Alcaraz Maestre, J; Arce, P; Barcala, J M; Battilana, C; Burgos Lazaro, C; Caballero Bejar, J; Calvo, E; Cardenas Montes, M; Cepeda, M; Cerrada, M; Chamizo Llatas, M; Clemente, F; Colino, N; Daniel, M; De La Cruz, B; Delgado Peris, A; Diez Pardos, C; Fernandez Bedoya, C; Fernández Ramos, J P; Ferrando, A; Flix, J; Fouz, M C; Garcia-Abia, P; Garcia-Bonilla, A C; Gonzalez Lopez, O; Goy Lopez, S; Hernandez, J M; Josa, M I; Marin, J; Merino, G; Molina, J; Molinero, A; Navarrete, J J; Oller, J C; Puerta Pelayo, J; Romero, L; Santaolalla, J; Villanueva Munoz, C; Willmott, C; Yuste, C; Albajar, C; Blanco Otano, M; de Trocóniz, J F; Garcia Raboso, A; Lopez Berengueres, J O; Cuevas, J; Fernandez Menendez, J; Gonzalez Caballero, I; Lloret Iglesias, L; Naves Sordo, H; Vizan Garcia, J M; Cabrillo, I J; Calderon, A; Chuang, S H; Diaz Merino, I; Diez Gonzalez, C; Duarte Campderros, J; Fernandez, M; Gomez, G; Gonzalez Sanchez, J; Gonzalez Suarez, R; Jorda, C; Lobelle Pardo, P; Lopez Virto, A; Marco, J; Marco, R; Martinez Rivero, C; Martinez Ruiz del Arbol, P; Matorras, F; Rodrigo, T; Ruiz Jimeno, A; Scodellaro, L; Sobron Sanudo, M; Vila, I; Vilar Cortabitarte, R; Abbaneo, D; Albert, E; Alidra, M; Ashby, S; Auffray, E; Baechler, J; 
Baillon, P; Ball, A H; Bally, S L; Barney, D; Beaudette, F; Bellan, R; Benedetti, D; Benelli, G; Bernet, C; Bloch, P; Bolognesi, S; Bona, M; Bos, J; Bourgeois, N; Bourrel, T; Breuker, H; Bunkowski, K; Campi, D; Camporesi, T; Cano, E; Cattai, A; Chatelain, J P; Chauvey, M; Christiansen, T; Coarasa Perez, J A; Conde Garcia, A; Covarelli, R; Curé, B; De Roeck, A; Delachenal, V; Deyrail, D; Di Vincenzo, S; Dos Santos, S; Dupont, T; Edera, L M; Elliott-Peisert, A; Eppard, M; Favre, M; Frank, N; Funk, W; Gaddi, A; Gastal, M; Gateau, M; Gerwig, H; Gigi, D; Gill, K; Giordano, D; Girod, J P; Glege, F; Gomez-Reino Garrido, R; Goudard, R; Gowdy, S; Guida, R; Guiducci, L; Gutleber, J; Hansen, M; Hartl, C; Harvey, J; Hegner, B; Hoffmann, H F; Holzner, A; Honma, A; Huhtinen, M; Innocente, V; Janot, P; Le Godec, G; Lecoq, P; Leonidopoulos, C; Loos, R; Lourenço, C; Lyonnet, A; Macpherson, A; Magini, N; Maillefaud, J D; Maire, G; Mäki, T; Malgeri, L; Mannelli, M; Masetti, L; Meijers, F; Meridiani, P; Mersi, S; Meschi, E; Meynet Cordonnier, A; Moser, R; Mulders, M; Mulon, J; Noy, M; Oh, A; Olesen, G; Onnela, A; Orimoto, T; Orsini, L; Perez, E; Perinic, G; Pernot, J F; Petagna, P; Petiot, P; Petrilli, A; Pfeiffer, A; Pierini, M; Pimiä, M; Pintus, R; Pirollet, B; Postema, H; Racz, A; Ravat, S; Rew, S B; Rodrigues Antunes, J; Rolandi, G; Rovere, M; Ryjov, V; Sakulin, H; Samyn, D; Sauce, H; Schäfer, C; Schlatter, W D; Schröder, M; Schwick, C; Sciaba, A; Segoni, I; Sharma, A; Siegrist, N; Siegrist, P; Sinanis, N; Sobrier, T; Sphicas, P; Spiga, D; Spiropulu, M; Stöckli, F; Traczyk, P; Tropea, P; Troska, J; Tsirou, A; Veillet, L; Veres, G I; Voutilainen, M; Wertelaers, P; Zanetti, M; Bertl, W; Deiters, K; Erdmann, W; Gabathuler, K; Horisberger, R; Ingram, Q; Kaestli, H C; König, S; Kotlinski, D; Langenegger, U; Meier, F; Renker, D; Rohe, T; Sibille, J; Starodumov, A; Betev, B; Caminada, L; Chen, Z; Cittolin, S; Da Silva Di Calafiori, D R; Dambach, S; Dissertori, G; Dittmar, M; Eggel, C; 
Eugster, J; Faber, G; Freudenreich, K; Grab, C; Hervé, A; Hintz, W; Lecomte, P; Luckey, P D; Lustermann, W; Marchica, C; Milenovic, P; Moortgat, F; Nardulli, A; Nessi-Tedaldi, F; Pape, L; Pauss, F; Punz, T; Rizzi, A; Ronga, F J; Sala, L; Sanchez, A K; Sawley, M C; Sordini, V; Stieger, B; Tauscher, L; Thea, A; Theofilatos, K; Treille, D; Trüb, P; Weber, M; Wehrli, L; Weng, J; Zelepoukine, S; Amsler, C; Chiochia, V; De Visscher, S; Regenfus, C; Robmann, P; Rommerskirchen, T; Schmidt, A; Tsirigkas, D; Wilke, L; Chang, Y H; Chen, E A; Chen, W T; Go, A; Kuo, C M; Li, S W; Lin, W; Bartalini, P; Chang, P; Chao, Y; Chen, K F; Hou, W S; Hsiung, Y; Lei, Y J; Lin, S W; Lu, R S; Schümann, J; Shiu, J G; Tzeng, Y M; Ueno, K; Velikzhanin, Y; Wang, C C; Wang, M; Adiguzel, A; Ayhan, A; Azman Gokce, A; Bakirci, M N; Cerci, S; Dumanoglu, I; Eskut, E; Girgis, S; Gurpinar, E; Hos, I; Karaman, T; Kayis Topaksu, A; Kurt, P; Önengüt, G; Önengüt Gökbulut, G; Ozdemir, K; Ozturk, S; Polatöz, A; Sogut, K; Tali, B; Topakli, H; Uzun, D; Vergili, L N; Vergili, M; Akin, I V; Aliev, T; Bilmis, S; Deniz, M; Gamsizkan, H; Guler, A M; Öcalan, K; Serin, M; Sever, R; Surat, U E; Zeyrek, M; Deliomeroglu, M; Demir, D; Gülmez, E; Halu, A; Isildak, B; Kaya, M; Kaya, O; Ozkorucuklu, S; Sonmez, N; Levchuk, L; Lukyanenko, S; Soroka, D; Zub, S; Bostock, F; Brooke, J J; Cheng, T L; Cussans, D; Frazier, R; Goldstein, J; Grant, N; Hansen, M; Heath, G P; Heath, H F; Hill, C; Huckvale, B; Jackson, J; Mackay, C K; Metson, S; Newbold, D M; Nirunpong, K; Smith, V J; Velthuis, J; Walton, R; Bell, K W; Brew, C; Brown, R M; Camanzi, B; Cockerill, D J A; Coughlan, J A; Geddes, N I; Harder, K; Harper, S; Kennedy, B W; Murray, P; Shepherd-Themistocleous, C H; Tomalin, I R; Williams, J H; Womersley, W J; Worm, S D; Bainbridge, R; Ball, G; Ballin, J; Beuselinck, R; Buchmuller, O; Colling, D; Cripps, N; Davies, G; Della Negra, M; Foudas, C; Fulcher, J; Futyan, D; Hall, G; Hays, J; Iles, G; Karapostoli, G; MacEvoy, B C; Magnan, 
A M; Marrouche, J; Nash, J; Nikitenko, A; Papageorgiou, A; Pesaresi, M; Petridis, K; Pioppi, M; Raymond, D M; Rompotis, N; Rose, A; Ryan, M J; Seez, C; Sharp, P; Sidiropoulos, G; Stettler, M; Stoye, M; Takahashi, M; Tapper, A; Timlin, C; Tourneur, S; Vazquez Acosta, M; Virdee, T; Wakefield, S; Wardrope, D; Whyntie, T; Wingham, M; Cole, J E; Goitom, I; Hobson, P R; Khan, A; Kyberd, P; Leslie, D; Munro, C; Reid, I D; Siamitros, C; Taylor, R; Teodorescu, L; Yaselli, I; Bose, T; Carleton, M; Hazen, E; Heering, A H; Heister, A; John, J St; Lawson, P; Lazic, D; Osborne, D; Rohlf, J; Sulak, L; Wu, S; Andrea, J; Avetisyan, A; Bhattacharya, S; Chou, J P; Cutts, D; Esen, S; Kukartsev, G; Landsberg, G; Narain, M; Nguyen, D; Speer, T; Tsang, K V; Breedon, R; Calderon De La Barca Sanchez, M; Case, M; Cebra, D; Chertok, M; Conway, J; Cox, P T; Dolen, J; Erbacher, R; Friis, E; Ko, W; Kopecky, A; Lander, R; Lister, A; Liu, H; Maruyama, S; Miceli, T; Nikolic, M; Pellett, D; Robles, J; Searle, M; Smith, J; Squires, M; Stilley, J; Tripathi, M; Vasquez Sierra, R; Veelken, C; Andreev, V; Arisaka, K; Cline, D; Cousins, R; Erhan, S; Hauser, J; Ignatenko, M; Jarvis, C; Mumford, J; Plager, C; Rakness, G; Schlein, P; Tucker, J; Valuev, V; Wallny, R; Yang, X; Babb, J; Bose, M; Chandra, A; Clare, R; Ellison, J A; Gary, J W; Hanson, G; Jeng, G Y; Kao, S C; Liu, F; Liu, H; Luthra, A; Nguyen, H; Pasztor, G; Satpathy, A; Shen, B C; Stringer, R; Sturdy, J; Sytnik, V; Wilken, R; Wimpenny, S; Branson, J G; Dusinberre, E; Evans, D; Golf, F; Kelley, R; Lebourgeois, M; Letts, J; Lipeles, E; Mangano, B; Muelmenstaedt, J; Norman, M; Padhi, S; Petrucci, A; Pi, H; Pieri, M; Ranieri, R; Sani, M; Sharma, V; Simon, S; Würthwein, F; Yagil, A; Campagnari, C; D'Alfonso, M; Danielson, T; Garberson, J; Incandela, J; Justus, C; Kalavase, P; Koay, S A; Kovalskyi, D; Krutelyov, V; Lamb, J; Lowette, S; Pavlunin, V; Rebassoo, F; Ribnik, J; Richman, J; Rossin, R; Stuart, D; To, W; Vlimant, J R; Witherell, M; Apresyan, 
A; Bornheim, A; Bunn, J; Chiorboli, M; Gataullin, M; Kcira, D; Litvine, V; Ma, Y; Newman, H B; Rogan, C; Timciuc, V; Veverka, J; Wilkinson, R; Yang, Y; Zhang, L; Zhu, K; Zhu, R Y; Akgun, B; Carroll, R; Ferguson, T; Jang, D W; Jun, S Y; Paulini, M; Russ, J; Terentyev, N; Vogel, H; Vorobiev, I; Cumalat, J P; Dinardo, M E; Drell, B R; Ford, W T; Heyburn, B; Luiggi Lopez, E; Nauenberg, U; Stenson, K; Ulmer, K; Wagner, S R; Zang, S L; Agostino, L; Alexander, J; Blekman, F; Cassel, D; Chatterjee, A; Das, S; Gibbons, L K; Heltsley, B; Hopkins, W; Khukhunaishvili, A; Kreis, B; Kuznetsov, V; Patterson, J R; Puigh, D; Ryd, A; Shi, X; Stroiney, S; Sun, W; Teo, W D; Thom, J; Vaughan, J; Weng, Y; Wittich, P; Beetz, C P; Cirino, G; Sanzeni, C; Winn, D; Abdullin, S; Afaq, M A; Albrow, M; Ananthan, B; Apollinari, G; Atac, M; Badgett, W; Bagby, L; Bakken, J A; Baldin, B; Banerjee, S; Banicz, K; Bauerdick, L A T; Beretvas, A; Berryhill, J; Bhat, P C; Biery, K; Binkley, M; Bloch, I; Borcherding, F; Brett, A M; Burkett, K; Butler, J N; Chetluru, V; Cheung, H W K; Chlebana, F; Churin, I; Cihangir, S; Crawford, M; Dagenhart, W; Demarteau, M; Derylo, G; Dykstra, D; Eartly, D P; Elias, J E; Elvira, V D; Evans, D; Feng, L; Fischler, M; Fisk, I; Foulkes, S; Freeman, J; Gartung, P; Gottschalk, E; Grassi, T; Green, D; Guo, Y; Gutsche, O; Hahn, A; Hanlon, J; Harris, R M; Holzman, B; Howell, J; Hufnagel, D; James, E; Jensen, H; Johnson, M; Jones, C D; Joshi, U; Juska, E; Kaiser, J; Klima, B; Kossiakov, S; Kousouris, K; Kwan, S; Lei, C M; Limon, P; Lopez Perez, J A; Los, S; Lueking, L; Lukhanin, G; Lusin, S; Lykken, J; Maeshima, K; Marraffino, J M; Mason, D; McBride, P; Miao, T; Mishra, K; Moccia, S; Mommsen, R; Mrenna, S; Muhammad, A S; Newman-Holmes, C; Noeding, C; O'Dell, V; Prokofyev, O; Rivera, R; Rivetta, C H; Ronzhin, A; Rossman, P; Ryu, S; Sekhri, V; Sexton-Kennedy, E; Sfiligoi, I; Sharma, S; Shaw, T M; Shpakov, D; Skup, E; Smith, R P; Soha, A; Spalding, W J; Spiegel, L; Suzuki, I; Tan, 
P; Tanenbaum, W; Tkaczyk, S; Trentadue, R; Uplegger, L; Vaandering, E W; Vidal, R; Whitmore, J; Wicklund, E; Wu, W; Yarba, J; Yumiceva, F; Yun, J C; Acosta, D; Avery, P; Barashko, V; Bourilkov, D; Chen, M; Di Giovanni, G P; Dobur, D; Drozdetskiy, A; Field, R D; Fu, Y; Furic, I K; Gartner, J; Holmes, D; Kim, B; Klimenko, S; Konigsberg, J; Korytov, A; Kotov, K; Kropivnitskaya, A; Kypreos, T; Madorsky, A; Matchev, K; Mitselmakher, G; Pakhotin, Y; Piedra Gomez, J; Prescott, C; Rapsevicius, V; Remington, R; Schmitt, M; Scurlock, B; Wang, D; Yelton, J; Ceron, C; Gaultney, V; Kramer, L; Lebolo, L M; Linn, S; Markowitz, P; Martinez, G; Rodriguez, J L; Adams, T; Askew, A; Baer, H; Bertoldi, M; Chen, J; Dharmaratna, W G D; Gleyzer, S V; Haas, J; Hagopian, S; Hagopian, V; Jenkins, M; Johnson, K F; Prettner, E; Prosper, H; Sekmen, S; Baarmand, M M; Guragain, S; Hohlmann, M; Kalakhety, H; Mermerkaya, H; Ralich, R; Vodopiyanov, I; Abelev, B; Adams, M R; Anghel, I M; Apanasevich, L; Bazterra, V E; Betts, R R; Callner, J; Castro, M A; Cavanaugh, R; Dragoiu, C; Garcia-Solis, E J; Gerber, C E; Hofman, D J; Khalatian, S; Mironov, C; Shabalina, E; Smoron, A; Varelas, N; Akgun, U; Albayrak, E A; Ayan, A S; Bilki, B; Briggs, R; Cankocak, K; Chung, K; Clarida, W; Debbins, P; Duru, F; Ingram, F D; Lae, C K; McCliment, E; Merlo, J P; Mestvirishvili, A; Miller, M J; Moeller, A; Nachtman, J; Newsom, C R; Norbeck, E; Olson, J; Onel, Y; Ozok, F; Parsons, J; Schmidt, I; Sen, S; Wetzel, J; Yetkin, T; Yi, K; Barnett, B A; Blumenfeld, B; Bonato, A; Chien, C Y; Fehling, D; Giurgiu, G; Gritsan, A V; Guo, Z J; Maksimovic, P; Rappoccio, S; Swartz, M; Tran, N V; Zhang, Y; Baringer, P; Bean, A; Grachov, O; Murray, M; Radicci, V; Sanders, S; Wood, J S; Zhukova, V; Bandurin, D; Bolton, T; Kaadze, K; Liu, A; Maravin, Y; Onoprienko, D; Svintradze, I; Wan, Z; Gronberg, J; Hollar, J; Lange, D; Wright, D; Baden, D; Bard, R; Boutemeur, M; Eno, S C; Ferencek, D; Hadley, N J; Kellogg, R G; Kirn, M; Kunori, S; 
Rossato, K; Rumerio, P; Santanastasio, F; Skuja, A; Temple, J; Tonjes, M B; Tonwar, S C; Toole, T; Twedt, E; Alver, B; Bauer, G; Bendavid, J; Busza, W; Butz, E; Cali, I A; Chan, M; D'Enterria, D; Everaerts, P; Gomez Ceballos, G; Hahn, K A; Harris, P; Jaditz, S; Kim, Y; Klute, M; Lee, Y J; Li, W; Loizides, C; Ma, T; Miller, M; Nahn, S; Paus, C; Roland, C; Roland, G; Rudolph, M; Stephans, G; Sumorok, K; Sung, K; Vaurynovich, S; Wenger, E A; Wyslouch, B; Xie, S; Yilmaz, Y; Yoon, A S; Bailleux, D; Cooper, S I; Cushman, P; Dahmes, B; De Benedetti, A; Dolgopolov, A; Dudero, P R; Egeland, R; Franzoni, G; Haupt, J; Inyakin, A; Klapoetke, K; Kubota, Y; Mans, J; Mirman, N; Petyt, D; Rekovic, V; Rusack, R; Schroeder, M; Singovsky, A; Zhang, J; Cremaldi, L M; Godang, R; Kroeger, R; Perera, L; Rahmat, R; Sanders, D A; Sonnek, P; Summers, D; Bloom, K; Bockelman, B; Bose, S; Butt, J; Claes, D R; Dominguez, A; Eads, M; Keller, J; Kelly, T; Kravchenko, I; Lazo-Flores, J; Lundstedt, C; Malbouisson, H; Malik, S; Snow, G R; Baur, U; Iashvili, I; Kharchilava, A; Kumar, A; Smith, K; Strang, M; Alverson, G; Barberis, E; Boeriu, O; Eulisse, G; Govi, G; McCauley, T; Musienko, Y; Muzaffar, S; Osborne, I; Paul, T; Reucroft, S; Swain, J; Taylor, L; Tuura, L; Anastassov, A; Gobbi, B; Kubik, A; Ofierzynski, R A; Pozdnyakov, A; Schmitt, M; Stoynev, S; Velasco, M; Won, S; Antonelli, L; Berry, D; Hildreth, M; Jessop, C; Karmgard, D J; Kolberg, T; Lannon, K; Lynch, S; Marinelli, N; Morse, D M; Ruchti, R; Slaunwhite, J; Warchol, J; Wayne, M; Bylsma, B; Durkin, L S; Gilmore, J; Gu, J; Killewald, P; Ling, T Y; Williams, G; Adam, N; Berry, E; Elmer, P; Garmash, A; Gerbaudo, D; Halyo, V; Hunt, A; Jones, J; Laird, E; Marlow, D; Medvedeva, T; Mooney, M; Olsen, J; Piroué, P; Stickland, D; Tully, C; Werner, J S; Wildish, T; Xie, Z; Zuranski, A; Acosta, J G; Bonnett Del Alamo, M; Huang, X T; Lopez, A; Mendez, H; Oliveros, S; Ramirez Vargas, J E; Santacruz, N; Zatzerklyany, A; Alagoz, E; Antillon, E; Barnes, 
V E; Bolla, G; Bortoletto, D; Everett, A; Garfinkel, A F; Gecse, Z; Gutay, L; Ippolito, N; Jones, M; Koybasi, O; Laasanen, A T; Leonardo, N; Liu, C; Maroussov, V; Merkel, P; Miller, D H; Neumeister, N; Sedov, A; Shipsey, I; Yoo, H D; Zheng, Y; Jindal, P; Parashar, N; Cuplov, V; Ecklund, K M; Geurts, F J M; Liu, J H; Maronde, D; Matveev, M; Padley, B P; Redjimi, R; Roberts, J; Sabbatini, L; Tumanov, A; Betchart, B; Bodek, A; Budd, H; Chung, Y S; de Barbaro, P; Demina, R; Flacher, H; Gotra, Y; Harel, A; Korjenevski, S; Miner, D C; Orbaker, D; Petrillo, G; Vishnevskiy, D; Zielinski, M; Bhatti, A; Demortier, L; Goulianos, K; Hatakeyama, K; Lungu, G; Mesropian, C; Yan, M; Atramentov, O; Bartz, E; Gershtein, Y; Halkiadakis, E; Hits, D; Lath, A; Rose, K; Schnetzer, S; Somalwar, S; Stone, R; Thomas, S; Watts, T L; Cerizza, G; Hollingsworth, M; Spanier, S; Yang, Z C; York, A; Asaadi, J; Aurisano, A; Eusebi, R; Golyash, A; Gurrola, A; Kamon, T; Nguyen, C N; Pivarski, J; Safonov, A; Sengupta, S; Toback, D; Weinberger, M; Akchurin, N; Berntzon, L; Gumus, K; Jeong, C; Kim, H; Lee, S W; Popescu, S; Roh, Y; Sill, A; Volobouev, I; Washington, E; Wigmans, R; Yazgan, E; Engh, D; Florez, C; Johns, W; Pathak, S; Sheldon, P; Andelin, D; Arenton, M W; Balazs, M; Boutle, S; Buehler, M; Conetti, S; Cox, B; Hirosky, R; Ledovskoy, A; Neu, C; Phillips II, D; Ronquest, M; Yohay, R; Gollapinni, S; Gunthoti, K; Harr, R; Karchin, P E; Mattson, M; Sakharov, A; Anderson, M; Bachtis, M; Bellinger, J N; Carlsmith, D; Crotty, I; Dasu, S; Dutta, S; Efron, J; Feyzi, F; Flood, K; Gray, L; Grogg, K S; Grothe, M; Hall-Wilton, R; Jaworski, M; Klabbers, P; Klukas, J; Lanaro, A; Lazaridis, C; Leonard, J; Loveless, R; Magrans de Abril, M; Mohapatra, A; Ott, G; Polese, G; Reeder, D; Savin, A; Smith, W H; Sourkov, A; Swanson, J; Weinberg, M; Wenman, D; Wensveen, M; White, A

    2010-01-01

    The CMS High-Level Trigger (HLT) is responsible for ensuring that data samples with potentially interesting events are recorded with high efficiency and good quality. This paper gives an overview of the HLT and focuses on its commissioning using cosmic rays. The selection of triggers that were deployed is presented and the online grouping of triggered events into streams and primary datasets is discussed. Tools for online and offline data quality monitoring for the HLT are described, and the operational performance of the muon HLT algorithms is reviewed. The average time taken for the HLT selection and its dependence on detector and operating conditions are presented. The HLT performed reliably and helped provide a large dataset. This dataset has proven to be invaluable for understanding the performance of the trigger and the CMS experiment as a whole.

  9. Commissioning of the CMS High-Level Trigger with cosmic rays

    International Nuclear Information System (INIS)

    2010-01-01

    The CMS High-Level Trigger (HLT) is responsible for ensuring that data samples with potentially interesting events are recorded with high efficiency and good quality. This paper gives an overview of the HLT and focuses on its commissioning using cosmic rays. The selection of triggers that were deployed is presented and the online grouping of triggered events into streams and primary datasets is discussed. Tools for online and offline data quality monitoring for the HLT are described, and the operational performance of the muon HLT algorithms is reviewed. The average time taken for the HLT selection and its dependence on detector and operating conditions are presented. The HLT performed reliably and helped provide a large dataset. This dataset has proven to be invaluable for understanding the performance of the trigger and the CMS experiment as a whole.

  10. High reliability low jitter 80 kV pulse generator

    Directory of Open Access Journals (Sweden)

    M. E. Savage

    2009-08-01

    Full Text Available Switching can be considered to be the essence of pulsed power. Time-accurate switch/trigger systems with low inductance are useful in many applications. This article describes a unique switch geometry coupled with a low-inductance capacitive energy store. The system provides a fast-rising high voltage pulse into a low impedance load. It can be challenging to generate high voltage (more than 50 kilovolts) into impedances less than 10 Ω from a low voltage control signal with a fast rise time and high temporal accuracy. The required power amplification is large, and is usually accomplished with multiple stages. The multiple stages can adversely affect the temporal accuracy and the reliability of the system. In the present application, a highly reliable and low jitter trigger generator was required for the Z pulsed-power facility [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, and J. R. Woodworth, 2007 IEEE Pulsed Power Conference, Albuquerque, NM (IEEE, Piscataway, NJ, 2007), p. 979]. The large investment in each Z experiment demands low prefire probability and low jitter simultaneously. The system described here is based on a 100 kV DC-charged high-pressure spark gap, triggered with an ultraviolet laser. The system uses a single optical path for simultaneously triggering two parallel switches, allowing lower inductance and electrode erosion with a simple optical system. Performance of the system includes 6 ns output rise time into 5.6 Ω, 550 ps one-sigma jitter measured from the 5 V trigger to the high voltage output, and misfire probability less than 10^{-4}. The design of the system and some key measurements will be shown in the paper. We will discuss the

  11. Triggering at high luminosity: fake triggers from pile-up

    International Nuclear Information System (INIS)

    Johnson, R.

    1983-01-01

    Triggers based on a cut in transverse momentum (p_t) have proved to be useful in high energy physics both because they indicate that a hard constituent scattering has occurred and because they can be made quickly enough to gate electronics. These triggers will continue to be useful at high luminosities if overlapping events do not cause an excessive number of fake triggers. In this paper, I determine whether this is indeed a problem at high-luminosity machines.
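The pile-up argument above can be made quantitative with a small Monte Carlo sketch: assume the number of overlapping minimum-bias events per bunch crossing is Poisson-distributed and that each deposits an exponentially distributed E_T. Both distributions and all numbers here are illustrative assumptions, not values from the paper:

```python
import math
import random

def poisson(rng, mu):
    """Sample a Poisson variate by Knuth's method (fine for small mu)."""
    limit = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def fake_trigger_prob(mu, threshold, mean_et=2.0, trials=100_000, seed=1):
    """Monte Carlo estimate of the per-crossing fake-trigger probability.

    mu        -- mean number of overlapping minimum-bias events (Poisson)
    threshold -- E_T trigger threshold (GeV)
    mean_et   -- assumed mean E_T per soft event (GeV); illustrative only
    """
    rng = random.Random(seed)
    fakes = 0
    for _ in range(trials):
        n = poisson(rng, mu)
        et_sum = sum(rng.expovariate(1.0 / mean_et) for _ in range(n))
        if et_sum > threshold:
            fakes += 1
    return fakes / trials
```

With these toy numbers the fake probability is negligible at low pile-up and grows quickly with mu, which is the qualitative point of the record above.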

  12. TRIGGER

    CERN Multimedia

    W. Smith

    2012-01-01

      Level-1 Trigger The Level-1 Trigger group is ready to deploy improvements to the L1 Trigger algorithms for 2012. These include new high-PT patterns for the RPC endcap, an improved CSC PT assignment, a new PT-matching algorithm for the Global Muon Trigger, and new calibrations for ECAL, HCAL, and the Regional Calorimeter Trigger. These should improve the efficiency, rate, and stability of the L1 Trigger. The L1 Trigger group also is migrating the online systems to SLC5. To make the data transfer from the Global Calorimeter Trigger to the Global Trigger more reliable and also to allow checking the data integrity online, a new optical link system has been developed by the GCT and GT groups and successfully tested at the CMS electronics integration facility in building 904. This new system is now undergoing further tests at Point 5 before being deployed for data-taking this year. New L1 trigger menus have recently been studied and proposed by Emmanuelle Perez and the L1 Detector Performance Group...

  13. Inter- and Intraexaminer Reliability in Identifying and Classifying Myofascial Trigger Points in Shoulder Muscles.

    Science.gov (United States)

    Nascimento, José Diego Sales do; Alburquerque-Sendín, Francisco; Vigolvino, Lorena Passos; Oliveira, Wandemberg Fortunato de; Sousa, Catarina de Oliveira

    2018-01-01

    To determine inter- and intraexaminer reliability of examiners without clinical experience in identifying and classifying myofascial trigger points (MTPs) in the shoulder muscles of subjects asymptomatic and symptomatic for unilateral subacromial impact syndrome (SIS). Within-day inter- and intraexaminer reliability study. Physical therapy department of a university. Fifty-two subjects participated in the study, 26 symptomatic and 26 asymptomatic for unilateral SIS. Two examiners, without experience for assessing MTPs, independent and blind to the clinical conditions of the subjects, assessed bilaterally the presence of MTPs (present or absent) in 6 shoulder muscles and classified them (latent or active) on the affected side of the symptomatic group. Each examiner performed the same assessment twice in the same day. Reliability was calculated through percentage agreement, prevalence- and bias-adjusted kappa (PABAK) statistics, and weighted kappa. Intraexaminer reliability in identifying MTPs for the symptomatic and asymptomatic groups was moderate to perfect (PABAK, .46-1 and .60-1, respectively). Interexaminer reliability was between moderate and almost perfect in the 2 groups (PABAK, .46-.92), except for the muscles of the symptomatic group, which were below these values. With respect to MTP classification, intraexaminer reliability was moderate to high for most muscles, but interexaminer reliability was moderate for only 1 muscle (weighted κ=.45), and between weak and reasonable for the rest (weighted κ=.06-.31). Intraexaminer reliability is acceptable in clinical practice to identify and classify MTPs. However, interexaminer reliability proved to be reliable only to identify MTPs, with the symptomatic side exhibiting lower values of reliability. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
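The PABAK statistic quoted above is simple to compute: for two raters and binary ratings it is 2·p_o − 1, where p_o is the observed proportion of agreement. A minimal sketch, with made-up ratings:

```python
def pabak(ratings_a, ratings_b):
    """Prevalence- and bias-adjusted kappa for two raters, binary ratings.

    PABAK = 2 * p_o - 1, where p_o is the observed proportion of agreement.
    """
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("need two equal-length, non-empty rating lists")
    agree = sum(a == b for a, b in zip(ratings_a, ratings_b))
    p_o = agree / len(ratings_a)
    return 2.0 * p_o - 1.0

# Hypothetical example: two examiners scoring presence (1) / absence (0)
# of a trigger point in 10 muscles; 8/10 agreements -> p_o = 0.8 -> PABAK = 0.6.
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 0]
```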

  14. Efficient, reliable and fast high-level triggering using a bonsai boosted decision tree

    International Nuclear Information System (INIS)

    Gligorov, V V; Williams, M

    2013-01-01

    High-level triggering is a vital component of many modern particle physics experiments. This paper describes a modification to the standard boosted decision tree (BDT) classifier, the so-called bonsai BDT, that has the following important properties: it is more efficient than traditional cut-based approaches; it is robust against detector instabilities, and it is very fast. Thus, it is fit-for-purpose for the online running conditions faced by any large-scale data acquisition system.
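The "bonsai" modification is essentially a discretization of the BDT inputs, so that the classifier response can be precomputed into a lookup table and the online decision becomes a single table access. A stdlib-only sketch of that idea, with a toy stand-in for the trained BDT response (the feature names, bin edges and threshold are invented for illustration):

```python
import bisect
from itertools import product

def discretize(value, bin_edges):
    """Map a continuous feature value onto the index of its bin."""
    return bisect.bisect_right(bin_edges, value)

def toy_bdt_score(bins):
    """Stand-in for the trained BDT response evaluated on binned inputs."""
    return bins[0] + bins[1]

def build_lookup(score_fn, all_bin_edges, threshold):
    """Precompute the thresholded decision for every bin combination,
    so the online evaluation is one dictionary access."""
    n_bins = [len(edges) + 1 for edges in all_bin_edges]
    return {bins: score_fn(bins) > threshold
            for bins in product(*(range(n) for n in n_bins))}

# Invented features and cuts: pT (GeV) and impact-parameter significance.
edges_pt = [1.0, 2.0, 5.0]    # 4 bins
edges_ips = [2.0, 4.0]        # 3 bins
table = build_lookup(toy_bdt_score, [edges_pt, edges_ips], threshold=3)

def trigger_decision(pt, ips):
    """Online decision: discretize each input, then one table lookup."""
    return table[(discretize(pt, edges_pt), discretize(ips, edges_ips))]
```

Because the table is finite and fixed, the online cost is constant regardless of how large the underlying BDT is, which is what makes the approach fast enough for trigger use.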

  15. Reliability analysis of multi-trigger binary systems subject to competing failures

    International Nuclear Information System (INIS)

    Wang, Chaonan; Xing, Liudong; Levitin, Gregory

    2013-01-01

    This paper suggests two combinatorial algorithms for the reliability analysis of multi-trigger binary systems subject to competing failure propagation and failure isolation effects. Propagated failure with global effect (PFGE) is referred to as a failure that not only causes outage to the component from which the failure originates, but also propagates through all other system components causing the entire system failure. However, the propagation effect from the PFGE can be isolated in systems with functional dependence (FDEP) behavior. This paper studies two distinct consequences of PFGE resulting from a competition in the time domain between the failure isolation and failure propagation effects. As compared to existing works on competing failures that are limited to systems with a single FDEP group, this paper considers more complicated cases where the systems have multiple dependent FDEP groups. Analysis of such systems is more challenging because both the occurrence order between the trigger failure event and PFGE from the dependent components and the occurrence order among the multiple trigger failure events have to be considered. Two combinatorial and analytical algorithms are proposed. Both of them have no limitation on the type of time-to-failure distributions for the system components. Their correctness is verified using a Markov-based method. An example of memory systems is analyzed to demonstrate and compare the applications and advantages of the two proposed algorithms. - Highlights: ► Reliability of binary systems with multiple dependent functional dependence groups is analyzed. ► Competing failure propagation and failure isolation effect is considered. ► The proposed algorithms are combinatorial and applicable to any arbitrary type of time-to-failure distributions for system components.
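The time-domain competition described above can be illustrated with a Monte Carlo sketch: a PFGE brings down the whole system only if it occurs within the mission time and before the trigger failure that would have isolated it. Exponential times-to-failure are assumed here purely for illustration; the paper's own algorithms are distribution-free:

```python
import random

def prob_propagation_wins(rate_pfge, rate_trigger, mission_time,
                          trials=200_000, seed=7):
    """Estimate the probability that a propagated failure with global
    effect (PFGE) occurs both within the mission time and before the
    trigger failure that would have isolated it, i.e. that propagation
    wins the race and the whole system goes down."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        t_pfge = rng.expovariate(rate_pfge)        # time of the PFGE
        t_trigger = rng.expovariate(rate_trigger)  # time of the trigger failure
        if t_pfge < mission_time and t_pfge < t_trigger:
            wins += 1
    return wins / trials
```

For competing exponentials and a long mission time the analytic answer is rate_pfge / (rate_pfge + rate_trigger), which gives a convenient cross-check of the estimate.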

  16. High-voltage high-current triggering vacuum switch

    International Nuclear Information System (INIS)

    Alferov, D.F.; Bunin, R.A.; Evsin, D.V.; Sidorov, V.A.

    2012-01-01

    Experimental investigations of the switching and breaking capacities of the new high-current triggered vacuum switch (TVS) are carried out at various parameters of the discharge current. It has been shown that the TVS can repeatedly switch a current from units up to tens of kiloamperes, with a duration up to ten milliseconds [ru]

  17. High energy physics experiment triggers and the trustworthiness of software

    International Nuclear Information System (INIS)

    Nash, T.

    1991-10-01

    For all the time and frustration that high energy physicists expend interacting with computers, it is surprising that more attention is not paid to the critical role computers play in the science. With large, expensive colliding beam experiments now dependent on complex programs working at startup, questions of reliability -- the trustworthiness of software -- need to be addressed. This issue is most acute in triggers, used to select data to record -- and data to discard -- in the real time environment of an experiment. High level triggers are built on codes that now exceed 2 million source lines -- and for the first time experiments are truly dependent on them. This dependency will increase at the accelerators planned for the new millennium (SSC and LHC), where cost and other pressures will reduce tolerance for first run problems, and the high luminosities will make this on-line data selection essential. A sense of this incipient crisis motivated the unusual juxtaposition of topics in these lectures. 37 refs., 1 fig

  18. TRIGGER

    CERN Multimedia

    R. Arcidiacono

    2013-01-01

      In 2013 the Trigger Studies Group (TSG) has been restructured into three sub-groups: STEAM, for the development of new HLT menus and monitoring their performance; STORM, for the development of HLT tools, code and actual configurations; and FOG, responsible for the online operations of the High Level Trigger. The Strategy for Trigger Evolution And Monitoring (STEAM) group is responsible for Trigger Menu development, path timing, trigger performance studies coordination, HLT offline DQM as well as HLT release, menu and conditions validation – in collaboration and with the technical support of the PdmV group. Since the end of proton-proton data taking, the group has started preparing for 2015 data taking, with collisions at 13 TeV and 25 ns bunch spacing. The reliability of the extrapolation to higher energy is being evaluated by comparing the trigger rates on 7 and 8 TeV Monte Carlo samples with the data taken in the past two years. The effect of 25 ns bunch spacing is being studied on the d...

  19. Triggers for a high sensitivity charm experiment

    International Nuclear Information System (INIS)

    Christian, D.C.

    1994-07-01

    Any future charm experiment clearly should implement an E_T trigger and a μ trigger. In order to reach the 10^8 reconstructed charm level for hadronic final states, a high quality vertex trigger will almost certainly also be necessary. The best hope for the development of an offline quality vertex trigger lies in further development of the ideas of data-driven processing pioneered by the Nevis/U. Mass. group

  20. Validity and Reliability of Clinical Examination in the Diagnosis of Myofascial Pain Syndrome and Myofascial Trigger Points in Upper Quarter Muscles.

    Science.gov (United States)

    Mayoral Del Moral, Orlando; Torres Lacomba, María; Russell, I Jon; Sánchez Méndez, Óscar; Sánchez Sánchez, Beatriz

    2017-12-15

    To determine whether two independent examiners can agree on a diagnosis of myofascial pain syndrome (MPS). To evaluate interexaminer reliability in identifying myofascial trigger points in upper quarter muscles. To evaluate the reliability of clinical diagnostic criteria for the diagnosis of MPS. To evaluate the validity of clinical diagnostic criteria for the diagnosis of MPS. Validity and reliability study. Provincial Hospital. Toledo, Spain. Twenty myofascial pain syndrome patients and 20 healthy, normal control subjects, enrolled by a trained and experienced examiner. Ten bilateral muscles from the upper quarter were evaluated by two experienced examiners. The second examiner was blinded to the diagnosis group. The MPS diagnosis required at least one muscle to have an active myofascial trigger point. Three to four days separated the two examinations. The primary outcome measure was the frequency with which the two examiners agreed on the classification of the subjects as patients or as healthy controls. The kappa statistic (K) was used to determine the level of agreement between both examinations, interpreted as very good (0.81-1.00), good (0.61-0.80), moderate (0.41-0.60), fair (0.21-0.40), or poor (≤0.20). Interexaminer reliability for identifying subjects with MPS was very good (K = 1.0). Interexaminer reliability for identifying muscles leading to a diagnosis of MPS was also very good (K = 0.81). Sensitivity and specificity showed high values for most examination tests in all muscles, which confirms the validity of clinical diagnostic criteria in the diagnosis of MPS. Interrater reliability between two expert examiners identifying subjects with MPS involving upper quarter muscles exhibited substantial agreement. These results suggest that clinical criteria can be valid and reliable in the diagnosis of this condition. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
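The agreement bands used in this study (very good, good, moderate, fair, poor) can be applied directly to a computed kappa. A minimal sketch of unweighted Cohen's kappa with those bands; the ratings in the test are invented:

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if p_e == 1.0:  # both raters used a single identical category
        return 1.0
    return (p_o - p_e) / (1 - p_e)

def agreement_band(kappa):
    """Interpretation bands quoted in the study above."""
    if kappa > 0.80:
        return "very good"
    if kappa > 0.60:
        return "good"
    if kappa > 0.40:
        return "moderate"
    if kappa > 0.20:
        return "fair"
    return "poor"
```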

  1. The ATLAS High-Level Calorimeter Trigger in Run-2

    CERN Document Server

    Wiglesworth, Craig; The ATLAS collaboration

    2018-01-01

    The ATLAS Experiment uses a two-level triggering system to identify and record collision events containing a wide variety of physics signatures. It reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of 1 kHz, whilst maintaining high efficiency for interesting collision events. It is composed of an initial hardware-based level-1 trigger followed by a software-based high-level trigger. A central component of the high-level trigger is the calorimeter trigger. This is responsible for processing data from the electromagnetic and hadronic calorimeters in order to identify electrons, photons, taus, jets and missing transverse energy. In this talk I will present the performance of the high-level calorimeter trigger in Run-2, noting the improvements that have been made in response to the challenges of operating at high luminosity.

  2. The ATLAS High Level Trigger Steering Framework and the Trigger Configuration System.

    CERN Document Server

    Pérez Cavalcanti, Tiago; The ATLAS collaboration

    2011-01-01

    The ATLAS detector system installed in the Large Hadron Collider (LHC) at CERN is designed to study proton-proton and nucleus-nucleus collisions with a maximum center of mass energy of 14 TeV at a bunch collision rate of 40 MHz. In March 2010 the four LHC experiments saw the first proton-proton collisions at 7 TeV. Still within the year a collision rate of nearly 10 MHz is expected. At ATLAS, events of potential interest for ATLAS physics are selected by a three-level trigger system, with a final recording rate of about 200 Hz. The first level (L1) is implemented in custom hardware; the two levels of the high level trigger (HLT) are software triggers, running on large farms of standard computers and network devices.

    Within the ATLAS physics program more than 500 trigger signatures are defined. The HLT tests each signature on each L1-accepted event; the test outcome is recor...
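The signature-testing step described above can be sketched as a toy menu of selection predicates applied to each L1-accepted event. The chain names and cut values below are invented for illustration, not actual ATLAS signatures:

```python
def hlt_decision(event, menu):
    """Toy HLT steering step: test every signature chain on an
    L1-accepted event and record which chains passed."""
    passed = [name for name, selection in menu.items() if selection(event)]
    return bool(passed), passed

# Hypothetical trigger menu: each entry maps a chain name to a selection.
menu = {
    "e25_tight": lambda ev: ev.get("electron_pt", 0) > 25,
    "mu20":      lambda ev: ev.get("muon_pt", 0) > 20,
    "2j40":      lambda ev: len([j for j in ev.get("jet_pts", []) if j > 40]) >= 2,
}
accepted, chains = hlt_decision({"muon_pt": 32, "jet_pts": [50, 45]}, menu)
```

Recording the per-chain outcomes, not just the overall accept, is what later allows events to be grouped into streams and primary datasets.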

  3. ALICE High Level Trigger

    CERN Multimedia

    Alt, T

    2013-01-01

    The ALICE High Level Trigger (HLT) is a computing farm designed and built for the real-time, online processing of the raw data produced by the ALICE detectors. Events are fully reconstructed from the raw data, analyzed and compressed. The analysis summary, together with the compressed data and a trigger decision, is sent to the DAQ. In addition, the reconstruction of the events allows for on-line monitoring of physical observables, and this information is provided to the Data Quality Monitor (DQM). The HLT can process event rates of up to 2 kHz for proton-proton and 200 Hz for Pb-Pb central collisions.

  4. Dedicated Trigger for Highly Ionising Particles at ATLAS

    CERN Document Server

    Katre, Akshay; The ATLAS collaboration

    2015-01-01

    In 2012, a novel strategy was designed to detect signatures of Highly Ionising Particles (HIPs), such as magnetic monopoles, dyons or Q-balls, with the ATLAS trigger system. With proton-proton collisions at a centre of mass energy of 8 TeV, the trigger was designed to have unique properties as a tracker for HIPs. It uses only the Transition Radiation Tracker (TRT) system, applying an algorithm distinct from the standard tracking ones. The unique high-threshold readout capability of the TRT is used at the locations in the detector where HIPs are looked for. In particular, the number and the fraction of TRT high-threshold hits are used to distinguish HIPs from background processes. The trigger requires significantly lower energy depositions in the electromagnetic calorimeters as a seed, unlike previously used trigger algorithms for such searches. Thus the new trigger is capable of probing a large range of HIP masses and charges. We will give a description of the algorithms for this newly developed trigger for HIP searches...
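The selection described above (a cut on the number and fraction of TRT high-threshold hits) can be sketched as follows; the hit encoding and the cut values are illustrative assumptions, not the actual ATLAS trigger settings:

```python
def is_hip_candidate(hits, min_ht_hits=20, min_ht_fraction=0.4):
    """Toy version of the HIP selection: count TRT hits that passed the
    high threshold (HT) and cut on both their number and their fraction.

    hits -- iterable of booleans, True if the straw hit passed the HT.
    """
    hits = list(hits)
    if not hits:
        return False
    n_ht = sum(hits)
    return n_ht >= min_ht_hits and n_ht / len(hits) >= min_ht_fraction
```

A highly ionising particle leaves an unusually large HT fraction along its path, while minimum-ionising backgrounds do not, which is why the pair of cuts separates the two.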

  5. A Scalable and Reliable Message Transport Service for the ATLAS Trigger and Data Acquisition System

    CERN Document Server

    Kazarov, A; The ATLAS collaboration; Kolos, S; Lehmann Miotto, G; Soloviev, I

    2014-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) is a large distributed computing system composed of several thousand interconnected computers and tens of thousands of applications. During a run, TDAQ applications produce a large number of control and information messages at variable rates, addressed to TDAQ operators or to other applications. Reliable, fast and accurate delivery of the messages is important for the functioning of the whole TDAQ system. The Message Transport Service (MTS) provides facilities for the reliable transport, filtering and routing of the messages, based on a publish-subscribe-notify communication pattern with content-based message filtering. During the ongoing LHC shutdown, the MTS was re-implemented, taking into account important requirements like reliability, scalability and performance, handling of the slow-subscriber case, and simplicity of the design and the implementation. MTS uses CORBA middleware, a common layer for the TDAQ infrastructure, and provides sending/subscribing APIs i...
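The publish-subscribe-notify pattern with content-based filtering that MTS implements can be illustrated with an in-process toy. The real MTS is a distributed CORBA service; the class and field names here are invented:

```python
class MessageTransport:
    """Minimal in-process sketch of a publish-subscribe-notify service
    with content-based filtering, loosely modelled on the MTS above."""

    def __init__(self):
        self._subscribers = []  # (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        """Register a callback for messages whose content matches."""
        self._subscribers.append((predicate, callback))

    def publish(self, message):
        """Deliver the message to every matching subscriber; return the
        number of deliveries made."""
        delivered = 0
        for predicate, callback in self._subscribers:
            if predicate(message):
                callback(message)
                delivered += 1
        return delivered

# Usage: an operator console subscribing only to ERROR messages.
mts = MessageTransport()
errors = []
mts.subscribe(lambda m: m["severity"] == "ERROR", errors.append)
mts.publish({"severity": "INFO", "text": "run started"})
mts.publish({"severity": "ERROR", "text": "ROS timeout"})
```

Filtering on message content at the service, rather than at each subscriber, is what keeps slow or narrowly interested subscribers from being flooded with irrelevant traffic.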

  6. The ATLAS trigger: high-level trigger commissioning and operation during early data taking

    International Nuclear Information System (INIS)

    Goncalo, R

    2008-01-01

    The ATLAS experiment is one of the two general-purpose experiments due to start operation soon at the Large Hadron Collider (LHC). The LHC will collide protons at a centre of mass energy of 14 TeV, with a bunch-crossing rate of 40 MHz. The ATLAS three-level trigger will reduce this input rate to match the foreseen offline storage capability of 100-200 Hz. This paper gives an overview of the ATLAS High Level Trigger focusing on the system design and its innovative features. We then present the ATLAS trigger strategy for the initial phase of LHC exploitation. Finally, we report on the valuable experience acquired through in-situ commissioning of the system where simulated events were used to exercise the trigger chain. In particular we show critical quantities such as event processing times, measured in a large-scale HLT farm using a complex trigger menu

  7. TRIGGER

    CERN Multimedia

    W. Smith

    Level-1 Trigger Hardware and Software The trigger system has been constantly in use in cosmic and commissioning data taking periods. During CRAFT running it delivered 300 million muon and calorimeter triggers to CMS. It has performed stably and reliably. During the abort gaps it has also provided laser and other calibration triggers. Timing issues, namely synchronization and latency issues, have been solved. About half of the Trigger Concentrator Cards for the ECAL Endcap (TCC-EE) are installed, and the firmware is being worked on. The production of the other half has started. The HCAL Trigger and Readout (HTR) card firmware has been updated, and new features such as fast parallel zero-suppression have been included. Repairs of drift tube (DT) trigger mini-crates, optical links and receivers of sector collectors are under way and have been completed on YB0. New firmware for the optical receivers of the theta links to the drift tube track finder is being installed. In parallel, tests with new eta track finde...

  8. The CMS High-Level Trigger

    International Nuclear Information System (INIS)

    Covarelli, R.

    2009-01-01

    At the startup of the LHC, the CMS data acquisition is expected to be able to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the 'High-Level Trigger' (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report HLT performances are shown for selections based on muons, electrons, photons, jets, missing transverse energy, τ leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported as well as relaxation criteria foreseen for a LHC startup instantaneous luminosity.

  9. The CMS High-Level Trigger

    CERN Document Server

    Covarelli, Roberto

    2009-01-01

    At the startup of the LHC, the CMS data acquisition is expected to be able to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the "High-Level Trigger" (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report HLT performances are shown for selections based on muons, electrons, photons, jets, missing transverse energy, tau leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported as well as relaxation criteria foreseen for a LHC startup instantaneous luminosity.

  10. The CMS High-Level Trigger

    Science.gov (United States)

    Covarelli, R.

    2009-12-01

    At the startup of the LHC, the CMS data acquisition is expected to be able to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the "High-Level Trigger" (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report HLT performances are shown for selections based on muons, electrons, photons, jets, missing transverse energy, τ leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported as well as relaxation criteria foreseen for a LHC startup instantaneous luminosity.

  11. The ALICE Dimuon Spectrometer High Level Trigger

    CERN Document Server

    Becker, B; Cicalo, Corrado; Das, Indranil; de Vaux, Gareth; Fearick, Roger; Lindenstruth, Volker; Marras, Davide; Sanyal, Abhijit; Siddhanta, Sabyasachi; Staley, Florent; Steinbeck, Timm; Szostak, Artur; Usai, Gianluca; Vilakazi, Zeblon

    2009-01-01

    The ALICE Dimuon Spectrometer High Level Trigger (dHLT) is an on-line processing stage whose primary function is to select interesting events that contain distinct physics signals from heavy resonance decays such as J/psi and Upsilon particles, amidst unwanted background events. It forms part of the High Level Trigger of the ALICE experiment, whose goal is to reduce the large data rate of about 25 GB/s from the ALICE detectors by an order of magnitude, without losing interesting physics events. The dHLT has been implemented as a software trigger within a high performance and fault tolerant data transportation framework, which is run on a large cluster of commodity compute nodes. To reach the required processing speeds, the system is built as a concurrent system with a hierarchy of processing steps. The main algorithms perform partial event reconstruction, starting with hit reconstruction on the level of the raw data received from the spectrometer. Then a tracking algorithm finds track candidates from the recon...

  12. The ATLAS High Level Trigger Steering Framework and the Trigger Configuration System.

    CERN Document Server

    Perez Cavalcanti, Tiago; The ATLAS collaboration

    2011-01-01

    The ATLAS detector system installed in the Large Hadron Collider (LHC) at CERN is designed to study proton-proton and nucleus-nucleus collisions with a maximum centre-of-mass energy of 14 TeV at a bunch collision rate of 40 MHz. In March 2010 the four LHC experiments saw the first proton-proton collisions at 7 TeV. Still within the year, a collision rate of nearly 10 MHz is expected. In ATLAS, events of potential physics interest are selected by a three-level trigger system, with a final recording rate of about 200 Hz. The first level (L1) is implemented in custom hardware; the two levels of the high level trigger (HLT) are software triggers, running on large farms of standard computers and network devices. Within the ATLAS physics program more than 500 trigger signatures are defined. The HLT tests each signature on each L1-accepted event; the test outcome is recorded for later analysis. The HLT-Steering is responsible for this. It foremost ensures the independent test of each signature, guaranteeing u...

  13. Advanced Functionalities for Highly Reliable Optical Networks

    DEFF Research Database (Denmark)

    An, Yi

    This thesis covers two research topics concerning optical solutions for networks, e.g. avionic systems. One is to identify the applications for silicon photonic devices for cost-effective solutions in short-range optical networks. The other one is to realise advanced functionalities in order ... to increase the availability of highly reliable optical networks. A cost-effective transmitter based on a directly modulated laser (DML) using a silicon micro-ring resonator (MRR) to enhance its modulation speed is proposed, analysed and experimentally demonstrated. A modulation speed enhancement from 10 Gbit ... interconnects and network-on-chips. A novel concept of all-optical protection switching scheme is proposed, where fault detection and protection trigger are all implemented in the optical domain. This scheme can provide ultra-fast establishment of the protection path resulting in a minimum loss of data...

  14. Tracking at High Level Trigger in CMS

    CERN Document Server

    Tosi, Mia

    2016-01-01

    The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with detector readout, offline storage and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of the physics objects as well as in the identification of b-jets and ...

  15. Test-retest reliability of myofascial trigger point detection in hip and thigh areas.

    Science.gov (United States)

    Rozenfeld, E; Finestone, A S; Moran, U; Damri, E; Kalichman, L

    2017-10-01

    Myofascial trigger points (MTrPs) are a primary source of pain in patients with musculoskeletal disorders. Nevertheless, they are frequently underdiagnosed. Reliable MTrP palpation is necessary for their diagnosis and treatment. The few studies that have examined intra-tester reliability of MTrP detection in the upper body provide preliminary evidence that MTrP palpation is reliable. Reliability tests for MTrP palpation on the lower limb have not yet been performed. To evaluate inter- and intra-tester reliability of MTrP recognition in hip and thigh muscles. Reliability study. 21 patients (15 males and 6 females, mean age 21.1 years) referred to the physical therapy clinic, 10 with knee or hip pain and 11 with pain in an upper limb, low back, shin or ankle. Two experienced physical therapists performed the examinations, blinded to the subjects' identity, medical condition and results of the previous MTrP evaluation. Each subject was evaluated four times, twice by each examiner, in a random order. Dichotomous findings included a palpable taut band, tenderness, referred pain, and relevance of referred pain to the patient's complaint. Based on these, a diagnosis of latent or active MTrPs was established. The evaluation was performed on both legs and included a total of 16 locations in the following muscles: rectus femoris (proximal), vastus medialis (middle and distal), vastus lateralis (middle and distal) and gluteus medius (anterior, posterior and distal). Inter- and intra-tester reliability (Cohen's kappa (κ)) values for single sites ranged from -0.25 to 0.77. Median intra-tester reliability was 0.45 and 0.46 for latent and active MTrPs, and median inter-tester reliability was 0.51 and 0.64 for latent and active MTrPs, respectively. The examination of the distal vastus medialis was the most reliable for latent and active MTrPs (intra-tester κ = 0.27-0.77, inter-tester κ = 0.77 and intra-tester κ = 0.53-0.72, inter-tester κ = 0.72, correspondingly).

  16. Multi-threaded algorithms for GPGPU in the ATLAS High Level Trigger

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00212700; The ATLAS collaboration

    2017-01-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located at the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼ 250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significa...

  17. The ATLAS trigger high-level trigger commissioning and operation during early data taking

    CERN Document Server

    Goncalo, R

    2008-01-01

    The ATLAS experiment is one of the two general-purpose experiments due to start operation soon at the Large Hadron Collider (LHC). The LHC will collide protons at a centre-of-mass energy of 14 TeV, with a bunch-crossing rate of 40 MHz. The ATLAS three-level trigger will reduce this input rate to match the foreseen offline storage capability of 100-200 Hz. After the Level 1 trigger, which is implemented in custom hardware, the High-Level Trigger (HLT) further reduces the rate from up to 100 kHz to the offline storage rate while retaining the most interesting physics. The HLT is implemented in software running in commercially available computer farms and consists of Level 2 and Event Filter. To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection. Data produced during LHC commissioning will be vital for calibrating and aligning sub-detectors, as well as for testing the ATLAS trigger and setting up t...

  18. Multi-threading in the ATLAS High-Level Trigger

    CERN Document Server

    Barton, Adam Edward; The ATLAS collaboration

    2018-01-01

    Over the next decade of LHC data-taking the instantaneous luminosity will reach up to 7.5 times the design value with over 200 interactions per bunch-crossing and will pose unprecedented challenges for the ATLAS trigger system. With the evolution of the CPU market to many-core systems, both the ATLAS offline reconstruction and High-Level Trigger (HLT) software will have to transition from a multi-process to a multithreaded processing paradigm in order not to exhaust the available physical memory of a typical compute node. The new multithreaded ATLAS software framework, AthenaMT, has been designed from the ground up to support both the offline and online use-cases with the aim to further harmonize the offline and trigger algorithms. The latter is crucial both in terms of maintenance effort and to guarantee the high trigger efficiency and rejection factors needed for the next two decades of data-taking. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum while...

  19. The ATLAS online High Level Trigger framework experience reusing offline software components in the ATLAS trigger

    CERN Document Server

    Wiedenmann, W

    2009-01-01

    Event selection in the ATLAS High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data flow environment. The ATLAS High Level Trigger (HLT) framework, based on the Gaudi and ATLAS Athena frameworks, forms the interface layer which allows the execution of the HLT selection and monitoring code within the online run control and data flow software. While such an approach provides a unified environment for trigger event selection across all of ATLAS, it also poses strict requirements on the reused software components in terms of performance, memory usage and stability. Experience of running the HLT selection software in the different environments and especially on large multi-node trigger farms has been gained in several commissioning periods using preloaded Monte Carlo events, in data taking peri...

  20. Global tracker for the ALICE high level trigger

    International Nuclear Information System (INIS)

    Vik, Thomas

    2006-01-01

    This thesis deals with two main topics. The first is the implementation and testing of a Kalman filter algorithm in the HLT (High Level Trigger) reconstruction code. This will perform the global tracking in the HLT, that is, merging tracklets and hits from the different sub-detectors in the central barrel detector. The second topic is a trigger mode of the HLT which uses the global tracking of particles through the TRD (Transition Radiation Detector), TPC (Time Projection Chamber) and the ITS (Inner Tracking System): the dielectron trigger. Global tracking: The Kalman filter algorithm has been introduced to the HLT tracking scheme. (Author)
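As an illustration of the predict/update cycle that a Kalman filter track fit iterates from one detector layer to the next, here is a minimal one-dimensional sketch (toy state model, noise values and measurements; unrelated to the actual HLT code):

```python
# Minimal 1-D Kalman filter: nearly-constant state observed with noise.
# q and r are toy process- and measurement-noise variances (assumptions).
def kalman_1d(measurements, q=1e-3, r=0.1):
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                 # predict: covariance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update: blend prediction and measurement
        p = (1.0 - k) * p         # updated covariance shrinks
        estimates.append(x)
    return estimates

est = kalman_1d([1.0, 1.2, 0.9, 1.1, 1.0])
```

A real track fit carries a multi-dimensional state (position, slope, curvature) and propagates it through a magnetic field model, but the predict/gain/update structure is the same.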

  1. Dedicated Trigger for Highly Ionising Particles at ATLAS

    CERN Document Server

    Katre, Akshay; The ATLAS collaboration

    2015-01-01

    In 2012, a novel strategy was designed to detect signatures of Highly Ionising Particles (HIPs) such as magnetic monopoles, dyons or Q-balls with ATLAS. A dedicated trigger was developed and deployed for proton-proton collisions at a centre of mass energy of 8 TeV. It uses the Transition Radiation Tracker (TRT) system, applying an algorithm distinct from standard tracking ones. The high threshold (HT) readout capability of the TRT is used to distinguish HIPs from other background processes. The trigger requires significantly lower energy depositions in the electromagnetic calorimeters and is thereby capable of probing a larger range of HIP masses and charges. A description of the algorithm for this newly developed trigger is presented, along with a comparative study of its performance during the 2012 data-taking period with respect to previous efforts.

  2. Data analysis at the CMS level-1 trigger: migrating complex selection algorithms from offline analysis and high-level trigger to the trigger electronics

    CERN Document Server

    Wulz, Claudia

    2017-01-01

    With ever increasing luminosity at the LHC, optimum online data selection is becoming more and more important. While in the case of some experiments (LHCb and ALICE) this task is being completely transferred to computer farms, the others -- ATLAS and CMS -- will not be able to do this in the medium-term future for technological, detector-related reasons. Therefore, these experiments pursue the complementary approach of migrating more and more of the offline and high-level trigger intelligence into the trigger electronics. The presentation illustrates how the level-1 trigger of the CMS experiment, and in particular its concluding stage, the so-called "Global Trigger", takes up this challenge.

  3. Performance of the CMS High Level Trigger

    CERN Document Server

    Perrotta, Andrea

    2015-01-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012, and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved trac...

  4. The ATLAS online High Level Trigger framework: Experience reusing offline software components in the ATLAS trigger

    International Nuclear Information System (INIS)

    Wiedenmann, Werner

    2010-01-01

    Event selection in the ATLAS High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data flow environment. The ATLAS High Level Trigger (HLT) framework based on the GAUDI and ATLAS ATHENA frameworks, forms the interface layer, which allows the execution of the HLT selection and monitoring code within the online run control and data flow software. While such an approach provides a unified environment for trigger event selection across all of ATLAS, it also poses strict requirements on the reused software components in terms of performance, memory usage and stability. Experience of running the HLT selection software in the different environments and especially on large multi-node trigger farms has been gained in several commissioning periods using preloaded Monte Carlo events, in data taking periods with cosmic events and in a short period with proton beams from LHC. The contribution discusses the architectural aspects of the HLT framework, its performance and its software environment within the ATLAS computing, trigger and data flow projects. Emphasis is also put on the architectural implications for the software by the use of multi-core processors in the computing farms and the experiences gained with multi-threading and multi-process technologies.

  5. Supervision of the ATLAS High Level Trigger System

    CERN Document Server

    Wheeler, S.; Meessen, C.; Qian, Z.; Touchard, F.; Negri, France A.; Zobernig, H.; CHEP 2003 Computing in High Energy Physics; Negri, France A.

    2003-01-01

    The ATLAS High Level Trigger (HLT) system provides software-based event selection after the initial LVL1 hardware trigger. It is composed of two stages, the LVL2 trigger and the Event Filter. The HLT is implemented as software tasks running on large processor farms. An essential part of the HLT is the supervision system, which is responsible for configuring, coordinating, controlling and monitoring the many hundreds of processes running in the HLT. A prototype implementation of the supervision system, using tools from the ATLAS Online Software system is presented. Results from scalability tests are also presented where the supervision system was shown to be capable of controlling over 1000 HLT processes running on 230 nodes.

  6. Transmission line transformer for reliable and low-jitter triggering of a railgap switch.

    Science.gov (United States)

    Verma, Rishi; Mishra, Ekansh; Sagar, Karuna; Meena, Manraj; Shyam, Anurag

    2014-09-01

    The performance of a railgap switch critically relies upon multichannel breakdown between the extended electrodes (rails) in order to ensure distributed current transfer along the electrode length and to minimize the switch inductance. The initiation of several simultaneous arc channels along the switch length depends on the gap triggering technique and on the rate at which the electric field changes within the gap. This paper presents design, construction, and output characteristics of a coaxial cable based three-stage transmission line transformer (TLT) that is capable of initiating multichannel breakdown in a high voltage, low inductance railgap switch. In each stage three identical lengths of URM67 coaxial cables have been used in parallel, and they have been wound in separate cassettes to enhance the isolation of the output of the transformer from the input. The cascaded output impedance of the TLT is ~50 Ω. Along with multi-channel formation over the complete length of the electrode rails, significant reduction in jitter (≤2 ns) and conduction delay (≤60 ns) has been observed with the large-amplitude (~80 kV), high-dV/dt (~6 kV/ns) pulse produced by the indigenously developed TLT-based trigger generator. The superior performance of the TLT over a conventional pulse transformer for railgap triggering has been compared and demonstrated experimentally.
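The quoted ~50 Ω cascaded output impedance follows from simple impedance bookkeeping, assuming the usual transmission line transformer topology (each stage's cables in parallel, stage inputs fed in parallel, stage outputs stacked in series) and taking URM67 as a nominally 50 Ω coaxial cable; only the 3x3 arrangement and the ~50 Ω figure come from the abstract, the rest is an idealised sketch:

```python
from fractions import Fraction

# Idealised TLT impedance bookkeeping (lossless, matched lines assumed).
z_cable = Fraction(50)                 # ohm, one URM67 cable (assumed nominal)
n_stages = 3
cables_per_stage = 3

z_stage = z_cable / cables_per_stage   # parallel cables within one stage
z_out = n_stages * z_stage             # series-connected stage outputs
z_in = z_stage / n_stages              # parallel-connected stage inputs
voltage_gain = n_stages                # ideal open-circuit voltage gain

print(float(z_out))  # 50.0, consistent with the quoted ~50 ohm
```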

  7. CMS High Level Trigger Timing Measurements

    International Nuclear Information System (INIS)

    Richardson, Clint

    2015-01-01

    The two-level trigger system employed by CMS consists of the Level 1 (L1) Trigger, which is implemented using custom-built electronics, and the High Level Trigger (HLT), a farm of commercial CPUs running a streamlined version of the offline CMS reconstruction software. The operational L1 output rate of 100 kHz, together with the number of CPUs in the HLT farm, imposes a fundamental constraint on the amount of time available for the HLT to process events. Exceeding this limit impacts the experiment's ability to collect data efficiently. Hence, there is a critical need to characterize the performance of the HLT farm as well as the algorithms run prior to start up in order to ensure optimal data taking. Additional complications arise from the fact that the HLT farm consists of multiple generations of hardware and there can be subtleties in machine performance. We present our methods of measuring the timing performance of the CMS HLT, including the challenges of making such measurements. Results for the performance of various Intel Xeon architectures from 2009-2014 and different data taking scenarios are also presented. (paper)
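The constraint described in this abstract amounts to a simple throughput budget; a sketch of the arithmetic (the 100 kHz L1 rate is quoted above, while the farm size below is an invented round number, not the actual CMS farm):

```python
# Back-of-the-envelope HLT time budget: with n cores each processing one
# event at a time, the farm keeps up with the input only if the mean
# processing time per event stays below n / rate.
l1_rate_hz = 100e3            # L1 output rate quoted in the abstract
n_hlt_cores = 13000           # hypothetical farm size, for illustration

budget_s = n_hlt_cores / l1_rate_hz
print(f"{budget_s * 1e3:.0f} ms per event")  # 130 ms
```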

  8. Inter-Rater Reliability of Provider Interpretations of Irritable Bowel Syndrome Food and Symptom Journals.

    Science.gov (United States)

    Zia, Jasmine; Chung, Chia-Fang; Xu, Kaiyuan; Dong, Yi; Schenk, Jeanette M; Cain, Kevin; Munson, Sean; Heitkemper, Margaret M

    2017-11-04

    There are currently no standardized methods for identifying trigger food(s) from irritable bowel syndrome (IBS) food and symptom journals. The primary aim of this study was to assess the inter-rater reliability of providers' interpretations of IBS journals. A second aim was to describe whether these interpretations varied for each patient. Eight providers reviewed 17 IBS journals and rated how likely key food groups (fermentable oligo-di-monosaccharides and polyols, high-calorie, gluten, caffeine, high-fiber) were to trigger IBS symptoms for each patient. Agreement of trigger food ratings was calculated using Krippendorff's α-reliability estimate. Providers were also asked to write down recommendations they would give to each patient. Estimates of agreement of trigger food likelihood ratings were poor (average α = 0.07). Most providers gave similar trigger food likelihood ratings for over half the food groups. Four providers gave the exact same written recommendation(s) (range 3-7) to over half the patients. Inter-rater reliability of provider interpretations of IBS food and symptom journals was poor. Providers favored certain trigger food likelihood ratings and written recommendations. This supports the need for a more standardized method for interpreting these journals and/or more rigorous techniques to accurately identify personalized IBS food triggers.
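The agreement statistic used in this study, Krippendorff's α, can be sketched for nominal ratings as follows (a toy pure-Python implementation for illustration only, not the estimator the authors ran; real analyses should rely on a vetted statistics package):

```python
from collections import Counter
from itertools import permutations

# Krippendorff's alpha for nominal data via the coincidence matrix.
def krippendorff_alpha_nominal(units):
    """units: one list of ratings per unit (e.g. per patient/food group);
    units with fewer than two ratings carry no agreement information."""
    coincidence = Counter()
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        for a, b in permutations(range(m), 2):
            coincidence[(ratings[a], ratings[b])] += 1.0 / (m - 1)
    totals = Counter()
    for (c, _), w in coincidence.items():
        totals[c] += w
    n = sum(totals.values())
    observed = sum(w for (c, k), w in coincidence.items() if c != k)
    expected = sum(totals[c] * totals[k]
                   for c, k in permutations(totals, 2)) / (n - 1)
    return 1.0 - observed / expected

# Perfect agreement between two raters over three units:
print(krippendorff_alpha_nominal([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

Values near 1 indicate strong agreement and values near 0 indicate agreement no better than chance, which is why the reported average α = 0.07 is described as poor.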

  9. Inter-Rater Reliability of Provider Interpretations of Irritable Bowel Syndrome Food and Symptom Journals

    Directory of Open Access Journals (Sweden)

    Jasmine Zia

    2017-11-01

    Full Text Available There are currently no standardized methods for identifying trigger food(s from irritable bowel syndrome (IBS food and symptom journals. The primary aim of this study was to assess the inter-rater reliability of providers’ interpretations of IBS journals. A second aim was to describe whether these interpretations varied for each patient. Eight providers reviewed 17 IBS journals and rated how likely key food groups (fermentable oligo-di-monosaccharides and polyols, high-calorie, gluten, caffeine, high-fiber were to trigger IBS symptoms for each patient. Agreement of trigger food ratings was calculated using Krippendorff’s α-reliability estimate. Providers were also asked to write down recommendations they would give to each patient. Estimates of agreement of trigger food likelihood ratings were poor (average α = 0.07. Most providers gave similar trigger food likelihood ratings for over half the food groups. Four providers gave the exact same written recommendation(s (range 3–7 to over half the patients. Inter-rater reliability of provider interpretations of IBS food and symptom journals was poor. Providers favored certain trigger food likelihood ratings and written recommendations. This supports the need for a more standardized method for interpreting these journals and/or more rigorous techniques to accurately identify personalized IBS food triggers.

  10. Diagnostic Systems and Resources utilization of the ATLAS High Level Trigger

    CERN Document Server

    Sidoti, A; The ATLAS collaboration; Ospanov, R

    2010-01-01

    Since the LHC started colliding protons in December 2009, the ATLAS trigger has operated very successfully with a collision rate which has increased by several orders of magnitude. The trigger monitoring and data quality infrastructure was essential to this success. We describe the software tools used to monitor the trigger system performance and assess the overall quality of the trigger selection during collisions running. ATLAS has broad physics goals which require a large number of different active triggers due to complex event topologies, requiring sophisticated software structures and concepts. The trigger of the ATLAS experiment is built as a three-level system. The first level is realized in hardware while the high level triggers (HLT) are software based and run on large PC farms. The trigger reduces the design bunch-crossing rate of 40 MHz to an average event rate of about 200 Hz for storage. Since the ATLAS detector is a general purpose detector, the trigger must be sensitive to a large numb...

  11. A track reconstructing low-latency trigger processor for high-energy physics

    International Nuclear Information System (INIS)

    Cuveland, Jan de

    2009-01-01

    The detection and analysis of the large number of particles emerging from high-energy collisions between atomic nuclei is a major challenge in experimental heavy-ion physics. Efficient trigger systems help to focus the analysis on relevant events. A primary objective of the Transition Radiation Detector of the ALICE experiment at the LHC is to trigger on high-momentum electrons. In this thesis, a trigger processor is presented that employs massive parallelism to perform the required online event reconstruction within 2 μs to contribute to the Level-1 trigger decision. Its three-stage hierarchical architecture comprises 109 nodes based on FPGA technology. Ninety processing nodes receive data from the detector front-end at an aggregate net bandwidth of 2.16 Tbit/s via 1080 optical links. Using specifically developed components and interconnections, the system combines high bandwidth with minimum latency. The employed tracking algorithm three-dimensionally reassembles the track segments found in the detector's drift chambers based on explicit value comparisons, calculates the momentum of the originating particles from the course of the reconstructed tracks, and finally leads to a trigger decision. The architecture is capable of processing up to 20 000 track segments in less than 2 μs with high detection efficiency and reconstruction precision for high-momentum particles. As a result, this thesis shows how a trigger processor performing complex online track reconstruction within tight real-time requirements can be realized. The presented hardware has been built and is in continuous data taking operation in the ALICE experiment. (orig.)
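The aggregate front-end bandwidth quoted above corresponds to a simple per-link figure, which can be checked directly from the numbers in the abstract:

```python
# Per-link bandwidth implied by 2.16 Tbit/s aggregate over 1080 links.
aggregate_bit_s = 2.16e12
n_links = 1080

per_link_bit_s = aggregate_bit_s / n_links
print(per_link_bit_s / 1e9)  # 2.0 (Gbit/s per optical link)
```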

  12. A track reconstructing low-latency trigger processor for high-energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Cuveland, Jan de

    2009-09-17

    The detection and analysis of the large number of particles emerging from high-energy collisions between atomic nuclei is a major challenge in experimental heavy-ion physics. Efficient trigger systems help to focus the analysis on relevant events. A primary objective of the Transition Radiation Detector of the ALICE experiment at the LHC is to trigger on high-momentum electrons. In this thesis, a trigger processor is presented that employs massive parallelism to perform the required online event reconstruction within 2 μs to contribute to the Level-1 trigger decision. Its three-stage hierarchical architecture comprises 109 nodes based on FPGA technology. Ninety processing nodes receive data from the detector front-end at an aggregate net bandwidth of 2.16 Tbit/s via 1080 optical links. Using specifically developed components and interconnections, the system combines high bandwidth with minimum latency. The employed tracking algorithm three-dimensionally reassembles the track segments found in the detector's drift chambers based on explicit value comparisons, calculates the momentum of the originating particles from the course of the reconstructed tracks, and finally leads to a trigger decision. The architecture is capable of processing up to 20 000 track segments in less than 2 μs with high detection efficiency and reconstruction precision for high-momentum particles. As a result, this thesis shows how a trigger processor performing complex online track reconstruction within tight real-time requirements can be realized. The presented hardware has been built and is in continuous data taking operation in the ALICE experiment. (orig.)

  13. High-level trigger system for the LHC ALICE experiment

    CERN Document Server

    Bramm, R; Lien, J A; Lindenstruth, V; Loizides, C; Röhrich, D; Skaali, B; Steinbeck, T M; Stock, Reinhard; Ullaland, K; Vestbø, A S; Wiebalck, A

    2003-01-01

    The central detectors of the ALICE experiment at LHC will produce a data size of up to 75 MB/event at an event rate of less than approximately 200 Hz, resulting in a data rate of about 15 GB/s. Online processing of the data is necessary in order to select interesting (sub)events ("High Level Trigger"), or to compress data efficiently by modeling techniques. Processing this data requires a massive parallel computing system (High Level Trigger System). The system will consist of a farm of clustered SMP-nodes based on off-the-shelf PCs connected with a high bandwidth low latency network.

  14. Highly Efficient Moisture-Triggered Nanogenerator Based on Graphene Quantum Dots.

    Science.gov (United States)

    Huang, Yaxin; Cheng, Huhu; Shi, Gaoquan; Qu, Liangti

    2017-11-08

    A high-performance moisture-triggered nanogenerator is fabricated by using graphene quantum dots (GQDs) as the active material. GQDs are prepared by direct oxidation and etching of natural graphite powder, which have small sizes of 2-5 nm and abundant oxygen-containing functional groups. After the treatment by electrochemical polarization, the GQD-based moisture-triggered nanogenerator can deliver a high voltage up to 0.27 V under 70% relative humidity variation, and a power density of 1.86 mW cm⁻² with an optimized load resistor. The latter value is much higher than the moisture-electric power generators reported previously. The GQD moisture-triggered nanogenerator is promising for self-powered electronics and miniature sensors.

  15. Tracking and flavour tagging selection in the ATLAS High Level Trigger

    CERN Document Server

    Calvetti, Milene; The ATLAS collaboration

    2017-01-01

    In high-energy physics experiments, track based selection in the online environment is crucial for the detection of physics processes of interest for further study. This is of particular importance at the Large Hadron Collider (LHC), where the increasingly harsh collision environment is challenging participating experiments to improve the performance of their online selection. Principal among these challenges is the increasing number of interactions per bunch crossing, known as pileup. In the ATLAS experiment the challenge has been addressed with multiple strategies. Firstly, individual trigger groups focusing on specific physics objects have implemented novel algorithms which make use of the detailed tracking and vertexing performed within the trigger to improve rejection without losing efficiency. Secondly, since 2015 all trigger areas have also benefited from a new high performance inner detector software tracking system implemented in the High Level Trigger. Finally, performance will be further enhanced i...

  16. ATLAS Trigger and Data Acquisition Upgrades for High Luminosity LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00439268; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment at CERN is planning a second phase of upgrades to prepare for the "High Luminosity LHC", a 4th major run due to start in 2026. In order to deliver an order of magnitude more data than previous runs, 14 TeV protons will collide with an instantaneous luminosity of 7.5 × 10^34 cm^−2 s^−1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this extreme scenario is essential to realise the physics programme, it is a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. Initial upgrade designs for the trigger and data acquisition system are shown, including the real time low latency hardware trigger, hardware-based tracking, the high throughput data acquisition system and the commodity hardware and software-based data handling and event filtering. The motivation, overall architecture and expected ...

  17. ATLAS Trigger and Data Acquisition Upgrades for High Luminosity LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00421104; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment at CERN is planning a second phase of upgrades to prepare for the "High Luminosity LHC", a 4th major run due to start in 2026. In order to deliver an order of magnitude more data than previous runs, 14 TeV protons will collide with an instantaneous luminosity of 7.5 × 10^34 cm^−2 s^−1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this extreme scenario is essential to realise the physics programme, it is a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. Initial upgrade designs for the trigger and data acquisition system are shown, including the real time low latency hardware trigger, hardware-based tracking, the high throughput data acquisition system and the commodity hardware and software-based data handling and event filtering. The motivation, overall architecture an...

  18. ATLAS Trigger and Data Acquisition Upgrades for High Luminosity LHC

    CERN Document Server

    George, Simon; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment at CERN is planning a second phase of upgrades to prepare for the "High Luminosity LHC", a 4th major run due to start in 2026. In order to deliver an order of magnitude more data than previous runs, 14 TeV protons will collide with an instantaneous luminosity of 7.5 × 10^34 cm^−2 s^−1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this extreme scenario is essential to realise the physics programme, it is a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. Initial upgrade designs for the trigger and data acquisition system are shown, including the real time low latency hardware trigger, hardware-based tracking, the high throughput data acquisition system and the commodity hardware and software-based data handling and event filtering. The motivation, overall architecture and ...

  19. ATLAS Trigger and Data Acquisition Upgrades for High Luminosity LHC

    CERN Document Server

    Balunas, William Keaton; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment at CERN is planning a second phase of upgrades to prepare for the "High Luminosity LHC", a 4th major run due to start in 2026. In order to deliver an order of magnitude more data than previous runs, 14 TeV protons will collide with an instantaneous luminosity of 7.5 × 10^34 cm^−2 s^−1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this extreme scenario is essential to realise the physics programme, it is a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. Initial upgrade designs for the trigger and data acquisition system are shown, including the real time low latency hardware trigger, hardware-based tracking, the high throughput data acquisition system and the commodity hardware and software-based data handling and event filtering. The motivation, overall architectur...

  20. A Track Reconstructing Low-latency Trigger Processor for High-energy Physics

    CERN Document Server

    AUTHOR|(CDS)2067518

    2009-01-01

    The detection and analysis of the large number of particles emerging from high-energy collisions between atomic nuclei is a major challenge in experimental heavy-ion physics. Efficient trigger systems help to focus the analysis on relevant events. A primary objective of the Transition Radiation Detector of the ALICE experiment at the LHC is to trigger on high-momentum electrons. In this thesis, a trigger processor is presented that employs massive parallelism to perform the required online event reconstruction within 2 µs to contribute to the Level-1 trigger decision. Its three-stage hierarchical architecture comprises 109 nodes based on FPGA technology. Ninety processing nodes receive data from the detector front-end at an aggregate net bandwidth of 2.16 Tbps via 1080 optical links. Using specifically developed components and interconnections, the system combines high bandwidth with minimum latency. The employed tracking algorithm three-dimensionally reassembles the track segments found in the detector's dr...

  1. A Novel in situ Trigger Combination Method

    International Nuclear Information System (INIS)

    Buzatu, Adrian; Warburton, Andreas; Krumnack, Nils; Yao, Wei-Ming

    2012-01-01

    Searches for rare physics processes using particle detectors in high-luminosity colliding hadronic beam environments require the use of multi-level trigger systems to reject colossal background rates in real time. In analyses like the search for the Higgs boson, there is a need to maximize the signal acceptance by combining multiple different trigger chains when forming the offline data sample. In such statistically limited searches, datasets are often amassed over periods of several years, during which the trigger characteristics evolve and their performance can vary significantly. Reliable production cross-section measurements and upper limits must take into account a detailed understanding of the effective trigger inefficiency for every selected candidate event. We present as an example the complex situation of three trigger chains, based on missing energy and jet energy, to be combined in the context of the search for the Higgs (H) boson produced in association with a W boson at the Collider Detector at Fermilab (CDF). We briefly review the existing techniques for combining triggers, namely the inclusion, division, and exclusion methods. We introduce and describe a novel fourth in situ method whereby, for each candidate event, only the trigger chain with the highest a priori probability of selecting the event is considered. The in situ combination method has advantages of scalability to large numbers of differing trigger chains and of insensitivity to correlations between triggers. We compare the inclusion and in situ methods for signal event yields in the CDF WH search.
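The in situ method described above considers, for each candidate event, only the trigger chain with the highest a priori selection probability. A minimal sketch of that decision rule, assuming hypothetical per-chain "fired" predicates and efficiency functions (the chain names, thresholds and efficiency shapes below are illustrative placeholders, not the CDF analysis):

```python
# Illustrative sketch of the "in situ" trigger-combination method: for each
# candidate event, only the chain with the highest a priori selection
# probability (efficiency) is considered; the event enters the sample with
# weight 1/efficiency if that chain actually fired, and is dropped otherwise.
def in_situ_weight(event, chains):
    """chains: name -> (fired(event) -> bool, efficiency(event) -> float)."""
    # Pick the chain with the highest a priori efficiency for this event.
    best = max(chains, key=lambda name: chains[name][1](event))
    fired, eff = chains[best]
    e = eff(event)
    return 1.0 / e if fired(event) and e > 0 else 0.0

# Toy usage: two chains with simple, made-up efficiency models.
chains = {
    "MET": (lambda ev: ev["met"] > 35, lambda ev: min(1.0, ev["met"] / 50)),
    "JET": (lambda ev: ev["jet"] > 25, lambda ev: min(1.0, ev["jet"] / 40)),
}
w = in_situ_weight({"met": 40, "jet": 30}, chains)  # MET is a priori best
```

Because exactly one chain is consulted per event, the method is insensitive to correlations between chains, which is the scalability advantage the abstract highlights.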

  2. A high-speed DAQ framework for future high-level trigger and event building clusters

    International Nuclear Information System (INIS)

    Caselle, M.; Perez, L.E. Ardila; Balzer, M.; Dritschler, T.; Kopmann, A.; Mohr, H.; Rota, L.; Vogelgesang, M.; Weber, M.

    2017-01-01

    Modern data acquisition and trigger systems require a throughput of several GB/s and latencies of the order of microseconds. To satisfy such requirements, a heterogeneous readout system based on FPGA readout cards and GPU-based computing nodes coupled by InfiniBand has been developed. The incoming data from the back-end electronics are delivered directly into the internal memory of the GPUs through a dedicated peer-to-peer PCIe communication. High-performance DMA engines have been developed for direct communication between FPGAs and GPUs using 'DirectGMA' (AMD) and 'GPUDirect' (NVIDIA) technologies. The proposed infrastructure is a candidate for future generations of event building clusters, high-level trigger filter farms and low-level trigger systems. In this paper the heterogeneous FPGA-GPU architecture is presented and its performance discussed.

  3. Multi-Threaded Algorithms for GPGPU in the ATLAS High Level Trigger

    Science.gov (United States)

    Conde Muíño, P.; ATLAS Collaboration

    2017-10-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼ 250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant challenge that will only grow with future LHC upgrades. During the LHC data taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further to 7.5 times the design value in 2026 following LHC and ATLAS upgrades. Corresponding improvements in the speed of the reconstruction code will be needed to provide the required trigger selection power within affordable computing resources. Key factors determining the potential benefit of including GPGPU as part of the HLT processor farm are: the relative speed of the CPU and GPGPU algorithm implementations; the relative execution times of the GPGPU algorithms and serial code remaining on the CPU; the number of GPGPUs required, and the relative financial cost of the selected GPGPUs. We give a brief overview of the algorithms implemented and present new measurements that compare the performance of various configurations exploiting GPGPU cards.
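The input rate and per-event budget quoted above directly set the scale of the farm: the number of events in flight at any instant, and hence roughly the number of cores needed, is rate × time (Little's law). A quick check with the abstract's numbers:

```python
# Rough farm-sizing arithmetic implied by the abstract: at 100 kHz input
# and an average per-event processing time of ~250 ms, the number of events
# being processed concurrently (~ CPU cores needed) is rate * time.
input_rate_hz = 100_000   # Level-1 acceptance rate
avg_time_s = 0.250        # average per-event processing time
cores_in_flight = input_rate_hz * avg_time_s
print(cores_in_flight)    # -> 25000.0
```

This is of the same order as the ~30,000-core farm size quoted elsewhere in these records.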

  4. Optically triggered high voltage switch network and method for switching a high voltage

    Science.gov (United States)

    El-Sharkawi, Mohamed A.; Andexler, George; Silberkleit, Lee I.

    1993-01-19

    An optically triggered solid state switch and method for switching a high voltage electrical current. A plurality of solid state switches (350) are connected in series for controlling electrical current flow between a compensation capacitor (112) and ground in a reactive power compensator (50, 50') that monitors the voltage and current flowing through each of three distribution lines (52a, 52b and 52c), which are supplying three-phase power to one or more inductive loads. An optical transmitter (100) controlled by the reactive power compensation system produces light pulses that are conveyed over optical fibers (102) to a switch driver (110') that includes a plurality of series connected optical trigger circuits (288). Each of the optical trigger circuits controls a pair of the solid state switches and includes a plurality of series connected resistors (294, 326, 330, and 334) that equalize or balance the potential across the plurality of trigger circuits. The trigger circuits are connected to one of the distribution lines through a trigger capacitor (340). In each switch driver, the light signals activate a phototransistor (300) so that an electrical current flows from one of the energy reservoir capacitors through a pulse transformer (306) in the trigger circuit, producing gate signals that turn on the pair of serially connected solid state switches (350).

  5. Optically triggered high voltage switch network and method for switching a high voltage

    Energy Technology Data Exchange (ETDEWEB)

    El-Sharkawi, Mohamed A. (Renton, WA); Andexler, George (Everett, WA); Silberkleit, Lee I. (Mountlake Terrace, WA)

    1993-01-19

    An optically triggered solid state switch and method for switching a high voltage electrical current. A plurality of solid state switches (350) are connected in series for controlling electrical current flow between a compensation capacitor (112) and ground in a reactive power compensator (50, 50') that monitors the voltage and current flowing through each of three distribution lines (52a, 52b and 52c), which are supplying three-phase power to one or more inductive loads. An optical transmitter (100) controlled by the reactive power compensation system produces light pulses that are conveyed over optical fibers (102) to a switch driver (110') that includes a plurality of series connected optical trigger circuits (288). Each of the optical trigger circuits controls a pair of the solid state switches and includes a plurality of series connected resistors (294, 326, 330, and 334) that equalize or balance the potential across the plurality of trigger circuits. The trigger circuits are connected to one of the distribution lines through a trigger capacitor (340). In each switch driver, the light signals activate a phototransistor (300) so that an electrical current flows from one of the energy reservoir capacitors through a pulse transformer (306) in the trigger circuit, producing gate signals that turn on the pair of serially connected solid state switches (350).

  6. The trigger supervisor: Managing triggering conditions in a high energy physics experiment

    International Nuclear Information System (INIS)

    Wadsworth, B.; Lanza, R.; LeVine, M.J.; Scheetz, R.A.; Videbaek, F.

    1987-01-01

    A trigger supervisor, implemented in VME-bus hardware, is described, which enables the host computer to dynamically control and monitor the trigger configuration for acquiring data from multiple detector partitions in a complex experiment

  7. Study for a failsafe trigger generation system for the Large Hadron Collider beam dump kicker magnets

    CERN Document Server

    Rampl, M

    1999-01-01

    The 27 km particle accelerator Large Hadron Collider (LHC), which will be completed at the European Laboratory for Particle Physics (CERN) in 2005, will operate with extremely high beam energies (~334 MJ per beam). Since the equipment, and in particular the superconducting magnets, must be protected from damage caused by these high-energy beams, the beam dump must be able to absorb this energy very reliably at every stage of operation. The kicker magnets that extract the particles from the accelerator are synchronised with the beam by the trigger generation system. This thesis is a first study of this electronic module and its functions. A special synchronisation circuit and a very reliable electronic switch were developed. Most functions were implemented in a Gate-Array to improve the reliability and to facilitate modifications during the test stage. This study also comprises the complete concept for the prototype of the trigger generation system. During all project stages reliability was always the main determin...

  8. Topological trigger device using scintillating fibers and position-sensitive photomultipliers

    Energy Technology Data Exchange (ETDEWEB)

    Kuroda, Keiichi; Dufournaud, J; Sillou, D [Laboratoire d'Annecy-le-Vieux de Physique des Particules (LAPP), 74 (France); Agoritsas, V [European Organization for Nuclear Research, Geneva (Switzerland); Bystricky, G; Lehar, F; Lesquen, A de [CEN-Saclay, 91 - Gif-sur-Yvette (France); Giacomich, R; Pauletta, G; Penzo, A; Salvato, G; Schiavon, P; Villari, A [INFN, Messina (Italy); INFN, Trieste (Italy); INFN, Udine (Italy); Gorin, A M; Meschanin, A P; Nurushev, S B; Rakhmatov, V E; Rykalin, V L; Solovyanov, V L; Vasil'chencko, V G [Institute for High Energy Physics, Serpukhov (USSR); Oshima, N; Yamada, R [Fermi National Accelerator Lab., Batavia, IL (USA); Takeutchi, F [Kyoto-Sanyo Univ., Kyoto (Japan); Yoshida, T [Osaka City Univ. (Japan); Akchurin, N; Onel, Y; Newsom, C

    1991-07-01

    An approach to a high-quality Level-1 Trigger is investigated on the basis of a topological trigger device. It will be realized by using scintillating fibers and position-sensitive photomultipliers, both considered potential candidates for new detector components thanks to their excellent time characteristics and high radiation resistance. The device is characterized in particular by its simple concept and reliable operation supported by the mature technologies employed. The major interests of such a scheme under LHC environments reside in its capability of selecting high p⊥ tracks in real time, its optional immunity against low p⊥ tracks and loopers, as well as its effective links to other associated devices in the complex of a vertex detector. (orig.).

  9. Tracking and flavour tagging selection in the ATLAS High Level Trigger

    CERN Document Server

    Calvetti, Milene; The ATLAS collaboration

    2017-01-01

    In high-energy physics experiments, track based selection in the online environment is crucial for the efficient real-time selection of the rare physics processes of interest. This is of particular importance at the Large Hadron Collider (LHC), where the increasingly harsh collision environment is challenging the experiments to improve the performance of their online selection. Principal among these challenges is the increasing number of interactions per bunch crossing, known as pileup. In the ATLAS experiment the challenge has been addressed with multiple strategies. Firstly, specific trigger objects have been improved by building algorithms using detailed tracking and vertexing in specific detector regions to improve background rejection without losing signal efficiency. Secondly, since 2015 all trigger areas have benefited from a new high performance Inner Detector (ID) software tracking system implemented in the High Level Trigger. Finally, performance will be further enhanced in future by the installation...

  10. Exact reliability quantification of highly reliable systems with maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Bris, Radim, E-mail: radim.bris@vsb.c [VSB-Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Department of Applied Mathematics, 17. listopadu 15, 70833 Ostrava-Poruba (Czech Republic)

    2010-12-15

    When a system is composed of highly reliable elements, exact reliability quantification may be problematic, because computer accuracy is limited. Inaccuracy can arise in different ways: for example, an error may be made when subtracting two numbers that are very close to each other, or in the process of summing many very different numbers, etc. The basic objective of this paper is to find a procedure which eliminates errors made by a PC when calculations close to an error limit are executed. The highly reliable system is represented by a directed acyclic graph composed of terminal nodes, i.e. highly reliable input elements, internal nodes representing subsystems, and edges that bind all of these nodes. Three admissible unavailability models of terminal nodes are introduced, including both corrective and preventive maintenance. The algorithm for exact unavailability calculation of terminal nodes is implemented in MATLAB, a high-performance language for technical computing. The system unavailability quantification procedure applied to a graph structure, which considers both independent and dependent (i.e. repeatedly occurring) terminal nodes, is based on a combinatorial principle. This principle requires summation of many very different non-negative numbers, which may be a source of inaccuracy. That is why another algorithm, for exact summation of such numbers, is designed in the paper. The summation procedure uses a special number system with base 2^32. Computational efficiency of the new computing methodology is compared with advanced simulation software. Various calculations on systems from references are performed to emphasize the merits of the methodology.
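The summation problem the abstract describes, adding many non-negative numbers of wildly different magnitudes without rounding error, can be illustrated compactly. The paper builds a custom number system with base 2^32; the sketch below gets the same effect with Python's exact `Fraction` arithmetic, so it is an analogy to the approach rather than the paper's algorithm:

```python
# Illustrative sketch of exact summation of widely differing non-negative
# numbers. Each float converts to its exact binary rational value, so the
# accumulation involves no rounding at all (analogous in spirit to the
# paper's base-2**32 number system, not its actual implementation).
from fractions import Fraction

def exact_sum(values):
    return sum(Fraction(v) for v in values)

vals = [1e-30, 1.0, 1e-30]
naive = sum(vals)        # ordinary float summation: the tiny terms vanish
exact = exact_sum(vals)  # keeps them exactly: 1 + 2*Fraction(1e-30)
```

With ordinary floats, 1e-30 is far below the spacing of representable numbers near 1.0 and is silently lost; the exact accumulator preserves it.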

  11. Quark fragmentation and trigger side momentum distributions in high-p_T processes

    International Nuclear Information System (INIS)

    Antolin, J.; Azcoiti, V.; Bravo, J.R.; Alonso, J.L.; Cruz, A.; Ringland, G.A.

    1979-11-01

    It has been widely argued that the experimental evidence concerning the momentum accompanying high-p_T triggers is a grave problem for models which take the trigger hadron to be a quark fragment. It is claimed that the trigger hadron takes much too large a fraction (z_c) of the jet momentum for the trigger side jet to be a quark. The jet momentum is not directly measured, but deduced from the derivative of the momentum (p_x) accompanying the trigger with respect to the trigger transverse momentum p_T^t. This argument is shown to be unsafe. Using both an approximate analytic approach to illustrate the physics and subsequently a full numerical computation, it is proved that the deduction of the fractional momentum accompanying the trigger, 1/z_c − 1, from dp_x/dp_T^t is not correct. Further, it is shown that models which do take the trigger to be a quark fragment are essentially in agreement with the data on trigger side momentum distributions. A surprising prediction of the present analysis is that p_x should be approximately constant for p_T^t ≥ 6 GeV/c. (author)

  12. TRIGGER

    CERN Multimedia

    Roberta Arcidiacono

    2013-01-01

    Trigger Studies Group (TSG) The Trigger Studies Group has just concluded its third 2013 workshop, where all POGs presented the improvements to the physics object reconstruction, and all PAGs have shown their plans for Trigger development aimed at the 2015 High Level Trigger (HLT) menu. The Strategy for Trigger Evolution And Monitoring (STEAM) group is responsible for Trigger menu development, path timing, Trigger performance studies coordination, HLT offline DQM as well as HLT release, menu and conditions validation – this last task in collaboration with PdmV (Physics Data and Monte Carlo Validation group). In the last months the group has delivered several HLT rate estimates and comparisons, using the available data and Monte Carlo samples. The studies were presented at the Trigger workshops in September and December, and STEAM has contacted POGs and PAGs to understand the origin of the discrepancies observed between 8 TeV data and Monte Carlo simulations. The most recent results show what the...

  13. A video event trigger for high frame rate, high resolution video technology

    Science.gov (United States)

    Williams, Glenn L.

    1991-12-01

    When video replaces film, the digitized video data accumulate very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High-capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long-term or short-term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
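The pre-trigger/post-trigger storage scheme described above can be sketched as a fixed-size ring buffer that is continuously overwritten until the trigger fires, after which the buffered frames plus the next few frames are archived. The frame representation and trigger test below are hypothetical placeholders, not the hardware described in the record:

```python
from collections import deque

# Sketch of pre-/post-trigger frame capture: frames circulate through a
# ring buffer until the trigger fires; then the buffered (pre-trigger)
# frames, the triggering frame, and the next `post` frames are archived.
def capture(frames, trigger, pre=4, post=4):
    ring = deque(maxlen=pre)   # continuously overwritten pre-trigger store
    it = iter(frames)
    for frame in it:
        if trigger(frame):
            archived = list(ring) + [frame]
            for _ in range(post):        # keep `post` frames after the event
                try:
                    archived.append(next(it))
                except StopIteration:
                    break
            return archived
        ring.append(frame)
    return []                  # no event: nothing is stored

# Toy usage: "activity" is any frame value above a threshold.
clip = capture([0, 0, 0, 0, 0, 9, 0, 0, 0, 0], trigger=lambda f: f > 5,
               pre=3, post=2)
print(clip)  # -> [0, 0, 0, 9, 0, 0]
```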

  14. The development of high-voltage repetitive low-jitter corona stabilized triggered switch

    Science.gov (United States)

    Geng, Jiuyuan; Yang, Jianhua; Cheng, Xinbing; Yang, Xiao; Chen, Rong

    2018-04-01

    The high-power switch plays an important part in pulsed power systems. With the trend of pulsed power technology toward modularization, miniaturization, and accurate control, higher requirements on the electrical triggering and jitter of the switch have been put forward. A high-power, low-jitter corona-stabilized triggered switch (CSTS) is designed in this paper. This kind of CSTS is based on the corona stabilization mechanism, and it can be used as the main switch of an intense electron-beam accelerator (IEBA). Its main features are the use of an annular trigger electrode instead of a traditional needle-like trigger electrode, main and side trigger rings to fix the discharging channels, and an SF6/N2 gas mixture as its operating gas. In this paper, the strength of the local field enhancement was changed by varying the trigger electrode protrusion length Dp. The differences in self-breakdown voltage and its stability, delay time jitter, trigger requirements, and operating range of the switch were compared. Then the effect of different SF6/N2 mixture ratios on switch performance was explored. The experimental results show that when the SF6 fraction is 15% at a pressure of 0.2 MPa, the hold-off voltage of the switch is 551 kV, the operating range is 46.4%-93.5% of the self-breakdown voltage, the jitter is 0.57 ns, and the minimum trigger voltage requirement is 55.8% of the peak. At present, the CSTS has been successfully applied to an IEBA for long-term operation.

  15. TRIGGER

    CERN Multimedia

    by Wesley Smith

    2010-01-01

    Level-1 Trigger Hardware and Software The overall status of the L1 trigger has been excellent and the running efficiency has been high during physics fills. The timing is good to about 1%. The fine-tuning of the time synchronization of muon triggers is ongoing and will be completed after more than 10 nb-1 of data have been recorded. The CSC trigger primitive and RPC trigger timing have been refined. A new configuration for the CSC Track Finder featured modified beam halo cuts and improved ghost cancellation logic. More direct control was provided for the DT opto-receivers. New RPC Cosmic Trigger (RBC/TTU) trigger algorithms were enabled for collision runs. There is further work planned during the next technical stop to investigate a few of the links from the ECAL to the Regional Calorimeter Trigger (RCT). New firmware and a new configuration to handle trigger rate spikes in the ECAL barrel are also being tested. A board newly developed by the tracker group (ReTRI) has been installed and activated to block re...

  16. New high-energy phenomena in aircraft triggered lightning

    NARCIS (Netherlands)

    van Deursen, A.P.J.; Kochkin, P.; de Boer, A.; Bardet, M.; Boissin, J.F.

    2016-01-01

    High-energy phenomena associated with lightning were proposed in the twenties, observed for the first time in the sixties, and further investigated more recently by e.g. rocket-triggered lightning. Similarly, x-rays have been detected in meter-long discharges in air at standard atmospheric

  17. Frameworks to monitor and predict resource usage in the ATLAS High Level Trigger

    CERN Document Server

    Martin, Tim; The ATLAS collaboration

    2016-01-01

    The ATLAS High Level Trigger farm consists of around 30,000 CPU cores which filter events at up to 100 kHz input rate. A costing framework is built into the high level trigger; this enables detailed monitoring of the system and allows data-driven predictions to be made using specialist datasets. This talk will present an overview of how ATLAS collects in-situ monitoring data on both CPU usage and dataflow over the data-acquisition network during trigger execution, and how these data are processed to yield both low-level monitoring of individual selection algorithms and high-level data on the overall performance of the farm. For development and prediction purposes, ATLAS uses a special "Enhanced Bias" event selection. This mechanism will be explained along with how it is used to profile the expected resource usage and output event rate of new physics selections before they are executed on the actual high level trigger farm.
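The rate-prediction idea behind an enhanced-bias sample can be sketched as follows: each recorded event carries a weight undoing the biased selection, so running a candidate trigger over the sample and summing the weights of passing events estimates the rate in unbiased collisions. The weights, selection, and field names below are hypothetical placeholders, not ATLAS's actual implementation:

```python
# Sketch of data-driven trigger-rate prediction on a weighted sample:
# the weighted count of events passing a candidate selection, divided by
# the live time the sample represents, estimates the selection's rate.
def predicted_rate_hz(events, selection, live_time_s):
    passed_weight = sum(ev["weight"] for ev in events if selection(ev))
    return passed_weight / live_time_s

# Toy sample: weights undo the (hypothetical) enhanced-bias prescales.
sample = [
    {"weight": 1000.0, "met": 60.0},
    {"weight":  250.0, "met": 20.0},
    {"weight":  500.0, "met": 90.0},
]
rate = predicted_rate_hz(sample, lambda ev: ev["met"] > 50.0, live_time_s=10.0)
print(rate)  # -> 150.0
```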

  18. High-Reliability Health Care: Getting There from Here

    Science.gov (United States)

    Chassin, Mark R; Loeb, Jerod M

    2013-01-01

    Context Despite serious and widespread efforts to improve the quality of health care, many patients still suffer preventable harm every day. Hospitals find improvement difficult to sustain, and they suffer “project fatigue” because so many problems need attention. No hospitals or health systems have achieved consistent excellence throughout their institutions. High-reliability science is the study of organizations in industries like commercial aviation and nuclear power that operate under hazardous conditions while maintaining safety levels that are far better than those of health care. Adapting and applying the lessons of this science to health care offer the promise of enabling hospitals to reach levels of quality and safety that are comparable to those of the best high-reliability organizations. Methods We combined the Joint Commission's knowledge of health care organizations with knowledge from the published literature and from experts in high-reliability industries and leading safety scholars outside health care. We developed a conceptual and practical framework for assessing hospitals’ readiness for and progress toward high reliability. By iterative testing with hospital leaders, we refined the framework and, for each of its fourteen components, defined stages of maturity through which we believe hospitals must pass to reach high reliability. Findings We discovered that the ways that high-reliability organizations generate and maintain high levels of safety cannot be directly applied to today's hospitals. We defined a series of incremental changes that hospitals should undertake to progress toward high reliability. These changes involve the leadership's commitment to achieving zero patient harm, a fully functional culture of safety throughout the organization, and the widespread deployment of highly effective process improvement tools. Conclusions Hospitals can make substantial progress toward high reliability by undertaking several specific

  19. An Overview of the ATLAS High Level Trigger Dataflow and Supervision

    CERN Document Server

    Wheeler, S; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, A; Boisvert, V; Bosman, M; Brandt, S; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Corso-Radu, A; Di Mattia, A; Díaz-Gómez, M; Dos Anjos, A; Drohan, J; Ellis, Nick; Elsing, M; Epp, B; Etienne, F; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kaczmarska, A; Karr, K M; Khomich, A; Konstantinidis, N P; Krasny, W; Li, W; Lowe, A; Luminari, L; Meessen, C; Mello, A G; Merino, G; Morettini, P; Moyse, E; Nairz, A; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Parodi, F; Pérez-Réale, V; Pinfold, J L; Pinto, P; Polesello, G; Qian, Z; Resconi, S; Rosati, S; Scannicchio, D A; Schiavi, C; Schörner-Sadenius, T; Segura, E; De Seixas, J M; Shears, T G; Sivoklokov, S Yu; Smizanska, M; Soluk, R A; Stanescu, C; Tapprogge, Stefan; Touchard, F; Vercesi, V; Watson, A; Wengler, T; Werner, P; Wickens, F J; Wiedenmann, W; Wielers, M; Zobernig, G; RT 2003 13th IEEE-NPSS Real Time Conference

    2004-01-01

    The ATLAS High Level Trigger (HLT) system provides software-based event selection after the initial LVL1 hardware trigger. It is composed of two stages, the LVL2 trigger and the Event Filter (EF). The LVL2 trigger performs event selection with optimized algorithms using selected data guided by Region of Interest pointers provided by the LVL1 trigger. Those events selected by LVL2 are built into complete events, which are passed to the EF for a further stage of event selection and classification using off-line algorithms. Events surviving the EF selection are passed to off-line storage. The two stages of the HLT are implemented on processor farms. The concept of distributing the selection process between LVL2 and EF is a key element of the architecture, which keeps it flexible to changes (luminosity, detector knowledge, background conditions, etc.). Although there are some differences in the requirements of these sub-systems, there are many commonalities. An overview of the dataflow (event selection) an...
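
    The two-stage selection described above can be sketched as a toy Python filter. The event structure, RoI labels, and thresholds below are invented for illustration and are not the ATLAS data model; the point is the pattern of RoI-guided early rejection at LVL2 followed by a full-event EF decision.

```python
# Hypothetical sketch of two-stage HLT selection: LVL2 examines only data
# inside Regions of Interest flagged by LVL1; only LVL2-accepted events
# are fully built and passed to the Event Filter (EF).

def lvl2_accept(event, rois):
    """Fast selection using only detector data inside the LVL1 RoIs."""
    return any(event["energy"].get(roi, 0.0) > 20.0 for roi in rois)

def ef_accept(full_event):
    """Slower, offline-style selection on the fully built event."""
    return sum(full_event["energy"].values()) > 50.0

def hlt_filter(events):
    accepted = []
    for ev in events:
        if not lvl2_accept(ev, ev["rois"]):
            continue                 # early rejection: event never fully built
        if ef_accept(ev):
            accepted.append(ev["id"])
    return accepted

events = [
    {"id": 1, "rois": ["A"], "energy": {"A": 30.0, "B": 25.0}},  # passes both
    {"id": 2, "rois": ["A"], "energy": {"A": 5.0, "B": 90.0}},   # rejected at LVL2
    {"id": 3, "rois": ["B"], "energy": {"B": 21.0}},             # passes LVL2, fails EF
]
print(hlt_filter(events))  # → [1]
```

    Note that event 2 is rejected even though it has large total energy: LVL2 never looks outside its RoIs, which is precisely what makes the early stage cheap.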

  20. Flexible trigger menu implementation on the Global Trigger for the CMS Level-1 trigger upgrade

    Science.gov (United States)

    MATSUSHITA, Takashi; CMS Collaboration

    2017-10-01

    The CMS experiment at the Large Hadron Collider (LHC) has continued to explore physics at the high-energy frontier in 2016. The integrated luminosity delivered by the LHC in 2016 was 41 fb⁻¹ with a peak luminosity of 1.5 × 10³⁴ cm⁻²s⁻¹ and peak mean pile-up of about 50, all exceeding the initial estimations for 2016. The CMS experiment has upgraded its hardware-based Level-1 trigger system to maintain its performance for new physics searches and precision measurements at high luminosities. The Global Trigger is the final step of the CMS Level-1 trigger and implements a trigger menu, a set of selection requirements applied to the final list of objects from calorimeter and muon triggers, for reducing the 40 MHz collision rate to 100 kHz. The Global Trigger has been upgraded with state-of-the-art FPGA processors on Advanced Mezzanine Cards with optical links running at 10 GHz in a MicroTCA crate. The powerful processing resources of the upgraded system enable implementation of more algorithms at a time than previously possible, allowing CMS to be more flexible in how it handles the available trigger bandwidth. Algorithms for a trigger menu, including topological requirements on multi-objects, can be realised in the Global Trigger using the newly developed trigger menu specification grammar. Analysis-like trigger algorithms can be represented in an intuitive manner and the algorithms are translated to corresponding VHDL code blocks to build a firmware. The grammar can be extended in future as the needs arise. The experience of implementing trigger menus on the upgraded Global Trigger system will be presented.
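
    The idea of a trigger menu as a set of named selection requirements over the final object lists can be illustrated with a small sketch. The algorithm names, thresholds, and event fields below are invented (the real system compiles menu expressions to VHDL firmware; this only models the decision logic):

```python
# Illustrative trigger menu: a mapping from algorithm name to a predicate
# over the event's final object lists, yielding one decision bit each.
menu = {
    "SingleMu_pt20":  lambda ev: any(pt > 20.0 for pt in ev["muon_pt"]),
    "DoubleJet_pt40": lambda ev: sum(pt > 40.0 for pt in ev["jet_pt"]) >= 2,
}

def global_trigger(event, menu):
    # One decision bit per algorithm; the event is kept if any bit fires.
    bits = {name: bool(alg(event)) for name, alg in menu.items()}
    return bits, any(bits.values())

event = {"muon_pt": [25.0], "jet_pt": [50.0, 30.0]}
bits, accept = global_trigger(event, menu)
print(bits, accept)  # SingleMu fires, DoubleJet does not; event accepted
```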

  1. The ATLAS high level trigger region of interest builder

    International Nuclear Information System (INIS)

    Blair, R.; Dawson, J.; Drake, G.; Haberichter, W.; Schlereth, J.; Zhang, J.; Ermoline, Y.; Pope, B.; Aboline, M.; High Energy Physics; Michigan State Univ.

    2008-01-01

    This article describes the design, testing and production of the ATLAS Region of Interest Builder (RoIB). This device acts as an interface between the Level 1 trigger and the high level trigger (HLT) farm for the ATLAS LHC detector. It distributes all of the Level 1 data for a subset of events to a small number (16 or fewer) of individual commodity processors. These processors in turn provide this information to the HLT. This allows the HLT to use the Level 1 information to narrow data requests to areas of the detector where Level 1 has identified interesting objects.
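
    The fan-out of Level 1 records to a small processor farm can be modelled in a few lines. The round-robin policy here is purely illustrative; the actual RoIB assignment scheme may differ:

```python
# Toy sketch of RoIB fan-out: Level 1 records assigned round-robin to a
# farm of at most 16 processors (assignment policy is an assumption).
def distribute(l1_records, n_procs=16):
    farm = [[] for _ in range(n_procs)]
    for i, rec in enumerate(l1_records):
        farm[i % n_procs].append(rec)
    return farm

farm = distribute(list(range(40)), n_procs=4)
print([len(q) for q in farm])  # → [10, 10, 10, 10]
```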

  2. High frame rate retrospectively triggered Cine MRI for assessment of murine diastolic function.

    Science.gov (United States)

    Coolen, Bram F; Abdurrachim, Desiree; Motaal, Abdallah G; Nicolay, Klaas; Prompers, Jeanine J; Strijkers, Gustav J

    2013-03-01

    To assess left ventricular (LV) diastolic function in mice with Cine MRI, a high frame rate (>60 frames per cardiac cycle) is required. For conventional electrocardiography-triggered Cine MRI, the frame rate is inversely proportional to the pulse repetition time (TR). However, TR cannot be lowered at will to increase the frame rate because of gradient hardware, spatial resolution, and signal-to-noise limitations. To overcome these limitations associated with electrocardiography-triggered Cine MRI, in this paper, we introduce a retrospectively triggered Cine MRI protocol capable of producing high-resolution, high frame rate Cine MRI of the mouse heart for addressing left ventricular diastolic function. Simulations were performed to investigate the influence of MRI sequence parameters and the k-space filling trajectory in relation to the desired number of frames per cardiac cycle. An optimized protocol was applied in vivo and compared with electrocardiography-triggered Cine, for which a high frame rate could only be achieved by several interleaved acquisitions. Retrospective high frame rate Cine MRI proved superior to the interleaved electrocardiography-triggered protocols. High spatial-resolution Cine movies with frame rates up to 80 frames per cardiac cycle were obtained in 25 min. Analysis of left ventricular filling rate curves allowed accurate determination of early and late filling rates and revealed subtle impairments in left ventricular diastolic function of diabetic mice in comparison with nondiabetic mice. Copyright © 2012 Wiley Periodicals, Inc.
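
    The core of retrospective triggering — acquire continuously, then sort readouts into cardiac frames afterwards using their timestamps relative to the cardiac cycle — can be sketched as follows. The timing numbers and the uniform R-R interval are illustrative assumptions (real retrospective gating must also estimate the cardiac phase from navigator or self-gating signals):

```python
def bin_readouts(timestamps_ms, rr_interval_ms, n_frames):
    # Map each continuously acquired readout to a cardiac phase bin:
    # phase = fraction of the cardiac (R-R) cycle elapsed at acquisition.
    frames = [[] for _ in range(n_frames)]
    for idx, t in enumerate(timestamps_ms):
        phase = (t % rr_interval_ms) / rr_interval_ms   # in [0, 1)
        frames[int(phase * n_frames)].append(idx)
    return frames

# 4 readouts, 100 ms cardiac cycle, 10 frames:
frames = bin_readouts([0.0, 10.0, 25.0, 110.0], 100.0, 10)
print(frames)  # readouts 1 and 3 both land in frame 1 (same cardiac phase)
```

    Because frames are filled from many heartbeats, the achievable frame rate is decoupled from TR — the limitation of prospective ECG triggering described above.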

  3. Development of a highly reliable CRT processor

    International Nuclear Information System (INIS)

    Shimizu, Tomoya; Saiki, Akira; Hirai, Kenji; Jota, Masayoshi; Fujii, Mikiya

    1996-01-01

    Although CRT processors have been employed by the main control board to reduce the operator's workload during monitoring, the control systems are still operated by hardware switches. For further advancement, direct controller operation through a display device is expected. A CRT processor providing direct controller operation must be as reliable as the hardware switches are. The authors are developing a new type of highly reliable CRT processor that enables direct controller operations. In this paper, we discuss the design principles behind a highly reliable CRT processor. The principles are defined by studies of software reliability and of the functional reliability of the monitoring and operation systems. The functional configuration of an advanced CRT processor is also addressed. (author)

  4. Resource utilization by the ATLAS High Level Trigger during 2010 and 2011 LHC running

    CERN Document Server

    Ospanov, R

    2012-01-01

    In 2010 and 2011, the ATLAS experiment successfully recorded data from LHC collisions with high efficiency and excellent data quality. ATLAS employs a three-level trigger system to select events of interest for physics analyses and detector commissioning. The trigger system consists of a custom-designed hardware trigger at level-1 and software algorithms at the two higher levels. The trigger selection is defined by a trigger menu which consists of more than 300 individual trigger signatures, such as electrons, muons, particle jets, etc. An execution of a trigger signature incurs computing and data storage costs. The composition of the deployed trigger menu depends on the instantaneous LHC luminosity, the experiment's goals for the recorded data, and the limits imposed by the available computing power, network bandwidth and storage space. This paper describes a trigger monitoring framework for assigning computing costs to individual trigger signatures and trigger menus as a whole. These costs can be extrapolat...
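
    Per-signature cost accounting of this kind can be sketched with a small bookkeeping class. All names and numbers here are invented for illustration, not the ATLAS framework's API:

```python
# Hypothetical per-signature cost bookkeeping: accumulate CPU time and
# call counts sampled in situ, then report mean cost per processed event.
from collections import defaultdict

class TriggerCostMonitor:
    def __init__(self):
        self.cpu_ms = defaultdict(float)   # total CPU time per signature
        self.calls = defaultdict(int)      # number of executions
        self.events = 0

    def record(self, signature, cpu_ms):
        self.cpu_ms[signature] += cpu_ms
        self.calls[signature] += 1

    def end_event(self):
        self.events += 1

    def mean_cost_per_event(self, signature):
        # average CPU cost this signature adds to each processed event
        return self.cpu_ms[signature] / self.events

mon = TriggerCostMonitor()
for _ in range(4):
    mon.record("e25_tight", 2.0)   # invented signature name
    mon.end_event()
print(mon.mean_cost_per_event("e25_tight"))  # → 2.0
```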

  5. High-reliability health care: getting there from here.

    Science.gov (United States)

    Chassin, Mark R; Loeb, Jerod M

    2013-09-01

    Despite serious and widespread efforts to improve the quality of health care, many patients still suffer preventable harm every day. Hospitals find improvement difficult to sustain, and they suffer "project fatigue" because so many problems need attention. No hospitals or health systems have achieved consistent excellence throughout their institutions. High-reliability science is the study of organizations in industries like commercial aviation and nuclear power that operate under hazardous conditions while maintaining safety levels that are far better than those of health care. Adapting and applying the lessons of this science to health care offer the promise of enabling hospitals to reach levels of quality and safety that are comparable to those of the best high-reliability organizations. We combined the Joint Commission's knowledge of health care organizations with knowledge from the published literature and from experts in high-reliability industries and leading safety scholars outside health care. We developed a conceptual and practical framework for assessing hospitals' readiness for and progress toward high reliability. By iterative testing with hospital leaders, we refined the framework and, for each of its fourteen components, defined stages of maturity through which we believe hospitals must pass to reach high reliability. We discovered that the ways that high-reliability organizations generate and maintain high levels of safety cannot be directly applied to today's hospitals. We defined a series of incremental changes that hospitals should undertake to progress toward high reliability. These changes involve the leadership's commitment to achieving zero patient harm, a fully functional culture of safety throughout the organization, and the widespread deployment of highly effective process improvement tools. Hospitals can make substantial progress toward high reliability by undertaking several specific organizational change initiatives. Further research

  6. Concepts and design of the CMS high granularity calorimeter Level-1 trigger

    CERN Document Server

    Sauvan, Jean-Baptiste

    2016-01-01

    The CMS experiment has chosen a novel high granularity calorimeter for the forward region as part of its planned upgrade for the high luminosity LHC. The calorimeter will have a fine segmentation in both the transverse and longitudinal directions and will be the first such calorimeter specifically optimised for particle flow reconstruction to operate at a colliding beam experiment. The high granularity results in around six million readout channels in total and so presents a significant challenge in terms of data manipulation and processing for the trigger; the trigger data volumes will be an order of magnitude above those currently handled at CMS. In addition, the high luminosity will result in an average of 140 to 200 interactions per bunch crossing, giving a huge background rate in the forward region that needs to be efficiently reduced by the trigger algorithms. Efficient data reduction and reconstruction algorithms making use of the fine segmentation of the detector have been simulated and evaluated. The...

  7. Using MaxCompiler for High Level Synthesis of Trigger Algorithms

    CERN Document Server

    Summers, Sioni Paris; Sanders, P.

    2017-01-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.
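
    The per-layer iterative structure of a Kalman-filter track fit — one update per detector layer, which is what maps naturally onto a pipelined FPGA design — can be shown with a plain-Python straight-line track model. This is a generic textbook sketch, not the CMS or MaxCompiler implementation; the prior covariance and measurement noise values are arbitrary:

```python
def kalman_track_fit(zs, xs, sigma=0.1):
    # State = [x0, slope]; measurement model x = x0 + slope * z with
    # H = [1, z]. One Kalman update is performed per detector layer.
    x0, slope = 0.0, 0.0
    P = [[1e6, 0.0], [0.0, 1e6]]       # large (vague) initial covariance
    R = sigma * sigma                  # measurement noise variance
    for z, x in zip(zs, xs):
        # innovation variance S = H P H^T + R
        PH0 = P[0][0] + P[0][1] * z
        PH1 = P[1][0] + P[1][1] * z
        S = PH0 + z * PH1 + R
        K0, K1 = PH0 / S, PH1 / S      # Kalman gain
        resid = x - (x0 + slope * z)
        x0 += K0 * resid
        slope += K1 * resid
        # covariance update P = (I - K H) P
        HP0 = P[0][0] + z * P[1][0]
        HP1 = P[0][1] + z * P[1][1]
        P = [[P[0][0] - K0 * HP0, P[0][1] - K0 * HP1],
             [P[1][0] - K1 * HP0, P[1][1] - K1 * HP1]]
    return x0, slope

# Hits lying exactly on x = 1 + 2z:
x0, slope = kalman_track_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(round(x0, 3), round(slope, 3))  # fit converges to ≈ (1.0, 2.0)
```

    The fixed amount of arithmetic per layer is what makes the latency per iteration (187.5 ns in the design above) predictable.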

  8. Studies of ATM for ATLAS high-level triggers

    CERN Document Server

    Bystrický, J; Huet, M; Le Dû, P; Mandjavidze, I D

    2001-01-01

    This paper presents some of the conclusions of our studies on asynchronous transfer mode (ATM) and fast Ethernet in the ATLAS level-2 trigger pilot project. We describe the general concept and principles of our data-collection and event-building scheme that could be transposed to various experiments in high-energy and nuclear physics. To validate the approach in view of ATLAS high-level triggers, we assembled a testbed composed of up to 48 computers linked by a 7.5-Gbit/s ATM switch. This modular switch is used as a single entity or is split into several smaller interconnected switches. This allows study of how to construct a large network from smaller units. Alternatively, the ATM network can be replaced by fast Ethernet. We detail the operation of the system and present a series of performance measurements made with an event-building traffic pattern. We extrapolate these results to show how today's commercial networking components could be used to build a 1000-port network adequate for ATLAS needs. Lastly, we li...

  9. Analysis and realization of a high resolution trigger for DM2 experiment

    International Nuclear Information System (INIS)

    Bertrand, J.L.

    1984-01-01

    The construction of a high resolution trigger has been carried out from theoretical design through realization. The term trigger denotes an almost real-time system for track filtering in particle detection. Curved tracks are detected (in a magnetic field) and the detector has a revolution-symmetry geometry. The concept of a "hybrid" trigger, with features in between those of the so-called "CELLO R0" and "MARK II" types, is introduced. It provides useful versatility for optimizing the different features. Besides a specific structure, some hardware and software tools have been designed for development and tests. The "TRIGGER LENT" is presently in operation in the DM2 experiment [fr

  10. A Highly Selective First-Level Muon Trigger With MDT Chamber Data for ATLAS at HL-LHC

    CERN Document Server

    Nowak, Sebastian; The ATLAS collaboration

    2015-01-01

    Highly selective triggers are essential for the physics programme of the ATLAS experiment at the HL-LHC, where the instantaneous luminosity will be about an order of magnitude larger than the LHC design luminosity. The Level-1 muon trigger rate is dominated by low momentum muons below the nominal trigger threshold due to the limited momentum resolution of the Resistive Plate and Thin Gap trigger chambers. The resulting high trigger rates at the HL-LHC can be sufficiently reduced by using the data of the precision Muon Drift Tube (MDT) chambers for the trigger decision. This requires the implementation of a fast MDT read-out chain and of a fast MDT track reconstruction algorithm with a latency of at most 6 μs. A hardware demonstrator of the fast read-out chain has been successfully tested at the high HL-LHC background rates at the CERN Gamma Irradiation Facility. The fast track reconstruction algorithm has been implemented on a fast trigger processor.

  11. Multi-Threaded Algorithms for General Purpose Graphics Processor Units in the ATLAS High Level Trigger

    CERN Document Server

    Conde Muiño, Patricia; The ATLAS collaboration

    2016-01-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located at the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with level 1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz level 1 acceptance rate to 1 kHz for recording, requiring an average per-event processing time of ~250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant ...

  12. TRIGGER

    CERN Multimedia

    R. Carlin with contributions from D. Acosta

    2012-01-01

    Level-1 Trigger Data-taking continues at cruising speed, with high availability of all components of the Level-1 trigger. We have operated the trigger up to a luminosity of 7.6E33, where we approached 100 kHz using the 7E33 prescale column.  Recently, the pause without triggers in case of an automatic "RESYNC" signal (the "settle" and "recover" time) was reduced in order to minimise the overall dead-time. This may become very important when the LHC comes back with higher energy and luminosity after LS1. We are also preparing for data-taking in the proton-lead run in early 2013. The CASTOR detector will make its comeback into CMS and triggering capabilities are being prepared for this. Steps to be taken include improved cooperation with the TOTEM trigger system and using the LHC clock during the injection and ramp phases of LHC. Studies are being finalised that will have a bearing on the Trigger Technical Design Report (TDR), which is to be rea...

  13. Flexible trigger menu implementation on the Global Trigger for the CMS Level-1 trigger upgrade

    CERN Document Server

    Matsushita, Takashi

    2017-01-01

    The CMS experiment at the Large Hadron Collider (LHC) has continued to explore physics at the high-energy frontier in 2016. The integrated luminosity delivered by the LHC in 2016 was 41 fb⁻¹ with a peak luminosity of 1.5 × 10³⁴ cm⁻²s⁻¹ and peak mean pile-up of about 50, all exceeding the initial estimations for 2016. The CMS experiment has upgraded its hardware-based Level-1 trigger system to maintain its performance for new physics searches and precision measurements at high luminosities. The Global Trigger is the final step of the CMS Level-1 trigger and implements a trigger menu, a set of selection requirements applied to the final list of objects from calorimeter and muon triggers, for reducing the 40 MHz collision rate to 100 kHz. The Global Trigger has been upgraded with state-of-the-art FPGA processors on Advanced Mezzanine Cards with optical links running at 10 GHz in a MicroTCA crate. The powerful processing resources of the upgraded system enable implemen...

  14. Contribution to high voltage matrix switches reliability

    International Nuclear Information System (INIS)

    Lausenaz, Yvan

    2000-01-01

    Nowadays, the requirements on power electronic equipment are demanding in terms of performance, quality and reliability. At the same time, costs have to be reduced in order to satisfy market rules. To provide low cost, reliability and performance, many standard mass-produced components are developed. The construction of specific products can then be approached from two directions: on one hand, one can produce specific components, with delays, cost overruns and possible quality and reliability problems; on the other hand, one can use standard components in adapted topologies. The CEA of Pierrelatte has adopted the latter approach to power electronics design for the development of its high voltage pulsed power converters. The technique consists in using standard components and associating them in series and in parallel. The matrix constitutes a high voltage macro-switch in which the electrical parameters are distributed among the synchronized components. This study deals with the reliability of these structures. It brings out the high reliability of MOSFET matrix associations. Thanks to several homemade test facilities, we obtained abundant data on the components we use. Understanding the defect propagation mechanisms in matrix structures has allowed us to put forward the necessity of a robust drive system, adapted clamping voltage protection, and careful geometrical construction. All these reliability considerations in matrix associations have notably allowed the construction of a new matrix structure incorporating all the solutions ensuring reliability. Reliable and robust, this product has already reached the industrial stage. (author) [fr

  15. Validation and Test-Retest Reliability of New Thermographic Technique Called Thermovision Technique of Dry Needling for Gluteus Minimus Trigger Points in Sciatica Subjects and TrPs-Negative Healthy Volunteers

    Science.gov (United States)

    Rychlik, Michał; Samborski, Włodzimierz

    2015-01-01

    The aim of this study was to assess the validity and test-retest reliability of Thermovision Technique of Dry Needling (TTDN) for the gluteus minimus muscle. TTDN is a new thermography approach used to support trigger points (TrPs) diagnostic criteria by presence of short-term vasomotor reactions occurring in the area where TrPs refer pain. Method. Thirty chronic sciatica patients (n=15 TrP-positive and n=15 TrPs-negative) and 15 healthy volunteers were evaluated by TTDN three times during two consecutive days based on TrPs of the gluteus minimus muscle confirmed additionally by referred pain presence. TTDN employs average temperature (Tavr), maximum temperature (Tmax), low/high isothermal-area, and autonomic referred pain phenomenon (AURP) that reflects vasodilatation/vasoconstriction. Validity and test-retest reliability were assessed concurrently. Results. Two components of TTDN validity and reliability, Tavr and AURP, had almost perfect agreement according to κ (e.g., thigh: 0.880 and 0.938; calf: 0.902 and 0.956, resp.). The sensitivity for Tavr, Tmax, AURP, and high isothermal-area was 100% for everyone, but specificity of 100% was for Tavr and AURP only. Conclusion. TTDN is a valid and reliable method for Tavr and AURP measurement to support TrPs diagnostic criteria for the gluteus minimus muscle when digitally evoked referred pain pattern is present. PMID:26137486
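
    Cohen's κ, the agreement statistic quoted above, corrects the observed agreement between two rating sessions for the agreement expected by chance. The standard formula can be computed directly (the example ratings below are invented, not the study's data):

```python
def cohens_kappa(rater_a, rater_b):
    # κ = (p_o - p_e) / (1 - p_e): observed agreement p_o corrected for
    # chance agreement p_e derived from each rater's marginal frequencies.
    n = len(rater_a)
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# two sessions rating 8 hypothetical subjects TrP-positive (1) / negative (0):
kappa = cohens_kappa([1, 1, 0, 1, 0, 0, 1, 1], [1, 1, 0, 1, 0, 1, 1, 1])
print(round(kappa, 2))  # substantial agreement, κ ≈ 0.71
```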

  16. FPGA based compute nodes for high level triggering in PANDA

    International Nuclear Information System (INIS)

    Kuehn, W; Gilardi, C; Kirschner, D; Lang, J; Lange, S; Liu, M; Perez, T; Yang, S; Schmitt, L; Jin, D; Li, L; Liu, Z; Lu, Y; Wang, Q; Wei, S; Xu, H; Zhao, D; Korcyl, K; Otwinowski, J T; Salabura, P

    2008-01-01

    PANDA is a new universal detector for antiproton physics at the HESR facility at FAIR/GSI. The PANDA data acquisition system has to handle interaction rates of the order of 10⁷/s and data rates of several 100 Gb/s. FPGA based compute nodes with multi-Gb/s bandwidth capability using the ATCA architecture are designed to handle tasks such as event building, feature extraction and high level trigger processing. Data connectivity is provided via optical links as well as multiple Gb Ethernet ports. The boards will support trigger algorithms such as pattern recognition for RICH detectors, EM shower analysis, fast tracking algorithms and global event characterization. Besides VHDL, high level C-like hardware description languages will be considered to implement the firmware

  17. The Software Architecture of the LHCb High Level Trigger

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The LHCb experiment is a spectrometer dedicated to the study of heavy flavor at the LHC. The rate of proton-proton collisions at the LHC is 15 MHz, but disk space limitations mean that only 3 kHz can be written to tape for offline processing. For this reason the LHCb data acquisition system -- trigger -- plays a key role in selecting signal events and rejecting background. In contrast to previous experiments at hadron colliders, such as CDF or D0, the bulk of the LHCb trigger is implemented in software and deployed on a farm of 20k parallel processing nodes. This system, called the High Level Trigger (HLT), is responsible for reducing the rate from the maximum at which the detector can be read out, 1.1 MHz, to the 3 kHz which can be processed offline, and has 20 ms in which to process and accept/reject each event. In order to minimize systematic uncertainties, the HLT was designed from the outset to reuse the offline reconstruction and selection code, and is based around multiple independent and redunda...

  18. A readout buffer prototype for ATLAS high-level triggers

    CERN Document Server

    Calvet, D; Huet, M; Le Dû, P; Mandjavidze, I D; Mur, M

    2001-01-01

    Readout buffers are critical components in the dataflow chain of the ATLAS trigger/data-acquisition system. At up to 75 kHz, after each Level-1 trigger accept signal, these devices receive and store digitized data from groups of front-end electronic channels. Several readout buffers are grouped to form a readout buffer complex that acts as a data server for the high-level trigger selection algorithms and for the final data-collection system. This paper describes a functional prototype of a readout buffer based on a custom-made PCI mezzanine card that is designed to accept input data at up to 160 MB/s, to store up to 8 MB of data, and to distribute data chunks at the desired request rate. We describe the hardware of the card that is based on an Intel i960 processor and complex programmable logic devices. We present the integration of several of these cards in a readout buffer complex. We measure various performance figures and discuss to which extent these can fulfil ATLAS needs. (5 refs).
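
    The buffer's role — store fragments keyed by Level-1 event, serve them on request to the selection algorithms, and release them once the event is resolved — can be modelled as a toy sketch (this is not the i960 firmware; identifiers and the eviction interface are invented):

```python
class ReadoutBuffer:
    # Toy model of a readout buffer: fragments keyed by Level-1 event ID,
    # served on request, explicitly freed when the event is accepted/rejected.
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.fragments = {}            # Level-1 event ID -> raw fragment

    def insert(self, l1_id, fragment):
        if self.used + len(fragment) > self.capacity:
            raise MemoryError("readout buffer full")
        self.fragments[l1_id] = fragment
        self.used += len(fragment)

    def request(self, l1_id):
        return self.fragments[l1_id]   # serve a chunk to the HLT

    def clear(self, l1_id):
        self.used -= len(self.fragments.pop(l1_id))

buf = ReadoutBuffer(capacity_bytes=8 * 1024 * 1024)  # 8 MB, as in the prototype
buf.insert(42, b"\x00" * 1024)
print(buf.used)   # → 1024
buf.clear(42)
print(buf.used)   # → 0
```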

  19. Multi-threading in the ATLAS High-Level Trigger

    CERN Document Server

    Barton, Adam Edward; The ATLAS collaboration

    2017-01-01

    Over the next decade of LHC data-taking the instantaneous luminosity will reach up to 7.5 times the design value, with over 200 interactions per bunch-crossing, and will pose unprecedented challenges for the ATLAS trigger system. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum while retaining the key aspects of trigger functionality, including regional reconstruction and early event rejection. We report on the first experience of migrating trigger algorithms to this new framework and present the next steps towards a full implementation of the ATLAS trigger within AthenaMT.

  20. TRIGGER

    CERN Multimedia

    W. Smith, from contributions of D. Acosta

    2012-01-01

    The L1 Trigger group deployed several major improvements this year. Compared to 2011, the single-muon trigger rate has been reduced by a factor of 2 and the η coverage has been restored to 2.4, with high efficiency. During the current technical stop, a higher jet seed threshold will be applied in the Global Calorimeter Trigger in order to significantly reduce the strong pile-up dependence of the HT and multi-jet triggers. The currently deployed L1 menu, with the “6E33” prescales, has a total rate of less than 100 kHz and operates with detector readout dead time of less than 3% for luminosities up to 6.5 × 10³³ cm⁻²s⁻¹. Further prescale sets have been created for 7 and 8 × 10³³ cm⁻²s⁻¹ luminosities. The L1 DPG is evaluating the performance of the Trigger for upcoming conferences and publication. Progress on the Trigger upgrade was reviewed during the May Upgrade Week. We are investigating scenarios for stagin...

  1. Development of high velocity gas gun with a new trigger system-numerical analysis

    Science.gov (United States)

    Husin, Z.; Homma, H.

    2018-02-01

    In the development of high performance armor vests, we need to carry out well controlled experiments using bullet speeds of more than 900 m/sec. After reviewing trigger systems used for high velocity gas guns, this research develops a new trigger system that can realize precise and reproducible impact tests at impact velocities of more than 900 m/sec. The new trigger system developed here is called a projectile trap. The projectile trap is placed between the reservoir and the barrel and serves two functions: sealing and triggering. Polyamide-imide is selected for the trap material, and the dimensions of the projectile trap are determined by numerical analysis for several levels of launching pressure to vary the projectile velocity. The numerical analysis results show that the projectile trap designed here operates reasonably and that the stresses caused during the launching operation are less than the material strength. This means the projectile trap can be reused for the next shot.

  2. Self-triggered image intensifier tube for high-resolution UHECR imaging detector

    CERN Document Server

    Sasaki, M; Jobashi, M

    2003-01-01

    The authors have developed a self-triggered image intensifier tube with high-resolution imaging capability. An image detected by a first image intensifier tube as an electrostatic lens with a photocathode diameter of 100 mm is separated by a half-mirror into a path for CCD readout (768x494 pixels) and a fast control to recognize and trigger the image. The proposed system provides both a high signal-to-noise ratio to improve single photoelectron detection and excellent spatial resolution between 207 and 240 μm rendering this device a potentially essential tool for high-energy physics and astrophysics experiments, as well as high-speed photography. When combined with a 1-arcmin resolution optical system with 50 deg. field-of-view proposed by the present authors, the observation of ultra high-energy cosmic rays and high-energy neutrinos using this device is expected, leading to revolutionary progress in particle astrophysics as a complementary technique to traditional astronomical observations at multiple wave...

  3. Frameworks to monitor and predict rates and resource usage in the ATLAS High Level Trigger

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219969; The ATLAS collaboration

    2017-01-01

    The ATLAS High Level Trigger Farm consists of around 40,000 CPU cores which filter events at an input rate of up to 100 kHz. A costing framework is built into the high level trigger, enabling detailed monitoring of the system and allowing data-driven predictions to be made using specialist datasets. An overview is presented of how ATLAS collects in-situ monitoring data on CPU usage during trigger execution, and how these data are processed to yield both low level monitoring of individual selection algorithms and high level data on the overall performance of the farm. For development and prediction purposes, ATLAS uses a special ‘Enhanced Bias’ event selection. This mechanism is explained along with how it is used to profile expected resource usage and output event rate of new physics selections, before they are executed on the actual high level trigger farm.
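
    The rate-prediction idea can be sketched as follows: each enhanced-bias event carries a weight that undoes the bias of the selection that recorded it, so the predicted output rate of a new chain is the weighted accepted fraction times the input rate. The formula, field names, and numbers are illustrative assumptions, not the ATLAS tool:

```python
def predict_rate(eb_events, selection, l1_input_rate_hz):
    # Weighted accepted fraction of the enhanced-bias sample, scaled to
    # the Level-1 input rate. (Illustrative sketch of the method.)
    w_total = sum(ev["weight"] for ev in eb_events)
    w_pass = sum(ev["weight"] for ev in eb_events if selection(ev))
    return l1_input_rate_hz * w_pass / w_total

sample = [{"weight": 1.0, "pt": 45.0},   # rare, unbiased-like event
          {"weight": 3.0, "pt": 12.0}]   # common event, up-weighted
rate = predict_rate(sample, lambda ev: ev["pt"] > 30.0, 100_000)
print(rate)  # → 25000.0 Hz predicted output rate
```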

  4. High voltage switch triggered by a laser-photocathode subsystem

    Science.gov (United States)

    Chen, Ping; Lundquist, Martin L.; Yu, David U. L.

    2013-01-08

    A spark gap switch for controlling the output of a high voltage pulse from a high voltage source, for example, a capacitor bank or a pulse forming network, to an external load such as a high gradient electron gun, laser, pulsed power accelerator or wide band radar. The combination of a UV laser and a high vacuum quartz cell, in which a photocathode and an anode are installed, is utilized as triggering devices to switch the spark gap from a non-conducting state to a conducting state with low delay and low jitter.

  5. Simulation of the High Performance Time to Digital Converter for the ATLAS Muon Spectrometer trigger upgrade

    International Nuclear Information System (INIS)

    Meng, X.T.; Levin, D.S.; Chapman, J.W.; Zhou, B.

    2016-01-01

    The ATLAS Muon Spectrometer endcap thin-Resistive Plate Chamber trigger project complements the New Small Wheel endcap Phase-1 upgrade for higher luminosity LHC operation. These new trigger chambers, located in a high rate region of ATLAS, will improve overall trigger acceptance and reduce the fake muon trigger incidence. These chambers must generate a low level muon trigger to be delivered to a remote high level processor within a stringent latency requirement of 43 bunch crossings (1075 ns). To help meet this requirement the High Performance Time to Digital Converter (HPTDC), a multi-channel ASIC designed by the CERN Microelectronics group, has been proposed for the digitization of the fast front end detector signals. This paper investigates the HPTDC performance in the context of the overall muon trigger latency, employing detailed behavioral Verilog simulations in which the latency in triggerless mode is measured for a range of configurations and under realistic hit rate conditions. The simulation results show that various HPTDC operational configurations, including leading edge and pair measurement modes, can provide high efficiency (>98%) to capture and digitize hits within a time interval satisfying the Phase-1 latency tolerance.
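
    The latency requirement quoted above follows directly from the 25 ns LHC bunch spacing: 43 bunch crossings correspond to 1075 ns. A back-of-envelope budget check can be sketched as follows; the individual component latencies below are illustrative placeholders, not values from the paper.

```python
# Latency budget check: 43 bunch crossings of 25 ns each give 1075 ns, and
# every element of the trigger chain must fit inside that total.
BX_NS = 25
budget_ns = 43 * BX_NS           # 1075 ns total
components = {                   # illustrative placeholder latencies only
    "front_end": 200,
    "hptdc_digitization": 400,
    "serialization_and_fibre": 300,
    "trigger_logic": 150,
}
used = sum(components.values())
print(budget_ns, used, budget_ns - used)  # 1075 1050 25
```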

  6. High-voltage Pulse-triggered SR Latch Level-Shifter Design Considerations

    DEFF Research Database (Denmark)

    Larsen, Dennis Øland; Llimos Muntal, Pere; Jørgensen, Ivan Harald Holger

    2014-01-01

    This paper compares pulse-triggered level shifters with a traditional level-triggered topology for high-voltage applications with supply voltages in the 50 V to 100 V range. It is found that the pulse-triggered SR (Set/Reset) latch level-shifter has a superior power consumption of 1800 µW/MHz when translating a signal from 0-3.3 V to 87.5-100 V. The operation of this level-shifter is verified with measurements on a fabricated chip. The shortcomings of the implemented level-shifter in terms of power dissipation, transition delay, area, and startup behavior are then considered, and an improved circuit is suggested which has been designed in three variants able to translate the low-voltage 0-3.3 V signal to 45-50 V, 85-90 V, and 95-100 V respectively. The improved 95-100 V level shifter achieves a considerably lower power consumption of 438 µW/MHz along with a significantly...

  7. TRIGGER

    CERN Multimedia

    W. Smith from contributions of C. Leonidopoulos

    2010-01-01

    Level-1 Trigger Hardware and Software Since nearly all of the Level-1 (L1) Trigger hardware at Point 5 has been commissioned, activities during the past months focused on the fine-tuning of synchronization, particularly for the ECAL and the CSC systems, on firmware upgrades and on improving trigger operation and monitoring. Periodic resynchronizations or hard resets and a shortened luminosity section interval of 23 seconds were implemented. For the DT sector collectors, an automatic power-off was installed in case of high temperatures, and the monitoring capabilities of the opto-receivers and the mini-crates were enhanced. The DTTF and the CSCTF now have improved memory lookup tables. The HCAL trigger primitive logic implemented a new algorithm providing better stability of the energy measurement in the presence of any phase misalignment. For the Global Calorimeter Trigger, additional Source Cards have been manufactured and tested. Testing of the new tau, missing ET and missing HT algorithms is underw...

  8. A Highly Selective First-Level Muon Trigger With MDT Chamber Data for ATLAS at HL-LHC

    CERN Document Server

    INSPIRE-00390105

    2016-07-11

    Highly selective triggers are essential for the physics programme of the ATLAS experiment at HL-LHC where the instantaneous luminosity will be about an order of magnitude larger than the LHC instantaneous luminosity in Run 1. The first level muon trigger rate is dominated by low momentum muons below the nominal trigger threshold due to the moderate momentum resolution of the Resistive Plate and Thin Gap trigger chambers. The resulting high trigger rates at HL-LHC can be sufficiently reduced by using the data of the precision Muon Drift Tube chambers for the trigger decision. This requires the implementation of a fast MDT read-out chain and of a fast MDT track reconstruction algorithm with a latency of at most 6 microseconds. A hardware demonstrator of the fast read-out chain has been successfully tested at the HL-LHC operating conditions at the CERN Gamma Irradiation Facility. The fast track reconstruction algorithm has been implemented on a fast trigger processor.

  9. Reliability and Failure in NASA Missions: Blunders, Normal Accidents, High Reliability, Bad Luck

    Science.gov (United States)

    Jones, Harry W.

    2015-01-01

    NASA emphasizes crew safety and system reliability but several unfortunate failures have occurred. The Apollo 1 fire was mistakenly unanticipated. After that tragedy, the Apollo program gave much more attention to safety. The Challenger accident revealed that NASA had neglected safety and that management underestimated the high risk of the shuttle. Probabilistic Risk Assessment was adopted to provide more accurate failure probabilities for the shuttle and other missions. NASA's "faster, better, cheaper" initiative and government procurement reform led to deliberately dismantling traditional reliability engineering. The Columbia tragedy and Mars mission failures followed. Failures can be attributed to blunders, normal accidents, or bad luck. Achieving high reliability is difficult but possible.

  10. Using MaxCompiler for the high level synthesis of trigger algorithms

    International Nuclear Information System (INIS)

    Summers, S.; Rose, A.; Sanders, P.

    2017-01-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.
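
    The Kalman filter track fit mentioned above proceeds iteratively, folding one measurement into the state estimate per iteration (hence the quoted latency per iteration). The following is a generic scalar sketch of that update step, not the CMS firmware algorithm; the process and measurement noise values are illustrative assumptions.

```python
# Generic scalar Kalman filter update (illustrative sketch): each iteration
# folds one new measurement z into the state estimate x with variance P.

def kalman_step(x, P, z, Q=0.01, R=1.0):
    # Predict: state unchanged, uncertainty grows by process noise Q.
    P = P + Q
    # Update: gain K weights the measurement against the prediction.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

x, P = 0.0, 100.0            # vague prior
for z in [1.2, 0.8, 1.1, 0.9]:
    x, P = kalman_step(x, P, z)
print(x, P)  # x converges toward the ~1.0 measurement mean, P shrinks
```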

  11. Using MaxCompiler for the high level synthesis of trigger algorithms

    Science.gov (United States)

    Summers, S.; Rose, A.; Sanders, P.

    2017-02-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  12. Performance of ATLAS RPC Level-1 muon trigger during the 2015 data taking

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00001854; The ATLAS collaboration

    2016-01-01

    RPCs are used in the ATLAS experiment at the LHC for the muon trigger in the barrel region, which corresponds to |eta|<1.05. The status of the barrel trigger system during the 2015 data taking is presented, including measurements of the RPC detector efficiencies and of the trigger performance. The RPC system has been active in more than 99.9% of the ATLAS data taking, showing very good reliability. The RPC detector efficiencies were close to the Run 1 and design values. The trigger efficiency for the high-pT thresholds used in single-muon triggers has been approximately 4% lower than in Run 1, mostly because of chambers disconnected from HV due to gas leaks. Two minor upgrades have been performed in preparation for Run 2 by adding the so-called feet and elevator chambers to increase the system acceptance. The feet chambers were commissioned during 2015 and have been included in the trigger since the last 2015 runs. Part of the elevator chambers are still in the commissioning phase and will probably need a replacement ...

  13. The ALICE High Level Trigger: status and plans

    CERN Document Server

    Krzewicki, Mikolaj; Gorbunov, Sergey; Breitner, Timo; Lehrbach, Johannes; Lindenstruth, Volker; Berzano, Dario

    2015-01-01

    The ALICE High Level Trigger (HLT) is an online reconstruction, triggering and data compression system used in the ALICE experiment at CERN. Unique among the LHC experiments, it extensively uses modern coprocessor technologies like general purpose graphic processing units (GPGPU) and field programmable gate arrays (FPGA) in the data flow. Realtime data compression is performed using a cluster finder algorithm implemented on FPGA boards. These data, instead of raw clusters, are used in the subsequent processing and storage, resulting in a compression factor of around 4. Track finding is performed using a cellular automaton and a Kalman filter algorithm on GPGPU hardware, where both CUDA and OpenCL technologies can be used interchangeably. The ALICE upgrade requires further development of online concepts to include detector calibration and stronger data compression. The current HLT farm will be used as a test bed for online calibration and both synchronous and asynchronous processing frameworks already before t...

  14. Commissioning of the ATLAS High Level Trigger with single beam and cosmic rays

    International Nuclear Information System (INIS)

    Di Mattia, A

    2010-01-01

    ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). The trigger system is responsible for making the online selection of interesting collision events. At the LHC design luminosity of 10^34 cm^-2 s^-1 it will need to achieve a rejection factor of the order of 10^-7 against random proton-proton interactions, while selecting with high efficiency events that are needed for physics analyses. After a first processing level using custom electronics based on FPGAs and ASICs, the trigger selection is made by software running on two processor farms, containing a total of around two thousand multi-core machines. This system is known as the High Level Trigger (HLT). To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection of background events. The recent LHC startup and short single-beam run provided a 'stress test' of the system and some initial calibration data. Following this period, ATLAS continued to collect cosmic-ray events for detector alignment and calibration purposes. After giving an overview of the trigger design and its innovative features, this paper focuses on the experience gained from operating the ATLAS trigger with single LHC beams and cosmic-rays.
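
    The seeded, step-wise reconstruction described above can be sketched conceptually: each step runs increasingly expensive reconstruction, and an event is rejected at the earliest failing step, so later (costly) steps never run for background. The step names, costs, and selection criteria below are illustrative assumptions, not the actual ATLAS chains.

```python
# Conceptual sketch of step-wise early rejection in a software trigger.

def run_chain(event, steps):
    """steps: list of (name, cost, predicate); returns (accepted, cost_paid)."""
    cost_paid = 0
    for name, cost, passes in steps:
        cost_paid += cost
        if not passes(event):
            return False, cost_paid   # early rejection: later steps never run
    return True, cost_paid

steps = [
    ("calo_confirm", 1, lambda ev: ev["et"] > 20),
    ("fast_tracking", 5, lambda ev: ev["ntracks"] >= 1),
    ("full_reco", 50, lambda ev: ev["isolated"]),
]
print(run_chain({"et": 10, "ntracks": 0, "isolated": False}, steps))  # (False, 1)
print(run_chain({"et": 30, "ntracks": 2, "isolated": True}, steps))   # (True, 56)
```

    Most background pays only the cheap first step, which is what keeps the mean processing time per event manageable.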

  15. Development of the ATLAS High-Level Trigger Steering and Inclusive Searches for Supersymmetry

    CERN Document Server

    Eifert, T

    2009-01-01

    The presented thesis is divided into two distinct parts. The subject of the first part is the ATLAS high-level trigger (HLT), in particular the development of the HLT Steering, and the trigger user-interface. The second part presents a study of inclusive supersymmetry searches, including a novel background estimation method for the relevant Standard Model (SM) processes. The trigger system of the ATLAS experiment at the Large Hadron Collider (LHC) performs the on-line physics selection in three stages: level-1 (LVL1), level-2 (LVL2), and the event filter (EF). LVL2 and EF together form the HLT. The HLT receives events containing detector data from high-energy proton (or heavy ion) collisions, which pass the LVL1 selection at a maximum rate of 75 kHz. It must reduce this rate to ~200 Hz, while retaining the most interesting physics. The HLT is a software trigger and runs on a large computing farm. At the heart of the HLT is the Steering software. The HLT Steering must reach a decision whether or not to accept ...

  16. Online Measurement of LHC Beam Parameters with the ATLAS High Level Trigger

    CERN Document Server

    Strauss, E; The ATLAS collaboration

    2011-01-01

    We present an online measurement of the LHC beam parameters in ATLAS using the High Level Trigger (HLT). When a significant change is detected in the measured beamspot, it is distributed to the HLT. There, trigger algorithms like b-tagging which calculate impact parameters or decay lengths benefit from a precise, up-to-date set of beamspot parameters. Additionally, online feedback is sent to the LHC operators in real time. The measurement is performed by an algorithm running on the Level 2 trigger farm, leveraging the high rate of usable events. Dedicated algorithms perform a full scan of the silicon detector to reconstruct event vertices from registered tracks. The distribution of these vertices is aggregated across the farm and their shape is extracted through fits every 60 seconds to determine the beamspot position, size, and tilt. The reconstructed beam values are corrected for detector resolution effects, measured in situ using the separation of vertices whose tracks have been split into two collections....
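
    The width extraction described above (observed vertex spread deconvolved with the vertex resolution measured from split-vertex separations) can be sketched as follows. This is a hedged one-dimensional illustration with made-up numbers, not the ATLAS algorithm.

```python
import statistics as st

# The observed vertex spread is the true beam width convolved with the vertex
# resolution, so the resolution (measured in situ from split-vertex
# separations) is subtracted in quadrature.

def beam_width(vertex_xs, split_separations):
    observed_var = st.pvariance(vertex_xs)
    # Each separation is the difference of two independent half-vertices,
    # so var(separation) = 2 * resolution^2.
    resolution_var = st.pvariance(split_separations) / 2.0
    return max(observed_var - resolution_var, 0.0) ** 0.5

# Illustrative vertex x-positions and split-vertex separations (in mm).
vertices = [0.0, 0.02, -0.015, 0.01, -0.01, 0.005, -0.02, 0.015]
splits = [0.008, -0.006, 0.004, -0.009, 0.007, -0.003]
print(round(beam_width(vertices, splits), 4))
```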

  18. Concept of a Stand-Alone Muon Trigger with High Transverse Momentum Resolution for the ATLAS Detector at the High-Luminosity LHC

    CERN Document Server

    Horii, Yasuyuki; The ATLAS collaboration

    2014-01-01

    The ATLAS trigger uses a three-level trigger system. The level-1 (L1) trigger for muons with high transverse momentum pT in ATLAS is based on fast chambers with excellent time resolution which are able to identify muons coming from a particular beam crossing. These trigger chambers also provide a fast measurement of the muon transverse momenta, however with limited accuracy caused by the moderate spatial resolution along the deflecting direction of the magnetic field. The higher luminosity foreseen for Phase-II puts stringent limits on the L1 trigger rates. A way to control these rates is the improvement of the spatial resolution of the triggering device which drastically sharpens the turn-on curve of the L1 trigger. To do this, the precision tracking chambers (MDT) can be used in the L1 trigger, if the corresponding trigger latency is increased as planned. The trigger rate reduction is accomplished by strongly decreasing the rate of triggers from muons with pT lower than a predefined threshold (typically 20 ...

  19. Electronics and triggering challenges for the CMS High Granularity Calorimeter for HL-LHC

    CERN Document Server

    Borg, Johan

    2017-01-01

    The High Granularity Calorimeter (HGCAL) is presently being designed to replace the CMS endcap calorimeters for the High Luminosity phase at LHC. It will feature six million silicon sensor channels and 52 longitudinal layers. The requirements for the front-end electronics include a 0.3 fC-10 pC dynamic range, low noise (2000 e-) and low power consumption (10 mW/channel). In addition, the HGCAL will perform 50 ps resolution time-of-arrival measurements to combat the effect of the large number of interactions taking place at each bunch crossing, and will transmit both triggered readout from on-detector buffer memory and reduced-resolution real-time trigger data. We present the challenges related to the front-end electronics, data transmission and off-detector trigger preprocessing that must be overcome, and the design concepts currently being pursued.

  20. ELM mitigation with pellet ELM triggering and implications for PFCs and plasma performance in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Baylor, Larry R. [ORNL; Lang, P. [EURATOM / UKAEA, Abingdon, UK; Allen, S. L. [Lawrence Livermore National Laboratory (LLNL); Lasnier, C. J. [Lawrence Livermore National Laboratory (LLNL); Meitner, Steven J. [ORNL; Combs, Stephen Kirk [ORNL; Commaux, Nicolas JC [ORNL; Loarte, A. [ITER Organization, Cadarache, France; Jernigan, Thomas C. [ORNL

    2015-08-01

    The triggering of rapid small edge localized modes (ELMs) by high frequency pellet injection has been proposed as a method to prevent large naturally occurring ELMs that can erode the ITER plasma facing components (PFCs). Deuterium pellet injection has been used to successfully demonstrate the on-demand triggering of ELMs at much higher rates and with much smaller intensity than natural ELMs. The proposed hypothesis for the triggering mechanism of ELMs by pellets is the local pressure perturbation resulting from reheating of the pellet cloud that can exceed the local high-n ballooning mode threshold where the pellet is injected. Nonlinear MHD simulations of the pellet ELM triggering show destabilization of high-n ballooning modes by such a local pressure perturbation. A review of the recent pellet ELM triggering results from ASDEX Upgrade (AUG), DIII-D, and JET reveals that a number of uncertainties about this ELM mitigation technique still remain. These include the heat flux impact pattern on the divertor and wall from pellet triggered and natural ELMs, the necessary pellet size and injection location to reliably trigger ELMs, and the level of fueling to be expected from ELM triggering pellets and synergy with larger fueling pellets. The implications of these issues for pellet ELM mitigation in ITER and its impact on the PFCs are presented along with the design features of the pellet injection system for ITER.

  1. [A novel serial port auto trigger system for MOSFET dose acquisition].

    Science.gov (United States)

    Luo, Guangwen; Qi, Zhenyu

    2013-01-01

    To synchronize the radiation of the microSelectron-HDR (Nucletron afterloading machine) with the measurement of the MOSFET dose system, a trigger system based on an interface circuit was designed, and a corresponding monitor and trigger program was developed on the Qt platform. The interface and control system was tested and proved stable and reliable in operation. The serial-port detection technique adopted here may be extended to trigger applications for other medical devices.

  2. The ATLAS Data Acquisition and High Level Trigger system

    International Nuclear Information System (INIS)

    2016-01-01

    This paper describes the data acquisition and high level trigger system of the ATLAS experiment at the Large Hadron Collider at CERN, as deployed during Run 1. Data flow as well as control, configuration and monitoring aspects are addressed. An overview of the functionality of the system and of its performance is presented and design choices are discussed.

  3. Achieving High Reliability with People, Processes, and Technology.

    Science.gov (United States)

    Saunders, Candice L; Brennan, John A

    2017-01-01

    High reliability as a corporate value in healthcare can be achieved by meeting the "Quadruple Aim" of improving population health, reducing per capita costs, enhancing the patient experience, and improving provider wellness. This drive starts with the board of trustees, CEO, and other senior leaders who ingrain high reliability throughout the organization. At WellStar Health System, the board developed an ambitious goal to become a top-decile health system in safety and quality metrics. To achieve this goal, WellStar has embarked on a journey toward high reliability and has committed to Lean management practices consistent with the Institute for Healthcare Improvement's definition of a high-reliability organization (HRO): one that is committed to the prevention of failure, early identification and mitigation of failure, and redesign of processes based on identifiable failures. In the end, a successful HRO can provide safe, effective, patient- and family-centered, timely, efficient, and equitable care through a convergence of people, processes, and technology.

  4. TRIGGER

    CERN Multimedia

    W. Smith

    2011-01-01

    Level-1 Trigger Hardware and Software Overall the L1 trigger hardware has been running very smoothly during the last months of proton running. Modifications for the heavy-ion run have been made where necessary. The maximal design rate of 100 kHz can be sustained without problems. All L1 latencies have been rechecked. The recently installed Forward Scintillating Counters (FSC) are being used in the heavy ion run. The ZDC scintillators have been dismantled, but the calorimeter itself remains. We now send the L1 accept signal and other control signals to TOTEM. Trigger cables from TOTEM to CMS will be installed during the Christmas shutdown, so that the TOTEM data can be fully integrated within the CMS readout. New beam gas triggers have been developed, since the BSC-based trigger is no longer usable at high luminosities. In particular, a special BPTX signal is used after a quiet period with no collisions. There is an ongoing campaign to provide enough spare modules for the different subsystems. For example...

  5. TRIGGER

    CERN Multimedia

    J. Alimena

    2013-01-01

    Trigger Strategy Group The Strategy for Trigger Evolution And Monitoring (STEAM) group is responsible for the development of future High-Level Trigger menus, as well as of its DQM and validation, in collaboration and with the technical support of the PdmV group. Taking into account the beam energy and luminosity expected in 2015, a rough estimate of the trigger rates indicates a factor four increase with respect to 2012 conditions. Assuming that a factor two can be tolerated thanks to the increase in offline storage and processing capabilities, a toy menu has been developed using the new OpenHLT workflow to estimate the transverse energy/momentum thresholds that would halve the current trigger rates. The CPU time needed to run the HLT has been compared between data taken with 25 ns and 50 ns bunch spacing, for equivalent pile-up: no significant difference was observed on the global time per event distribution at the only available data point, corresponding to a pile-up of about 10 interactions. Using th...

  6. Workshop on data acquisition and trigger system simulations for high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-12-31

    This report discusses the following topics: DAQSIM: A data acquisition system simulation tool; Front end and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview -- R.; DAGAR -- A synthesis system; Proposed Silicon Compiler for Physics Applications; Timed -- LOTOS in a PROLOG Environment: an Algebraic language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies.

  7. Workshop on data acquisition and trigger system simulations for high energy physics

    International Nuclear Information System (INIS)

    1992-01-01

    This report discusses the following topics: DAQSIM: A data acquisition system simulation tool; Front end and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview -- R.; DAGAR -- A synthesis system; Proposed Silicon Compiler for Physics Applications; Timed -- LOTOS in a PROLOG Environment: an Algebraic language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies

  8. The CMS High Level Trigger System: Experience and Future Development

    CERN Document Server

    Bauer, Gerry; Bowen, Matthew; Branson, James G; Bukowiec, Sebastian; Cittolin, Sergio; Coarasa, J A; Deldicque, Christian; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Flossdorf, Alexander; Gigi, Dominique; Glege, Frank; Gomez-Reino, R; Hartl, Christian; Hegeman, Jeroen; Holzner, André; Y L Hwong; Masetti, Lorenzo; Meijers, Frans; Meschi, Emilio; Mommsen, R K; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph; Petrucci, Andrea; Pieri, Marco; Polese, Giovanni; Racz, Attila; Raginel, Olivier; Sakulin, Hannes; Sani, Matteo; Schwick, Christoph; Shpakov, Dennis; Simon, M; Spataru, A C; Sumorok, Konstanty

    2012-01-01

    The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of the order of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT, its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, are discussed.
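
    The farm size quoted above follows from a simple capacity relation: a software trigger needs at least input_rate x mean_time_per_event cores, since each core processes one event at a time. The numbers and safety factor in this sketch are illustrative assumptions, not CMS planning figures.

```python
import math

# Simple capacity estimate for a software-trigger farm (illustrative only).

def cores_needed(input_rate_hz, mean_time_s, safety_factor=1.5):
    return math.ceil(input_rate_hz * mean_time_s * safety_factor)

# 100 kHz input at an assumed 100 ms mean processing time per event.
print(cores_needed(100_000, 0.1))  # 15000 cores
```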

  9. A read-out buffer prototype for ATLAS high level triggers

    CERN Document Server

    Calvet, D; Huet, M; Le Dû, P; Mandjavidze, I D; Mur, M

    2000-01-01

    Read-Out Buffers are critical components in the dataflow chain of the ATLAS Trigger/DAQ system. At up to 75 kHz, after each Level-1 trigger accept signal, these devices receive and store digitized data from groups of front-end electronic channels. Several Read-Out Buffers are grouped to form a Read-Out Buffer Complex that acts as a data server for the High Level Triggers selection algorithms and for the final data collection system. This paper describes a functional prototype of a Read-Out Buffer based on a custom made PCI mezzanine card that is designed to accept input data at up to 160 MB/s, to store up to 8 MB of data and to distribute data chunks at the desired request rate. We describe the hardware of the card that is based on an Intel I960 processor and CPLDs. We present the integration of several of these cards in a Read-Out Buffer Complex. We measure various performance figures and we discuss to which extent these can fulfill ATLAS needs. 5 Refs.
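
    The buffering problem described above reduces to an occupancy model: fragments arrive at the Level-1 accept rate and are drained by High Level Trigger requests, and the 8 MB store must absorb the difference between the two rates. The fragment size and drain throughput below are assumptions for illustration; only the 75 kHz rate and 8 MB capacity come from the abstract.

```python
# Toy occupancy model for a read-out buffer.
ACCEPT_RATE_HZ = 75_000          # Level-1 accept rate (from the abstract)
FRAGMENT_BYTES = 1_000           # assumed mean fragment size
DRAIN_BYTES_PER_S = 60_000_000   # assumed HLT request throughput
CAPACITY_BYTES = 8_000_000       # on-card storage (from the abstract)

fill_rate = ACCEPT_RATE_HZ * FRAGMENT_BYTES - DRAIN_BYTES_PER_S  # net B/s
seconds_to_full = CAPACITY_BYTES / fill_rate if fill_rate > 0 else float("inf")
print(fill_rate, round(seconds_to_full, 3))  # 15000000 0.533
```

    With these assumed numbers the buffer would overflow in about half a second of sustained imbalance, which is why the drain rate must keep pace with the average input rate.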

  10. Commissioning of the ATLAS High Level Trigger with single beam and cosmic rays

    Energy Technology Data Exchange (ETDEWEB)

    Di Mattia, A, E-mail: dimattia@mail.cern.c [Michigan State University - Department of Physics and Astronomy 3218 Biomedical Physical Science - East Lansing, MI 48824-2320 (United States)

    2010-04-01

    ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). The trigger system is responsible for making the online selection of interesting collision events. At the LHC design luminosity of 10^34 cm^-2 s^-1 it will need to achieve a rejection factor of the order of 10^-7 against random proton-proton interactions, while selecting with high efficiency events that are needed for physics analyses. After a first processing level using custom electronics based on FPGAs and ASICs, the trigger selection is made by software running on two processor farms, containing a total of around two thousand multi-core machines. This system is known as the High Level Trigger (HLT). To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection of background events. The recent LHC startup and short single-beam run provided a 'stress test' of the system and some initial calibration data. Following this period, ATLAS continued to collect cosmic-ray events for detector alignment and calibration purposes. After giving an overview of the trigger design and its innovative features, this paper focuses on the experience gained from operating the ATLAS trigger with single LHC beams and cosmic-rays.

  11. A real-time high level trigger system for CALIFA

    Energy Technology Data Exchange (ETDEWEB)

    Gernhaeuser, Roman; Heiss, Benjamin; Klenze, Philipp; Remmels, Patrick; Winkel, Max [Physik Department, Technische Universitaet Muenchen (Germany)

    2016-07-01

    The CALIFA calorimeter with its about 2600 scintillator crystals is a key component of the R{sup 3}B setup. For many experiments CALIFA will have to perform complex trigger decisions depending on the total energy deposition, γ multiplicities or geometrical patterns with minimal latency. This selection is an essential tool for the accurate preselection of relevant events and provides significant data reduction. The challenge is to aggregate local trigger information from up to 200 readout modules. The trigger tree transport protocol (T{sup 3}P) will use dedicated FPGA boards and bus systems to collect trigger information and perform hierarchical summations to ensure a trigger decision within 1 μs. The basic concept and implementation of T{sup 3}P are presented together with first tests on a prototype system.
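
    The hierarchical summation T{sup 3}P performs in FPGAs amounts to a fixed-fan-in reduction tree whose depth grows only logarithmically with the number of readout modules, which is what makes a sub-microsecond decision feasible. A software sketch; the fan-in of 4 is an assumed value, not a published T{sup 3}P parameter:

```python
def tree_sum(values, fan_in=4):
    """Reduce a list of per-module trigger sums with a fixed-fan-in tree,
    returning the grand total and the number of summation levels."""
    levels = 0
    while len(values) > 1:
        values = [sum(values[i:i + fan_in])
                  for i in range(0, len(values), fan_in)]
        levels += 1
    return values[0], levels

# 200 readout modules, as in the abstract; fan-in of 4 is an assumption.
total, depth = tree_sum([1.0] * 200, fan_in=4)  # total 200.0 after 4 levels
```

With 4 summation levels and a fixed per-level latency, the total aggregation time is bounded regardless of which modules fired.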

  12. High power klystrons for efficient reliable high power amplifiers

    Science.gov (United States)

    Levin, M.

    1980-11-01

    This report covers the design of reliable high efficiency, high power klystrons which may be used in both existing and proposed troposcatter radio systems. High Power (10 kW) klystron designs were generated in C-band (4.4 GHz to 5.0 GHz), S-band (2.5 GHz to 2.7 GHz), and L-band or UHF frequencies (755 MHz to 985 MHz). The tubes were designed for power supply compatibility and use with a vapor/liquid phase heat exchanger. Four (4) S-band tubes were developed in the course of this program along with two (2) matching focusing solenoids and two (2) heat exchangers. These tubes use five (5) tuners with counters which are attached to the focusing solenoids. A reliability mathematical model of the tube and heat exchanger system was also generated.
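
    A series reliability model of the tube-plus-heat-exchanger system, of the kind mentioned above, can be sketched with constant failure rates: the system reliability is the product of the component reliabilities, R(t) = exp(-t * sum(1/MTBF_i)). The MTBF figures below are hypothetical, for illustration only:

```python
import math

def series_reliability(t_hours, mtbf_hours):
    """R(t) = exp(-t * sum(1/MTBF_i)) for a series system with constant
    failure rates (no redundancy: any component failure fails the system)."""
    failure_rate = sum(1.0 / m for m in mtbf_hours)
    return math.exp(-failure_rate * t_hours)

# Hypothetical MTBFs (hours) for the klystron tube and the heat exchanger.
r_1000h = series_reliability(1000, [50_000, 200_000])
```

Under these assumed MTBFs the probability of surviving 1000 hours is about 97.5%; the component with the shortest MTBF dominates the combined failure rate.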

  13. High level trigger system for the ALICE experiment

    International Nuclear Information System (INIS)

    Frankenfeld, U.; Roehrich, D.; Ullaland, K.; Vestabo, A.; Helstrup, H.; Lien, J.; Lindenstruth, V.; Schulz, M.; Steinbeck, T.; Wiebalck, A.; Skaali, B.

    2001-01-01

    The ALICE experiment at the Large Hadron Collider (LHC) at CERN will detect up to 20,000 particles in a single Pb-Pb event, resulting in a data rate of ∼75 MByte/event. The event rate is limited by the bandwidth of the data storage system. Higher rates are possible by selecting interesting events and subevents (High Level Trigger) or compressing the data efficiently with modeling techniques. Both require fast parallel pattern recognition. One possible solution to process the detector data at such rates is a farm of clustered SMP nodes, based on off-the-shelf PCs and connected by a high bandwidth, low latency network.
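
    The storage-limited event rate follows from simple arithmetic: bandwidth divided by event size. Only the ~75 MB/event figure comes from the abstract; the storage bandwidth below is an assumed number for illustration:

```python
def max_event_rate_hz(storage_bandwidth_mb_s, event_size_mb):
    """Event rate the storage system can sustain: bandwidth / event size."""
    return storage_bandwidth_mb_s / event_size_mb

# ~75 MB/event is from the abstract; the 1250 MB/s storage bandwidth is an
# assumed figure for illustration.
rate = max_event_rate_hz(1250.0, 75.0)  # ~16.7 events/s
```

Either halving the event size through compression or selecting subevents directly multiplies the achievable rate, which is the motivation for the High Level Trigger.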

  14. Real-time TPC analysis with the ALICE High-Level Trigger

    International Nuclear Information System (INIS)

    Lindenstruth, V.; Loizides, C.; Roehrich, D.; Skaali, B.; Steinbeck, T.; Stock, R.; Tilsner, H.; Ullaland, K.; Vestboe, A.; Vik, T.

    2004-01-01

    The ALICE High-Level Trigger processes data online, to either select interesting (sub-) events, or to compress data efficiently by modeling techniques. Focusing on the main data source, the Time Projection Chamber, the architecture of the system and the current state of the tracking and compression methods are outlined

  15. Development, validation and integration of the ATLAS Trigger System software in Run 2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00377077; The ATLAS collaboration

    2017-01-01

    The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated to various sub-detectors that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and work flow of the ongoing trigger software development, validation, and deployment. The goal of this development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking run as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high per...

  16. L1Track: A fast Level 1 track trigger for the ATLAS high luminosity upgrade

    International Nuclear Information System (INIS)

    Cerri, Alessandro

    2016-01-01

    With the planned high-luminosity upgrade of the LHC (HL-LHC), the ATLAS detector will see its collision rate increase by approximately a factor of 5 with respect to the current LHC operation. The earliest hardware-based ATLAS trigger stage (“Level 1”) will have to provide a higher rejection factor in a more difficult environment: a new improved Level 1 trigger architecture is under study, which includes the possibility of extracting, with low latency and high accuracy, tracking information in time for the decision-making process. In this context, the feasibility of potential approaches aimed at providing low-latency, high-quality tracking at Level 1 is discussed. - Highlights: • The HL-LHC requires high-performance event selection. • ATLAS is studying the implementation of tracking at the very first trigger level. • Low latency and high quality appear achievable with dedicated hardware and an adequate detector readout architecture.

  17. The CMS High Level Trigger System

    CERN Document Server

    Afaq, A; Bauer, G; Biery, K; Boyer, V; Branson, J; Brett, A; Cano, E; Carboni, A; Cheung, H; Ciganek, M; Cittolin, S; Dagenhart, W; Erhan, S; Gigi, D; Glege, F; Gómez-Reino, Robert; Gulmini, M; Gutiérrez-Mlot, E; Gutleber, J; Jacobs, C; Kim, J C; Klute, M; Kowalkowski, J; Lipeles, E; Lopez-Perez, Juan Antonio; Maron, G; Meijers, F; Meschi, E; Moser, R; Murray, S; Oh, A; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Pollet, L; Rácz, A; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Sexton-Kennedy, E; Sumorok, K; Suzuki, I; Tsirigkas, D; Varela, J

    2007-01-01

    The CMS Data Acquisition (DAQ) System relies on a purely software driven High Level Trigger (HLT) to reduce the full Level-1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the Filter Farm and the procedures to validate the filtering code within the DAQ environment are described.

  18. Pulsed laser triggered high speed microfluidic switch

    Science.gov (United States)

    Wu, Ting-Hsiang; Gao, Lanyu; Chen, Yue; Wei, Kenneth; Chiou, Pei-Yu

    2008-10-01

    We report a high-speed microfluidic switch capable of achieving a switching time of 10 μs. The switching mechanism is realized by exciting dynamic vapor bubbles with focused laser pulses in a microfluidic polydimethylsiloxane (PDMS) channel. The bubble expansion deforms the elastic PDMS channel wall and squeezes the adjacent sample channel to control its fluid and particle flows as captured by the time-resolved imaging system. A switching of polystyrene microspheres in a Y-shaped channel has also been demonstrated. This ultrafast laser triggered switching mechanism has the potential to advance the sorting speed of state-of-the-art microscale fluorescence activated cell sorting devices.

  19. Design of robust reliable control for T-S fuzzy Markovian jumping delayed neutral type neural networks with probabilistic actuator faults and leakage delays: An event-triggered communication scheme.

    Science.gov (United States)

    Syed Ali, M; Vadivel, R; Saravanakumar, R

    2018-06-01

    This study examines the problem of robust reliable control for Takagi-Sugeno (T-S) fuzzy Markovian jumping delayed neural networks with probabilistic actuator faults and leakage terms, under an event-triggered communication scheme. First, the randomly occurring actuator faults and their failure rates are governed by two sets of unrelated random variables; based on the probabilistic failures of every actuator, a new type of distribution-based event-triggered fault model is proposed, which accounts for the effect of transmission delay. Second, a Takagi-Sugeno (T-S) fuzzy model is adopted for the neural networks and the randomness of actuator failures is modeled in a Markov jump model framework. Third, to guarantee that the considered closed-loop system is exponentially mean-square stable with a prescribed reliable control performance, a Markov jump event-triggered scheme is designed, which is the main purpose of this study. Fourth, by constructing an appropriate Lyapunov-Krasovskii functional and employing the Newton-Leibniz formulation and integral inequalities, several delay-dependent criteria for the solvability of the addressed problem are derived. The obtained stability criteria are stated in terms of linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Finally, numerical examples are given to illustrate the effectiveness and reduced conservatism of the proposed results over existing ones; one example is supported by a real-life application of the benchmark problem. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
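
    The core idea of an event-triggered communication scheme is that a new state sample is transmitted to the controller only when it deviates sufficiently from the last transmitted one, saving network bandwidth between events. The sketch below shows a generic relative-threshold trigger; this is a textbook form, not the specific distribution-based trigger condition of the paper:

```python
def should_transmit(x, x_last, sigma=0.1):
    """Relative-threshold event trigger: transmit only when the state has
    drifted from the last transmitted sample by more than sigma * ||x||."""
    err = sum((a - b) ** 2 for a, b in zip(x, x_last)) ** 0.5
    return err > sigma * sum(a ** 2 for a in x) ** 0.5
```

Larger values of the (assumed) threshold parameter sigma trade control performance for fewer transmissions; the LMI conditions in papers of this kind certify stability for a given sigma.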

  20. Prompt triggering of edge localized modes through lithium granule injection on EAST

    Science.gov (United States)

    Lunsford, Robert; Sun, Z.; Hu, J. S.; Xu, W.; Zuo, G. Z.; Gong, X. Z.; Wan, B. N.; Li, J. G.; Huang, M.; Maingi, R.; Diallo, A.; Tritz, K.; the EAST Team

    2017-10-01

    We report successful triggering of edge localized modes (ELMs) in EAST with Lithium (Li) micropellets, and the observed dependence of ELM triggering efficiency on granule size. ELM control is essential for successful ITER operation throughout the entire campaign, relying on magnetic perturbations for ELM suppression and ELM frequency enhancement via pellet injection. To separate the task of fueling from ELM pacing, we initiate the prompt generation of ELMs via impurity granule injection. Lithium granules ranging in size from 200-1000 microns are mechanically injected into upper-single-null EAST long pulse H-mode discharges. The injections are monitored for their effect on high-Z impurity accumulation and to assess the pressure perturbation required for reliable ELM triggering. We have determined that granules of diameter larger than 600 microns (corresponding to 5.2 × 10^18 Li atoms) are successful at triggering ELMs more than 90% of the time. The triggering efficiency drops precipitously to less than 40% as the granule size is reduced to 400 microns (1.5 × 10^18 Li atoms), indicating that a triggering threshold has been crossed. Using this information, an optimal impurity granule size which will regularly trigger a prompt ELM in these EAST discharges is determined. Coupling these results with alternate discharge scenarios on EAST and similar experiments performed on DIII-D provides the possibility of extrapolation to future devices.
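
    The quoted atom counts can be reproduced from the granule geometry and lithium's bulk properties (density 0.534 g/cm^3, molar mass 6.94 g/mol), assuming spherical granules:

```python
import math

def li_atoms_in_granule(diameter_um):
    """Atoms in a spherical lithium granule of the given diameter (microns)."""
    RHO = 0.534          # g/cm^3, lithium density
    MOLAR_MASS = 6.94    # g/mol
    AVOGADRO = 6.022e23  # atoms/mol
    r_cm = diameter_um * 1e-4 / 2.0
    volume_cm3 = 4.0 / 3.0 * math.pi * r_cm ** 3
    return volume_cm3 * RHO / MOLAR_MASS * AVOGADRO

n_600 = li_atoms_in_granule(600)  # ~5.2e18, as quoted in the abstract
n_400 = li_atoms_in_granule(400)  # ~1.5e18
```

A 600 micron granule indeed contains about 5.2 × 10^18 atoms, and scaling by (400/600)^3 reproduces the 1.5 × 10^18 figure for the 400 micron granule.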

  1. Online measurement of LHC beam parameters with the ATLAS High Level Trigger

    International Nuclear Information System (INIS)

    Strauss, E

    2012-01-01

    We present an online measurement of the LHC beamspot parameters in ATLAS using the High Level Trigger (HLT). When a significant change is detected in the measured beamspot, it is distributed to the HLT. There, trigger algorithms like b-tagging which calculate impact parameters or decay lengths benefit from a precise, up-to-date set of beamspot parameters. Additionally, online feedback is sent to the LHC operators in real time. The measurement is performed by an algorithm running on the Level 2 trigger farm, leveraging the high rate of usable events. Dedicated algorithms perform a full scan of the silicon detector to reconstruct event vertices from registered tracks. The distribution of these vertices is aggregated across the farm and their shape is extracted through fits every 60 seconds to determine the beamspot position, size, and tilt. The reconstructed beamspot values are corrected for detector resolution effects, measured in situ using the separation of vertices whose tracks have been split into two collections. Furthermore, measurements for individual bunch crossings have allowed for studies of single-bunch distributions as well as the behavior of bunch trains. This talk will cover the constraints imposed by the online environment and describe how these measurements are accomplished with the given resources. The algorithm tasks must be completed within the time constraints of the Level 2 trigger, with limited CPU and bandwidth allocations. This places an emphasis on efficient algorithm design and the minimization of data requests.
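
    The resolution correction mentioned above is a subtraction in quadrature: the true beam width is obtained by removing the vertex resolution, itself measured in situ from split vertices, from the fitted vertex spread. A sketch with hypothetical numbers in micrometres:

```python
def beam_width(sigma_measured, sigma_resolution):
    """Subtract the vertex resolution in quadrature from the fitted spread:
    sigma_beam = sqrt(sigma_measured^2 - sigma_resolution^2)."""
    return (sigma_measured ** 2 - sigma_resolution ** 2) ** 0.5

# Hypothetical widths in micrometres, for illustration only.
w = beam_width(60.0, 30.0)  # ~52 um
```

Because the subtraction is in quadrature, a resolution half the size of the measured spread biases the raw width by only about 15%, but it still must be removed for a precise beamspot size.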

  2. BTeV Trigger

    International Nuclear Information System (INIS)

    Gottschalk, Erik E.

    2006-01-01

    BTeV was designed to conduct precision studies of CP violation in BB-bar events using a forward-geometry detector in a hadron collider. The detector was optimized for high-rate detection of beauty and charm particles produced in collisions between protons and antiprotons. The trigger was designed to take advantage of the main difference between events with beauty and charm particles and more typical hadronic events: the presence of detached beauty and charm decay vertices. The first stage of the BTeV trigger was to receive data from a pixel vertex detector, reconstruct tracks and vertices for every beam crossing, reject at least 98% of beam crossings in which neither beauty nor charm particles were produced, and trigger on beauty events with high efficiency. An overview of the trigger design and its evolution to include commodity networking and computing components is presented.
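
    A detached-vertex requirement of the kind the BTeV trigger applied can be expressed as a cut on the decay-length significance: the distance between the candidate decay vertex and the primary vertex, in units of the vertex resolution. The cut value and coordinates below are illustrative assumptions, not the actual BTeV selection:

```python
def is_detached(decay_vertex, primary_vertex, sigma_mm, n_sigma=4.0):
    """Decay-length significance cut: the vertex counts as detached when its
    distance from the primary vertex exceeds n_sigma vertex resolutions."""
    dist = sum((a - b) ** 2
               for a, b in zip(decay_vertex, primary_vertex)) ** 0.5
    return dist / sigma_mm > n_sigma

# A vertex 0.5 mm from the primary, with 0.05 mm resolution: 10 sigma.
detached = is_detached((0.5, 0.0, 0.0), (0.0, 0.0, 0.0), sigma_mm=0.05)
```

Light-quark events rarely produce vertices several sigma from the primary, which is what allows rejecting the quoted 98% of beam crossings while keeping beauty events.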

  3. CMS Trigger Performance

    CERN Document Server

    Donato, Silvio

    2017-01-01

    During its second run of operation (Run 2), which started in 2015, the LHC will deliver a peak instantaneous luminosity that may reach $2 \cdot 10^{34}$ cm$^{-2}$s$^{-1}$ with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realized by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has been through a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT go through big improvements; in particular, new appr...

  4. Commissioning of the ATLAS high-level trigger with single beam and cosmic rays

    CERN Document Server

    Özcan, V Erkcan

    2010-01-01

    ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). Using fast reconstruction algorithms, its trigger system needs to efficiently reject a huge rate of background events and still select potentially interesting ones with good efficiency. After a first processing level using custom electronics, the trigger selection is made by software running on two processor farms, designed to have a total of around two thousand multi-core machines. This system is known as the High Level Trigger (HLT). To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection of background events. The recent LHC startup and short single-beam run provided a "stress test" of the trigger. Following this period, ATLAS continued to collect cosmic-ray events for detector alignment and calibration purposes. These running periods allowed strict tests of the HLT reconstruction and selection algorithms as we...

  5. High-reliability computing for the smarter planet

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Graham, Paul; Manuzzato, Andrea; Dehon, Andre

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is necessary

  6. High-reliability computing for the smarter planet

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Graham, Paul [Los Alamos National Laboratory; Manuzzato, Andrea [UNIV OF PADOVA; Dehon, Andre [UNIV OF PENN; Carter, Nicholas [INTEL CORPORATION

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is necessary

  7. Performance of the ATLAS Muon Trigger and Phase-1 Upgrade of Level-1 Endcap Muon Trigger

    CERN Document Server

    Mizukami, Atsushi; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment utilises a trigger system to efficiently record interesting events. It consists of first-level and high-level triggers. The first-level trigger is implemented with custom-built hardware to reduce the event rate from 40 MHz to 100 kHz. Then the software-based high-level triggers refine the trigger decisions, reducing the output rate down to 1 kHz. Events with muons in the final state are an important signature for many physics topics at the LHC. An efficient trigger on muons and a detailed understanding of its performance are required. Trigger efficiencies are, for example, obtained from the muon decay of the Z boson, with a Tag&Probe method, using proton-proton collision data collected in 2016 at a centre-of-mass energy of 13 TeV. The LHC is expected to increase its instantaneous luminosity to $3\times10^{34} \rm{cm^{-2}s^{-1}}$ after the phase-1 upgrade between 2018 and 2020. The upgrade of the ATLAS trigger system is mandatory to cope with this high luminosity. In the phase-1 upgrade, new det...
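
    The Tag&Probe efficiency mentioned above is the fraction of probe muons (from Z decays selected via a well-identified tag muon) that also fire the trigger, with a binomial uncertainty. A sketch with invented counts:

```python
def tag_and_probe_efficiency(n_probes, n_passing):
    """Trigger efficiency and its binomial uncertainty from tag-and-probe
    counts: eff = pass/total, err = sqrt(eff * (1 - eff) / total)."""
    eff = n_passing / n_probes
    err = (eff * (1.0 - eff) / n_probes) ** 0.5
    return eff, err

# Invented counts for illustration.
eff, err = tag_and_probe_efficiency(10000, 9500)  # eff = 0.95
```

Because the tag fully identifies the event as a Z decay, the probe leg gives an unbiased in-situ measurement of the trigger efficiency without relying on simulation.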

  8. Challenges of front-end and triggering electronics for High Granularity Calorimetry

    CERN Document Server

    Puljak, Ivica

    2017-01-01

    A high granularity calorimeter is presently being designed by the CMS Collaboration to replace the existing endcap detectors. It must be able to cope with the very high collision rates, imposing the development of novel filtering and triggering strategies, as well as with the harsh radiation environment of the high-luminosity LHC. In this paper we present an overview of the full electronics architecture and the performance of prototype components and algorithms.

  9. A 16 channel discriminator VME board with enhanced triggering capabilities

    International Nuclear Information System (INIS)

    Borsato, E; Garfagnini, A; Menon, G

    2012-01-01

    Electronics and data acquisition systems used in small and large scale laboratories often have to handle analog signals with varying polarity, amplitude and duration which have to be digitized to be used as trigger signals to validate the acquired data. In the specific case of experiments dealing with ionizing radiation, ancillary particle detectors (for instance plastic scintillators or Resistive Plate Chambers) are used to trigger and select the impinging particles for the experiment. A novel approach using commercial LVDS line receivers as discriminator devices is presented. Such devices, with a proper calibration, can handle positive and negative analog signals in a wide dynamic range (from 20 mV to 800 mV signal amplitude). The clear advantages, with respect to conventional discriminator devices, are reduced costs, the high reliability of a mature technology and the possibility of a high integration scale. Moreover, commercial discriminator boards with positive input signal and a wide threshold swing are not available on the market. The present paper describes the design and characterization of a VME board capable of handling 16 differential or single-ended input channels. The output digital signals, available independently for each input, can be combined in the board into three independent trigger logic units which provide additional outputs for the end user.

  10. A 16 channel discriminator VME board with enhanced triggering capabilities

    Science.gov (United States)

    Borsato, E.; Garfagnini, A.; Menon, G.

    2012-08-01

    Electronics and data acquisition systems used in small and large scale laboratories often have to handle analog signals with varying polarity, amplitude and duration which have to be digitized to be used as trigger signals to validate the acquired data. In the specific case of experiments dealing with ionizing radiation, ancillary particle detectors (for instance plastic scintillators or Resistive Plate Chambers) are used to trigger and select the impinging particles for the experiment. A novel approach using commercial LVDS line receivers as discriminator devices is presented. Such devices, with a proper calibration, can handle positive and negative analog signals in a wide dynamic range (from 20 mV to 800 mV signal amplitude). The clear advantages, with respect to conventional discriminator devices, are reduced costs, the high reliability of a mature technology and the possibility of a high integration scale. Moreover, commercial discriminator boards with positive input signal and a wide threshold swing are not available on the market. The present paper describes the design and characterization of a VME board capable of handling 16 differential or single-ended input channels. The output digital signals, available independently for each input, can be combined in the board into three independent trigger logic units which provide additional outputs for the end user.

  11. Triggering soft bombs at the LHC

    Science.gov (United States)

    Knapen, Simon; Griso, Simone Pagan; Papucci, Michele; Robinson, Dean J.

    2017-08-01

    Very high multiplicity, spherically-symmetric distributions of soft particles, with p_T ~ few × 100 MeV, may be a signature of strongly-coupled hidden valleys that exhibit long, efficient showering windows. With traditional triggers, such 'soft bomb' events closely resemble pile-up and are therefore only recorded with minimum bias triggers at a very low efficiency. We demonstrate a proof-of-concept for a high-level triggering strategy that efficiently separates soft bombs from pile-up by searching for a 'belt of fire': a high density band of hits on the innermost layer of the tracker. Seeding our proposed high-level trigger with existing jet, missing transverse energy or lepton hardware-level triggers, we show that net trigger efficiencies of order 10% are possible for bombs of mass several × 100 GeV. We also consider the special case that soft bombs are the result of an exotic decay of the 125 GeV Higgs. The fiducial rate for 'Higgs bombs' triggered in this manner is marginally higher than the rate achievable by triggering directly on a hard muon from associated Higgs production.

  12. Using the CMS high level trigger as a cloud resource

    International Nuclear Information System (INIS)

    Colling, David; Huffman, Adam; Bauer, Daniela; McCrae, Alison; Cinquilli, Mattia; Gowdy, Stephen; Coarasa, Jose Antonio; Ozga, Wojciech; Chaze, Olivier; Lahiff, Andrew; Grandi, Claudio; Tiradani, Anthony; Sgaravatto, Massimo

    2014-01-01

    The CMS High Level Trigger is a compute farm of more than 10,000 cores. During data taking this resource is heavily used and is an integral part of the experiment's triggering system. However, outside of data taking periods this resource is largely unused. We describe why CMS wants to use the HLT as a cloud resource (outside of data taking periods) and how this has been achieved. In doing this we have turned a single-use cluster into an agile resource for CMS production computing. While we are able to use the HLT as a production cloud resource, there is still considerable further work that CMS needs to carry out before this resource can be used with the desired agility. This report, therefore, represents a snapshot of this activity at the time of CHEP 2013.

  13. Column Grid Array Rework for High Reliability

    Science.gov (United States)

    Mehta, Atul C.; Bodie, Charles C.

    2008-01-01

    Due to requirements for reduced size and weight, the use of grid array packages in space applications has become commonplace. To meet the requirements of high reliability and a high number of I/Os, ceramic column grid array (CCGA) packages were selected for major electronic components used in the next Mars Rover mission (specifically, high density Field Programmable Gate Arrays). The probability of removal and replacement of these devices on the actual flight printed wiring board assemblies is deemed to be very high because of last minute discoveries in final test which will dictate changes in the firmware. The questions and challenges presented to the manufacturing organizations engaged in the production of high reliability electronic assemblies are: Is the reliability of the PWBA adversely affected by rework (removal and replacement) of the CGA package? And how many times can we rework the same board without destroying a pad or degrading the lifetime of the assembly? To answer these questions, the most complex printed wiring board assembly used by the project was chosen as the test vehicle; the PWB was modified to provide a daisy chain pattern, and a number of bare PWBs were acquired to this modified design. Non-functional 624-pin CGA packages with internal daisy chains matching the pattern on the PWB were procured. The combination of the modified PWB and the daisy-chained packages enables continuity measurements of every soldered contact during subsequent testing and thermal cycling. Several test vehicle boards were assembled, reworked and then thermal cycled to assess the reliability of the solder joints and board material, including pads and traces near the CGA. The details of the rework process and the results of thermal cycling are presented in this paper.

  14. Development, Validation and Integration of the ATLAS Trigger System Software in Run 2

    Science.gov (United States)

    Keyes, Robert; ATLAS Collaboration

    2017-10-01

    The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated to various sub-detectors that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and work flow of the ongoing trigger software development, validation, and deployment. The goal of this development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking run as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high performance computing grid with high priority. Performance metrics ranging from low-level memory and CPU requirements, to distributions and efficiencies of high-level physics quantities are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.

  15. Trigger design for a gamma ray detector of HIRFL-ETF

    Science.gov (United States)

    Du, Zhong-Wei; Su, Hong; Qian, Yi; Kong, Jie

    2013-10-01

    The Gamma Ray Array Detector (GRAD) is one subsystem of HIRFL-ETF (the External Target Facility (ETF) of the Heavy Ion Research Facility in Lanzhou (HIRFL)). It is capable of measuring the energy of gamma-rays with 1024 CsI scintillators in in-beam nuclear experiments. The GRAD trigger should select the valid events and reject the data from the scintillators which are not hit by the gamma-ray. The GRAD trigger has been developed based on Field Programmable Gate Arrays (FPGAs) and a PXI interface. It makes prompt trigger decisions to select valid events by processing the hit signals from the 1024 CsI scintillators. According to the physical requirements, the GRAD trigger module supplies 12-bit trigger information to the global trigger system of ETF and supplies a trigger signal for the data acquisition (DAQ) system of GRAD. In addition, the GRAD trigger generates trigger data that are packed and transmitted to the host computer via the PXI bus to be saved for off-line analysis. The trigger processing is implemented in the front-end electronics of GRAD and in one FPGA of the GRAD trigger module. The logic for PXI transmission and reconfiguration is implemented in another FPGA of the GRAD trigger module. During the gamma-ray experiments, the GRAD trigger has performed reliably and efficiently, and its functionality satisfies the physical requirements.

  16. Trigger design for a gamma ray detector of HIRFL-ETF

    International Nuclear Information System (INIS)

    Du Zhongwei; Su Hong; Qian Yi; Kong Jie

    2013-01-01

    The Gamma Ray Array Detector (GRAD) is one subsystem of HIRFL-ETF (the External Target Facility (ETF) of the Heavy Ion Research Facility in Lanzhou (HIRFL)). It is capable of measuring the energy of gamma-rays with 1024 CsI scintillators in in-beam nuclear experiments. The GRAD trigger should select the valid events and reject the data from the scintillators which are not hit by the gamma-ray. The GRAD trigger has been developed based on Field Programmable Gate Arrays (FPGAs) and a PXI interface. It makes prompt trigger decisions to select valid events by processing the hit signals from the 1024 CsI scintillators. According to the physical requirements, the GRAD trigger module supplies 12-bit trigger information to the global trigger system of ETF and supplies a trigger signal for the data acquisition (DAQ) system of GRAD. In addition, the GRAD trigger generates trigger data that are packed and transmitted to the host computer via the PXI bus to be saved for off-line analysis. The trigger processing is implemented in the front-end electronics of GRAD and in one FPGA of the GRAD trigger module. The logic for PXI transmission and reconfiguration is implemented in another FPGA of the GRAD trigger module. During the gamma-ray experiments, the GRAD trigger has performed reliably and efficiently, and its functionality satisfies the physical requirements. (authors)

  17. Delivering high performance BWR fuel reliably

    International Nuclear Information System (INIS)

    Schardt, J.F.

    1998-01-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high-performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  18. Highly reliable electro-hydraulic control system

    International Nuclear Information System (INIS)

    Mande, Morima; Hiyama, Hiroshi; Takahashi, Makoto

    1984-01-01

    The unscheduled shutdown of nuclear power stations disturbs the power system and, by lowering the capacity factor, exerts a large influence on power generation cost; therefore, high reliability is required of the control systems of nuclear power stations. Toshiba Corp. has worked to improve the reliability of power station control systems, and this report describes the electro-hydraulic control system for the turbines of nuclear power stations. The main functions of the electro-hydraulic control system are the control of main steam pressure with steam regulation valves and turbine bypass valves, the control of turbine speed and load, the prevention of turbine overspeed, the protection of turbines and so on. The system is composed of pressure sensors and a speed sensor; the control board containing the electronic circuits for control computation and the protective sequence; the oil cylinders, servo valves and opening detectors of the control valves; a high-pressure oil hydraulic machine and piping; the operating panel and so on. The main features are the adoption of a triplicated intermediate-value selection method, redundant protection sensors with 2-out-of-3 trip logic, redundant power sources, and improved reliability of the electronic circuit hardware and the oil hydraulic system. (Kako, I.)
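The intermediate-value selection and 2-out-of-3 trip logic mentioned in the abstract can be sketched in a few lines. This is an illustrative model of the general voting schemes, not Toshiba's implementation; the function names and example values are assumptions.

```python
def median_select(a, b, c):
    """Intermediate-value (median) selection for triplicated sensor channels:
    taking the middle of three readings rejects one faulty channel."""
    return sorted((a, b, c))[1]

def two_out_of_three_trip(t1, t2, t3):
    """2-out-of-3 trip logic: trip only when at least two channels demand it,
    so a single failed sensor can neither spuriously trip nor block a trip."""
    return sum((t1, t2, t3)) >= 2

# A stuck-high pressure channel (999.0) is outvoted by the two healthy ones.
print(median_select(6.8, 6.9, 999.0))            # 6.9
print(two_out_of_three_trip(True, False, True))  # True
```

Both schemes tolerate any single channel failure; the median selector additionally masks drifting (not just stuck) sensors.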

  19. Can Pulsed Electromagnetic Fields Trigger On-Demand Drug Release from High-Tm Magnetoliposomes?

    Directory of Open Access Journals (Sweden)

    Martina Nardoni

    2018-03-01

    Recently, magnetic nanoparticles (MNPs) have been used to trigger drug release from magnetoliposomes through a magneto-nanomechanical approach, where the mechanical actuation of the MNPs is used to enhance the membrane permeability. This result can be achieved effectively with a low-intensity, non-thermal alternating magnetic field (AMF), which, however, has found rare clinical application. Therefore, a different modality of generating non-thermal magnetic fields has now been investigated. Specifically, the intermittent signals generated by non-thermal pulsed electromagnetic fields (PEMFs) were used to verify whether, once applied to high-transition-temperature magnetoliposomes (high-Tm MLs), they could efficiently trigger the release of a hydrophilic model drug. To this end, hydrophilic MNPs were combined with hydrogenated soybean phosphatidylcholine and cholesterol to design high-Tm MLs. The release of a dye was evaluated under the effect of PEMFs for different exposure times. The MNP motions produced by the PEMFs could effectively increase the bilayer permeability without affecting the liposome integrity, and resulted in nearly 20% release after 3 h of exposure. Therefore, the current contribution provides an exciting proof-of-concept for the ability of PEMFs to trigger drug release, considering that PEMFs already find application in therapy due to their anti-inflammatory effects.

  20. Can Pulsed Electromagnetic Fields Trigger On-Demand Drug Release from High-Tm Magnetoliposomes?

    Science.gov (United States)

    Nardoni, Martina; Della Valle, Elena; Liberti, Micaela; Relucenti, Michela; Casadei, Maria Antonietta; Paolicelli, Patrizia; Apollonio, Francesca; Petralito, Stefania

    2018-03-27

    Recently, magnetic nanoparticles (MNPs) have been used to trigger drug release from magnetoliposomes through a magneto-nanomechanical approach, where the mechanical actuation of the MNPs is used to enhance the membrane permeability. This result can be achieved effectively with a low-intensity, non-thermal alternating magnetic field (AMF), which, however, has found rare clinical application. Therefore, a different modality of generating non-thermal magnetic fields has now been investigated. Specifically, the intermittent signals generated by non-thermal pulsed electromagnetic fields (PEMFs) were used to verify whether, once applied to high-transition-temperature magnetoliposomes (high-Tm MLs), they could efficiently trigger the release of a hydrophilic model drug. To this end, hydrophilic MNPs were combined with hydrogenated soybean phosphatidylcholine and cholesterol to design high-Tm MLs. The release of a dye was evaluated under the effect of PEMFs for different exposure times. The MNP motions produced by the PEMFs could effectively increase the bilayer permeability without affecting the liposome integrity, and resulted in nearly 20% release after 3 h of exposure. Therefore, the current contribution provides an exciting proof-of-concept for the ability of PEMFs to trigger drug release, considering that PEMFs already find application in therapy due to their anti-inflammatory effects.

  1. Simulation studies for optimizing the trigger generation criteria for the TACTIC telescope

    International Nuclear Information System (INIS)

    Koul, M.K.; Tickoo, A.K.; Dhar, V.K.; Venugopal, K.; Chanchalani, K.; Rannot, R.C.; Yadav, K.K.; Chandra, P.; Kothari, M.; Koul, R.

    2011-01-01

    In this paper, we present the results of Monte Carlo simulations of γ-ray and cosmic-ray proton induced extensive air showers as detected by the TACTIC atmospheric Cherenkov imaging telescope for optimizing its trigger field of view and topological trigger generation scheme. The simulation study has been carried out at several zenith angles. The topological trigger generation uses a coincidence of two or three nearest-neighbor pixels for producing an event trigger. The results of this study suggest that a trigger field of 11x11 pixels (∼3.4°x3.4°) is quite optimum for achieving maximum effective collection area for γ-rays from a point source. With regard to optimization of topological trigger generation, it is found that both two and three nearest-neighbor pixels yield nearly similar results up to a zenith angle of 25° with a threshold energy of ∼1.5 TeV for γ-rays. Beyond a zenith angle of 25°, the results suggest that a two-pixel nearest-neighbor trigger should be preferred. Comparison of the simulated integral rates has also been made with corresponding measured values for validating the predictions of the Monte Carlo simulations, especially the effective collection area, so that energy spectra of sources (or flux upper limits in case of no detection) can be determined reliably. Reasonably good matching of the measured trigger rates (on the basis of ∼207 h of data collected with the telescope in NN-2 and NN-3 trigger configurations) with that obtained from simulations reassures that the procedure followed by us in estimating the threshold energy and detection rates is quite reliable. - Highlights: → Optimization of the trigger field of view and topological trigger generation for the TACTIC telescope. → Monte Carlo simulations of extensive air showers carried out using the CORSIKA code. → Trigger generation with two or three nearest-neighbor pixels yields similar results up to a zenith angle of 25°. → Reasonably good matching of measured trigger
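The topological trigger decision described above (an event fires when two or three adjacent pixels are hit in coincidence) can be sketched as a connected-cluster search over the trigger field. This is an illustrative reconstruction of the idea, not the TACTIC hardware logic; the function name and the 4-connected adjacency choice are assumptions.

```python
from collections import deque

def nn_trigger(hits, n_required=2):
    """Fire the trigger when a cluster of at least `n_required` mutually
    connected (4-neighbour) hit pixels exists in the trigger field.
    `hits` is a set of (row, col) coordinates of pixels above threshold."""
    hits = set(hits)
    seen = set()
    for start in hits:
        if start in seen:
            continue
        # breadth-first search over the connected component containing `start`
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            r, c = queue.popleft()
            size += 1
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in hits and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        if size >= n_required:
            return True
    return False

print(nn_trigger({(3, 4), (3, 5)}, n_required=2))          # True: NN-2 pair
print(nn_trigger({(1, 1), (5, 5), (8, 2)}, n_required=2))  # False: isolated hits
```

Requiring adjacency, rather than any two pixels anywhere, is what suppresses accidental coincidences from night-sky background noise.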

  2. Recent experience and future evolution of the CMS High Level Trigger System

    CERN Document Server

    Bauer, Gerry; Branson, James; Bukowiec, Sebastian Czeslaw; Chaze, Olivier; Cittolin, Sergio; Coarasa Perez, Jose Antonio; Deldicque, Christian; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gomez-Reino Garrido, Robert; Hartl, Christian; Holzner, Andre Georg; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Nunez Barranco Fernandez, Carlos; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrucci, Andrea; Pieri, Marco; Polese, Giovanni; Racz, Attila; Raginel, Olivier; Sakulin, Hannes; Sani, Matteo; Schwick, Christoph; Spataru, Andrei Cristian; Stoeckli, Fabian; Sumorok, Konstanty

    2012-01-01

    The CMS experiment at the LHC uses a two-stage trigger system, with events flowing from the first level trigger at a rate of 100 kHz. These events are read out by the Data Acquisition system (DAQ), assembled in memory in a farm of computers, and finally fed into the high-level trigger (HLT) software running on the farm. The HLT software selects interesting events for offline storage and analysis at a rate of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the 2010-2011 collider run is detailed, as well as the current architecture of the CMS HLT, and its integration with the CMS reconstruction framework and CMS DAQ. The short- and medium-term evolution of the HLT software infrastructure is discussed, with future improvements aimed at supporting extensions of the HLT computing power, and addressing remaining performance and maintenance issues.

  3. Implementation of BES-III TOF trigger system in programmable logic devices

    International Nuclear Information System (INIS)

    Zheng Wei; Liu Shubin; Liu Xuzong; An Qi

    2009-01-01

    The TOF trigger sub-system on the upgraded Beijing Spectrometer is designed to receive 368 bits of fast hit signals from the front-end electronics modules and to yield 7 bits of trigger information according to the physical requirements. It sends the processed real-time trigger information to the Global-Trigger-Logic to generate the primary trigger signal L1, and sends 136 bits of processed real-time position information to the Track-Match-Logic to calculate the particle flight tracks. The sub-system also packages the valid events for the DAQ system to read out. Following the reconfigurable concept, a large number of programmable logic devices are employed to increase the flexibility and reliability of the system and to decrease the complexity and space requirements of the PCB layout. This paper describes the implementation of the kernel trigger logic in a programmable logic device. (authors)
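The core idea, condensing a wide hit pattern into a few trigger condition bits, can be sketched as follows. The specific conditions shown (at least one hit, at least two hits, back-to-back topology) are illustrative assumptions, not the actual BES-III trigger menu, and the channel ordering is hypothetical.

```python
def tof_trigger_bits(hits):
    """Condense a wide TOF hit pattern into a few trigger condition bits.
    `hits` is a sequence of 0/1 flags, one per readout channel, assumed to
    be ordered around the barrel so opposite channels sit half a turn apart."""
    hits = list(hits)
    n = sum(hits)
    half = len(hits) // 2
    # back-to-back: some hit channel has a hit in the diametrically opposite one
    back_to_back = any(h and hits[(i + half) % len(hits)]
                       for i, h in enumerate(hits))
    return (n >= 1, n >= 2, back_to_back)

print(tof_trigger_bits([1, 0, 0, 0, 1, 0, 0, 0]))  # (True, True, True)
```

In an FPGA these conditions would be evaluated combinationally in parallel every clock cycle; the sequential Python loop is only for readability.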

  4. TRIGGER

    CERN Multimedia

    Wesley Smith

    Level-1 Trigger Hardware and Software The hardware of the trigger components has been mostly finished. The ECAL Endcap Trigger Concentrator Cards (TCC) are in production while Barrel TCC firmware has been upgraded, and the Trigger Primitives can now be stored by the Data Concentrator Card for readout by the DAQ. The Regional Calorimeter Trigger (RCT) system is complete, and the timing is being finalized. All 502 HCAL trigger links to RCT run without error. The HCAL muon trigger timing has been equalized with DT, RPC, CSC and ECAL. The hardware and firmware for the Global Calorimeter Trigger (GCT) jet triggers are being commissioned and data from these triggers is available for readout. The GCT energy sums from rings of trigger towers around the beam pipe have been changed to include two rings from both sides. The firmware for Drift Tube Track Finder, Barrel Sorter and Wedge Sorter has been upgraded, and the synchronization of the DT trigger is satisfactory. The CSC local trigger has operated flawlessly u...

  5. Delivering high performance BWR fuel reliably

    Energy Technology Data Exchange (ETDEWEB)

    Schardt, J.F. [GE Nuclear Energy, Wilmington, NC (United States)

    1998-07-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high-performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  6. A new Highly Selective First Level ATLAS Muon Trigger With MDT Chamber Data for HL-LHC

    CERN Document Server

    Nowak, Sebastian; The ATLAS collaboration

    2015-01-01

    Highly selective first level triggers are essential for the physics programme of the ATLAS experiment at the HL-LHC where the instantaneous luminosity will exceed the LHC's instantaneous luminosity by almost an order of magnitude. The ATLAS first level muon trigger rate is dominated by low momentum sub-trigger threshold muons due to the poor momentum resolution at trigger level caused by the moderate spatial resolution of the resistive plate and thin gap trigger chambers. This limitation can be overcome by including the data of the precision muon drift tube chambers in the first level trigger decision. This requires the implementation of a fast MDT read-out chain and a fast MDT track reconstruction. A hardware demonstrator of the fast read-out chain was successfully tested under HL-LHC operating conditions at CERN's Gamma Irradiation Facility. It could be shown that the data provided by the demonstrator can be processed with a fast track reconstruction algorithm on an ARM CPU within the 6 microseconds latency...

  7. High School Dropout in Proximal Context: The Triggering Role of Stressful Life Events

    Science.gov (United States)

    Dupéré, Véronique; Dion, Eric; Leventhal, Tama; Archambault, Isabelle; Crosnoe, Robert; Janosz, Michel

    2018-01-01

    Adolescents who drop out of high school experience enduring negative consequences across many domains. Yet, the circumstances triggering their departure are poorly understood. This study examined the precipitating role of recent psychosocial stressors by comparing three groups of Canadian high school students (52% boys; M[subscript…

  8. Towards a Level-1 tracking trigger for the ATLAS experiment at the High Luminosity LHC

    CERN Document Server

    Martin, T A D; The ATLAS collaboration

    2014-01-01

    At the high-luminosity HL-LHC, upwards of 160 individual proton-proton interactions (pileup) are expected per bunch-crossing at luminosities of around $5\times10^{34}$ cm$^{-2}$s$^{-1}$. A proposal by the ATLAS collaboration to split the ATLAS first level trigger into two stages is briefly detailed. The use of fast track finding in the new first level trigger is explored as a method to provide the discrimination required to reduce the event rate to acceptable levels for the read-out system while maintaining high efficiency in selecting the decay products of electroweak bosons at HL-LHC luminosities. It is shown that the available bandwidth in the proposed new strip tracker is sufficient for a region-of-interest based track trigger given certain optimisations; further methods for improving upon the proposal are discussed.

  9. The ATLAS High Level Trigger Infrastructure, Performance and Future Developments

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage event filter running on a farm of commodity PC hardware. Currently the system consists of about 850 multi-core processing nodes that will be extended incrementally following the increasing luminosity of the LHC to about 2000 nodes depending on the evolution of the processor technology. Due to the complexity and similarity of the algorithms a large fraction of the software is shared between the online and offline event reconstruction. The HLT Infrastructure serves as the interface between the two domains and provides common services for the trigger algorithms. The consequences of this design choice will be discussed and experiences from the operation of the ATLAS HLT during cosmic ray data taking and first beam in 2008 will be presented. Since the event processing time at the HL...

  10. Real-time configuration changes of the ATLAS High Level Trigger

    CERN Document Server

    Winklmeier, F

    2010-01-01

    The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage trigger and event filter running on a farm of commodity PC hardware. Currently the system consists of about 850 processing nodes and will be extended incrementally following the expected increase in luminosity of the LHC to about 2000 nodes. The event selection within the HLT applications is carried out by specialized reconstruction algorithms. The selection can be controlled via properties that are stored in a central database and are retrieved at the startup of the HLT processes, which then usually run continuously for many hours. To be able to respond to changes in the LHC beam conditions, it is essential that the algorithms can be re-configured without disrupting data taking while ensuring a consistent and reproducible configuration across the entire HLT farm. The technique...

  11. Uv laser triggering of high-voltage gas switches

    International Nuclear Information System (INIS)

    Woodworth, J.R.; Frost, C.A.; Green, T.A.

    1982-01-01

    Two different techniques are discussed for UV laser triggering of high-voltage gas switches using a KrF laser (248 nm) to create an ionized channel through the dielectric gas in a spark gap. One technique uses the UV laser to induce breakdown in SF6. For this technique, we present data that demonstrate a 1-sigma jitter of ±150 ps for a 0.5-MV switch at 80% of its self-breakdown voltage using a low-divergence KrF laser. The other scheme uses additives to the normal dielectric gas, such as tripropylamine, which are selected to undergo resonant two-step ionization in the UV laser field.

  12. Custom high-reliability radiation-hard CMOS-LSI circuit design

    International Nuclear Information System (INIS)

    Barnard, W.J.

    1981-01-01

    Sandia has developed a custom CMOS-LSI design capability to provide high-reliability, radiation-hardened circuits. This capability relies on (1) proven design practices to enhance reliability, (2) the use of well-characterized cells and logic modules, (3) computer-aided design tools to reduce design time and errors and to standardize design definition, and (4) close working relationships with the system designers and technology fabrication personnel. Trade-offs are made during the design between circuit complexity/performance and technology/producibility so that high-reliability, radiation-hardened designs result. Sandia has developed and is maintaining a radiation-hardened bulk CMOS technology fabrication line for the production of prototype and small-production-volume parts

  13. The Trigger Processor and Trigger Processor Algorithms for the ATLAS New Small Wheel Upgrade

    CERN Document Server

    Lazovich, Tomo; The ATLAS collaboration

    2015-01-01

    The ATLAS New Small Wheel (NSW) is an upgrade to the ATLAS muon endcap detectors that will be installed during the next long shutdown of the LHC. Comprising both MicroMegas (MMs) and small-strip Thin Gap Chambers (sTGCs), this system will drastically improve the performance of the muon system in a high cavern background environment. The NSW trigger, in particular, will significantly reduce the rate of fake triggers coming from track segments in the endcap not originating from the interaction point. We will present an overview of the trigger, the proposed sTGC and MM trigger algorithms, and the hardware implementation of the trigger. In particular, we will discuss both the heart of the trigger, an ATCA system with FPGA-based trigger processors (using the same hardware platform for both MM and sTGC triggers), as well as the full trigger electronics chain, including dedicated cards for transmission of data via GBT optical links. Finally, we will detail the challenges of ensuring that the trigger electronics can ...

  14. ELM triggering by energetic particle driven mode in wall-stabilized high-β plasmas

    International Nuclear Information System (INIS)

    Matsunaga, G.; Aiba, N.; Shinohara, K.; Asakura, N.; Isayama, A.; Oyama, N.

    2013-01-01

    In JT-60U high-β plasmas above the no-wall β limit, triggering of an edge localized mode (ELM) by an energetic particle (EP)-driven mode has been observed. This EP-driven mode is thought to be driven by trapped EPs, and it has been named the EP-driven wall mode (EWM) on JT-60U (Matsunaga et al 2009 Phys. Rev. Lett. 103 045001). When the EWM appears in an ELMy H-mode phase, ELM crashes are reproducibly synchronized with the EWM bursts. The EWM-triggered ELM has a higher repetition frequency and less energy loss than the natural ELM. In order for the EP-driven mode to trigger an ELM, certain conditions are thought to be needed: an EWM with large amplitude and growth rate, and marginal edge stability. In the scrape-off layer region, several measurements indicate an ion loss induced by the EWM. This ion transport is considered to be EP transport through the edge region. From these observations, the EP contributions to edge stability are discussed as one of the ELM triggering mechanisms. (paper)

  15. Energy/Reliability Trade-offs in Fault-Tolerant Event-Triggered Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Gan, Junhe; Gruian, Flavius; Pop, Paul

    2011-01-01

    task, such that transient faults are tolerated, the timing constraints of the application are satisfied, and the energy consumed is minimized. Tasks are scheduled using fixed-priority preemptive scheduling, while replication is used for recovery from multiple transient faults. Addressing energy...... and reliability simultaneously is especially challenging, since lowering the voltage to reduce the energy consumption has been shown to increase the transient fault rate. We presented a Tabu Search-based approach which uses an energy/reliability trade-off model to find reliable and schedulable implementations...
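The tension this abstract describes, where lowering the supply voltage saves energy but raises the transient fault rate that replication must then compensate, can be illustrated with a toy model. The exponential fault-rate model (in the spirit of the voltage-scaling literature the abstract draws on) and all constants below are assumptions for illustration; execution-time stretching at low voltage is ignored for simplicity.

```python
import math

def fault_rate(v, lam0=1e-6, d=2.0, v_min=0.8, v_max=1.2):
    """Assumed model: scaling the voltage from v_max down to v_min
    multiplies the transient fault rate by a factor of 10**d."""
    return lam0 * 10 ** (d * (v_max - v) / (v_max - v_min))

def task_reliability(v, exec_time, replicas=1):
    """A replica fails if at least one transient fault hits it (Poisson);
    the task succeeds if any of its independent replicas succeeds."""
    p_fail = 1.0 - math.exp(-fault_rate(v) * exec_time)
    return 1.0 - p_fail ** replicas

def energy(v, exec_time, replicas=1, c=1.0):
    """Dynamic energy ~ C * V^2 per unit of work, summed over replicas."""
    return c * v * v * exec_time * replicas

# With these (assumed) numbers, running two replicas at low voltage is both
# cheaper and more reliable than one replica at nominal voltage.
nominal = (task_reliability(1.2, 50), energy(1.2, 50))
scaled = (task_reliability(0.8, 50, replicas=2), energy(0.8, 50, replicas=2))
print(nominal, scaled)
```

The crossover depends strongly on the constants, which is exactly why the paper resorts to a search-based (Tabu Search) exploration of voltage and replication assignments rather than a closed-form rule.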

  16. Pulsed laser triggered high speed microfluidic fluorescence activated cell sorter

    Science.gov (United States)

    Wu, Ting-Hsiang; Chen, Yue; Park, Sung-Yong; Hong, Jason; Teslaa, Tara; Zhong, Jiang F.; Di Carlo, Dino; Teitell, Michael A.

    2014-01-01

    We report a high-speed and high-purity pulsed laser triggered fluorescence activated cell sorter (PLACS) with a sorting throughput up to 20,000 mammalian cells s⁻¹ with 37% sorting purity and 90% cell viability in enrichment mode, and >90% purity in high-purity mode at 1,500 cells s⁻¹ or 3,000 beads s⁻¹. Fast switching (30 μs) and a small perturbation volume (~90 pL) are achieved by a unique sorting mechanism in which explosive vapor bubbles are generated using focused laser pulses in a single-layer microfluidic PDMS channel. PMID:22361780

  17. Operational experience with the ALICE High Level Trigger

    Science.gov (United States)

    Szostak, Artur

    2012-12-01

    The ALICE HLT is a dedicated real-time system for online event reconstruction and triggering. Its main goal is to reduce the raw data volume read from the detectors by an order of magnitude, to fit within the available data acquisition bandwidth. This is accomplished by a combination of data compression and triggering. When HLT is enabled, data is recorded only for events selected by HLT. The combination of both approaches allows for flexible data reduction strategies. Event reconstruction places a high computational load on HLT. Thus, a large dedicated computing cluster is required, comprising 248 machines, all interconnected with InfiniBand. Running a large system like HLT in production mode proves to be a challenge. During the 2010 pp and Pb-Pb data-taking period, many problems were experienced that led to a sub-optimal operational efficiency. Lessons were learned and certain crucial changes were made to the architecture and software in preparation for the 2011 Pb-Pb run, in which HLT had a vital role performing data compression for ALICE's largest detector, the TPC. An overview of the status of the HLT and experience from the 2010/2011 production runs are presented. Emphasis is given to the overall performance, showing an improved efficiency and stability in 2011 compared to 2010, attributed to the significant improvements made to the system. Further opportunities for improvement are identified and discussed.

  18. Operational experience with the ALICE High Level Trigger

    International Nuclear Information System (INIS)

    Szostak, Artur

    2012-01-01

    The ALICE HLT is a dedicated real-time system for online event reconstruction and triggering. Its main goal is to reduce the raw data volume read from the detectors by an order of magnitude, to fit within the available data acquisition bandwidth. This is accomplished by a combination of data compression and triggering. When HLT is enabled, data is recorded only for events selected by HLT. The combination of both approaches allows for flexible data reduction strategies. Event reconstruction places a high computational load on HLT. Thus, a large dedicated computing cluster is required, comprising 248 machines, all interconnected with InfiniBand. Running a large system like HLT in production mode proves to be a challenge. During the 2010 pp and Pb-Pb data-taking period, many problems were experienced that led to a sub-optimal operational efficiency. Lessons were learned and certain crucial changes were made to the architecture and software in preparation for the 2011 Pb-Pb run, in which HLT had a vital role performing data compression for ALICE's largest detector, the TPC. An overview of the status of the HLT and experience from the 2010/2011 production runs are presented. Emphasis is given to the overall performance, showing an improved efficiency and stability in 2011 compared to 2010, attributed to the significant improvements made to the system. Further opportunities for improvement are identified and discussed.

  19. High Reliability Oscillators for Terahertz Systems, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — To develop reliable THz sources with high power and high DC-RF efficiency, Virginia Diodes, Inc. will develop a thorough understanding of the complex interactions...

  20. TRIGGER

    CERN Multimedia

    Wesley Smith

    Level-1 Trigger Hardware and Software The trigger synchronization procedures for running with cosmic muons and operating with the LHC were reviewed during the May electronics week. Firmware maintenance issues were also reviewed. Link tests between the new ECAL endcap trigger concentrator cards (TCC48) and the Regional Calorimeter Trigger have been performed. Firmware for the energy sum triggers and an upgraded tau trigger of the Global Calorimeter Triggers has been developed and is under test. The optical fiber receiver boards for the Track-Finder trigger theta links of the DT chambers are now all installed. The RPC trigger is being made more robust by additional chamber and cable shielding and also by firmware upgrades. For the CSC’s the front-end and trigger motherboard firmware have been updated. New RPC patterns and DT/CSC lookup tables taking into account phi asymmetries in the magnetic field configuration are under study. The motherboard for the new pipeline synchronizer of the Global Trigg...

  1. A multi-purpose open-source triggering platform for magnetic resonance

    Science.gov (United States)

    Ruytenberg, T.; Webb, A. G.; Beenakker, J. W. M.

    2014-10-01

    Many MR scans need to be synchronised with external events such as the cardiac or respiratory cycles. For common physiological signals commercial trigger equipment exists, but for more experimental inputs it is not available. This paper describes the design of a multi-purpose open-source trigger platform for MR systems. The heart of the system is an open-source Arduino Due microcontroller. This microcontroller samples an analogue input and digitally processes the data to determine the trigger point. The output of the microcontroller is programmed to mimic a physiological signal, which is fed into the electrocardiogram (ECG) or pulse oximeter port of the MR scanner. The microcontroller is connected to a Bluetooth dongle that allows wireless monitoring and control outside the scanner room. The device can be programmed to generate a trigger based on various types of input. As one example, this paper describes how it can be used as an acoustic cardiac triggering unit. For this, a plastic stethoscope is connected to a microphone which is used as the input to the system. This test setup was used to acquire retrospectively-triggered cardiac scans in ten volunteers. Analysis showed that the platform produces a reliable trigger (>99% of triggers are correct) with a small average variation of 8 ms between exact trigger points.
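The kind of processing the microcontroller performs, threshold detection on the sampled input with a refractory period so that each heart sound yields exactly one trigger, can be sketched in software. This is an illustrative sketch, not the authors' firmware; the function name, sample rate and refractory value are assumptions.

```python
def detect_triggers(samples, rate_hz, threshold, refractory_s=0.3):
    """Emit a trigger time at each upward threshold crossing of the sampled
    input, then ignore the input for a refractory period so that one heart
    sound produces exactly one trigger pulse."""
    triggers, blocked_until = [], -1.0
    for i, x in enumerate(samples):
        t = i / rate_hz
        if x >= threshold and t >= blocked_until:
            triggers.append(t)
            blocked_until = t + refractory_s
    return triggers

# Two simulated heart sounds at t = 0.1 s and t = 0.5 s, plus an echo at
# t = 0.12 s that falls inside the refractory period and is suppressed.
sound = [0.0] * 100
sound[10] = sound[12] = sound[50] = 1.0
print(detect_triggers(sound, rate_hz=100, threshold=0.5))  # [0.1, 0.5]
```

On the real device the same loop would run sample by sample in the ADC interrupt, raising a digital output pin instead of appending to a list.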

  2. Assessment of microelectronics packaging for high temperature, high reliability applications

    Energy Technology Data Exchange (ETDEWEB)

    Uribe, F.

    1997-04-01

    This report details characterization and development activities in electronic packaging for high temperature applications. This project was conducted through a Department of Energy sponsored Cooperative Research and Development Agreement between Sandia National Laboratories and General Motors. Even though the target application of this collaborative effort is an automotive electronic throttle control system which would be located in the engine compartment, results of this work are directly applicable to Sandia's national security mission. The component count associated with the throttle control dictates the use of high density packaging not offered by conventional surface mount. An enabling packaging technology was selected and thermal models defined which characterized the thermal and mechanical response of the throttle control module. These models were used to optimize thick film multichip module design, characterize the thermal signatures of the electronic components inside the module, and to determine the temperature field and resulting thermal stresses under conditions that may be encountered during the operational life of the throttle control module. Because the need to use unpackaged devices limits the level of testing that can be performed either at the wafer level or as individual dice, an approach to assure a high level of reliability of the unpackaged components was formulated. Component assembly and interconnect technologies were also evaluated and characterized for high temperature applications. Electrical, mechanical and chemical characterizations of enabling die and component attach technologies were performed. Additionally, studies were conducted to assess the performance and reliability of gold and aluminum wire bonding to thick film conductor inks. Kinetic models were developed and validated to estimate wire bond reliability.

  3. Analysis of fatigue reliability for high temperature and high pressure multi-stage decompression control valve

    Science.gov (United States)

    Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang

    2018-03-01

    Based on stress-strength interference theory, a reliability mathematical model is established for a high temperature and high pressure multi-stage decompression control valve (HMDCV), and a temperature correction coefficient is introduced to revise the material fatigue limit at high temperature. The reliability of the key dangerous components and the fatigue sensitivity curve of each component are calculated and analyzed by combining a fatigue-life analysis of the control valve with reliability theory. The proportional impact of each component on the fatigue failure of the control valve system is obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life expectancy of the main pressure parts meets the technical requirements, and that the valve body and the sleeve have an obvious influence on the reliability of the control system; the stress concentration in key parts of the control valve can be reduced at the design stage by improving the structure.
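
The stress-strength interference model named above can be illustrated with a short numerical sketch. For normally distributed strength and stress, reliability is R = Φ((μ_S − μ_s)/√(σ_S² + σ_s²)); the numbers below, including the temperature correction coefficient derating the fatigue limit, are invented for illustration and are not the paper's data.

```python
# Illustrative stress-strength interference calculation. For strength
# S ~ N(mu_S, sd_S) and stress s ~ N(mu_s, sd_s), reliability is
# R = P(S > s) = Phi(z),  z = (mu_S - mu_s) / sqrt(sd_S**2 + sd_s**2).
from math import erf, sqrt

def reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

# Hypothetical numbers: a 400 MPa fatigue limit derated by a temperature
# correction coefficient of 0.8 at the operating temperature.
k_temp = 0.8
r = reliability(mu_strength=400.0 * k_temp, sd_strength=25.0,
                mu_stress=240.0, sd_stress=20.0)
print(f"R = {r:.4f}")
```

Lowering the correction coefficient shifts the strength distribution toward the stress distribution and reduces R, which is the qualitative effect the abstract attributes to high temperature.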

  4. The ATLAS High Level Trigger Configuration and Steering, Experience with the First 7 TeV Collisions

    CERN Document Server

    Stelzer, J; The ATLAS collaboration

    2011-01-01

    In March 2010 the four LHC experiments saw the first proton-proton collisions at a center-of-mass energy of 7 TeV. Still within the year, a collision rate of nearly 10 MHz was expected. At ATLAS, events of potential physics interest are selected by a three-level trigger system, with a final recording rate of about 200 Hz. The first level (L1) is implemented in customized hardware; the two levels of the high level trigger (HLT) are software triggers. For the ATLAS physics program more than 500 trigger signatures are defined. The HLT tests each signature on each L1-accepted event, and the test outcome is recorded for later analysis. The HLT-Steering is responsible for this. It foremost ensures the independence of each signature test and unbiased trigger decisions. Yet, to minimize data readout and execution time, cached detector data and once-calculated trigger objects are reused to form the decision. Some signature tests are performed only on a scaled-down fraction of candidate events, in order to reduce the...

  5. Topological trigger device using scintillating fibres and position-sensitive photomultipliers

    CERN Document Server

    Agoritsas, V; Dufournaud, J; Giacomich, R; Gorin, A M; Kuroda, K; Meshchanin, A P; Newsom, C R; Nurushev, S B; Önel, Y M; Oshima, N; Pauletta, G; Penzo, Aldo L; Rakhmatov, V E; Rykalin, V I; Salvato, G; Schiavon, R P; Sillou, D; Solovyanov, V L; Takeutchi, F; Vasilev, V; Vasilchenko, V G; Villari, A C C; Yamada, R; Toshida, T; CERN. Geneva. Detector Research and Development Committee

    1990-01-01

    An approach to a high-quality level-1 trigger is proposed on the basis of a topological device that will be realized by using scintillating fibres and position-sensitive photomultipliers, both of which are considered as potential candidates for new detector components, thanks to their excellent time characteristics and high radiation resistance. The device is characterized, in particular, by its simple concept and reliable functioning, which are a result of the mature technologies employed. In the LHC environment, the major interests of such a scheme reside in its capability to select high transverse-momentum (p_T) tracks in real time, in its optional immunity against low-p_T tracks and loopers, as well as in its effective links to other associated devices within the complex of a vertex detector.

  6. Site-specific to local-scale shallow landslides triggering zones assessment using TRIGRS

    Science.gov (United States)

    Bordoni, M.; Meisina, C.; Valentino, R.; Bittelli, M.; Chersich, S.

    2015-05-01

    Rainfall-induced shallow landslides are common phenomena in many parts of the world, affecting cultivation and infrastructure and sometimes causing human losses. Assessing the triggering zones of shallow landslides is fundamental for land planning at different scales. This work defines a reliable methodology to extend a slope stability analysis from the site-specific to local scale by using a well-established physically based model (TRIGRS-unsaturated). The model is initially applied to a sample slope and then to the surrounding 13.4 km2 area in Oltrepo Pavese (northern Italy). To obtain more reliable input data for the model, long-term hydro-meteorological monitoring has been carried out at the sample slope, which has been assumed to be representative of the study area. Field measurements identified the triggering mechanism of shallow failures and were used to verify the reliability of the model to obtain pore water pressure trends consistent with those measured during the monitoring activity. In this way, more reliable trends have been modelled for past landslide events, such as the April 2009 event that was assumed as a benchmark. The assessment of shallow landslide triggering zones obtained using TRIGRS-unsaturated for the benchmark event appears good for both the monitored slope and the whole study area, with better results when a pedological instead of geological zoning is considered at the regional scale. The sensitivity analyses of the influence of the soil input data show that the mean values of the soil properties give the best results in terms of the ratio between the true positive and false positive rates. The scheme followed in this work allows us to obtain better results in the assessment of shallow landslide triggering areas in terms of the reduction in the overestimation of unstable zones with respect to other distributed models applied in the past.
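
The evaluation metric mentioned in the sensitivity analysis above, the ratio between true positive and false positive rates of predicted triggering zones, can be sketched with a toy confusion-matrix computation. The grids here are invented for illustration and are not TRIGRS output.

```python
# Sketch of the TPR/FPR scoring of predicted shallow-landslide
# triggering cells against observed landslide cells (toy data).

def tpr_fpr(predicted_unstable, observed_unstable):
    pairs = list(zip(predicted_unstable, observed_unstable))
    tp = sum(1 for p, o in pairs if p and o)          # hit
    fp = sum(1 for p, o in pairs if p and not o)      # false alarm
    fn = sum(1 for p, o in pairs if not p and o)      # miss
    tn = sum(1 for p, o in pairs if not p and not o)  # correct rejection
    return tp / (tp + fn), fp / (fp + tn)

pred = [1, 1, 1, 0, 0, 1, 0, 0]   # cells flagged unstable by the model
obs  = [1, 1, 0, 0, 0, 1, 1, 0]   # cells with mapped landslides
tpr, fpr = tpr_fpr(pred, obs)
print(tpr, fpr)  # 0.75 0.25
```

A high TPR/FPR ratio corresponds to the abstract's goal of reducing the overestimation of unstable zones while still capturing the mapped failures.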

  7. Adjacent Vehicle Number-Triggered Adaptive Transmission for V2V Communications.

    Science.gov (United States)

    Wei, Yiqiao; Chen, Jingjun; Hwang, Seung-Hoon

    2018-03-02

    For vehicle-to-vehicle (V2V) communication, such issues as continuity and reliability still have to be solved. Specifically, it is necessary to consider a more scalable physical layer due to the high-speed mobility of vehicles and the complex channel environment. Adaptive transmission has been adopted in channel-dependent scheduling. However, it has been neglected with regard to physical topology changes in the vehicle network. In this paper, we propose a physical topology-triggered adaptive transmission scheme which adjusts the data rate between vehicles according to the number of connectable vehicles nearby. We also investigate the performance of the proposed method using computer simulations and compare it with conventional methods. The numerical results show that the proposed method can provide more continuous and reliable data transmission for V2V communications.
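
The core of the scheme, adjusting the data rate from the number of connectable vehicles nearby, reduces to a mapping from neighbor count to transmission mode. The thresholds and rates below are illustrative assumptions, not values from the paper.

```python
# Hypothetical rate table for neighbor-count-triggered adaptation:
# denser topology -> more robust (lower-rate) mode. All thresholds and
# rates are invented for illustration.

def select_data_rate_mbps(n_adjacent_vehicles):
    if n_adjacent_vehicles <= 5:
        return 27.0   # sparse topology: high-rate, less robust mode
    if n_adjacent_vehicles <= 15:
        return 12.0   # medium density
    return 6.0        # dense topology: most robust, lowest rate

print([select_data_rate_mbps(n) for n in (2, 10, 30)])  # [27.0, 12.0, 6.0]
```

In a real system the neighbor count would come from periodic beacon messages, and the chosen mode would map to a modulation and coding scheme.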

  8. TRIGGER

    CERN Multimedia

    Wesley Smith

    Level-1 Trigger Hardware and Software The production of the trigger hardware is now basically finished, and in time for the turn-on of the LHC. The last boards produced are the Trigger Concentrator Cards for the ECAL Endcaps (TCC-EE). After the recent installation of the four EE Dees, the TCC-EE prototypes were used for their commissioning. Production boards are arriving and are being tested continuously, with the last ones expected in November. The Regional Calorimeter Trigger hardware is fully integrated after installation of the last EE cables. Pattern tests from the HCAL up to the GCT have been performed successfully. The HCAL triggers are fully operational, including the connection of the HCAL-outer and forward-HCAL (HO/HF) technical triggers to the Global Trigger. The HCAL Trigger and Readout (HTR) board firmware has been updated to permit recording of the tower “feature bit” in the data. The Global Calorimeter Trigger hardware is installed, but some firmware developments are still n...

  9. The design of a fast Level-1 track trigger for the high luminosity upgrade of ATLAS.

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00413032; The ATLAS collaboration

    2016-01-01

    The high-luminosity upgrade of the LHC will increase the rate of proton-proton collisions by approximately a factor of 5 with respect to the initial LHC design. The ATLAS experiment will be upgraded accordingly, increasing its robustness and selectivity in the expected high-radiation environment. In particular, the earliest, hardware-based ATLAS trigger stage ("Level 1") will require higher rejection power, while maintaining efficient selection on many different physics signatures. The key ingredient is the possibility of extracting tracking information from the brand-new full-silicon detector and using it in the selection process. While fascinating, this solution poses a big challenge in the choice of the architecture, due to the reduced latency available at this trigger level (a few tens of microseconds) and the high expected working rates (order of MHz). In this paper, we review the design possibilities of such a system in a potential new trigger and readout architecture, and present the performance resulting from a d...

  10. Very low pressure high power impulse triggered magnetron sputtering

    Science.gov (United States)

    Anders, Andre; Andersson, Joakim

    2013-10-29

    A method and apparatus are described for very low pressure high powered magnetron sputtering of a coating onto a substrate. By the method of this invention, both substrate and coating target material are placed into an evacuable chamber, and the chamber pumped to vacuum. Thereafter a series of high impulse voltage pulses are applied to the target. Nearly simultaneously with each pulse, in one embodiment, a small cathodic arc source of the same material as the target is pulsed, triggering a plasma plume proximate to the surface of the target to thereby initiate the magnetron sputtering process. In another embodiment the plasma plume is generated using a pulsed laser aimed to strike an ablation target material positioned near the magnetron target surface.

  11. Memorial Hermann: high reliability from board to bedside.

    Science.gov (United States)

    Shabot, M Michael; Monroe, Douglas; Inurria, Juan; Garbade, Debbi; France, Anne-Claire

    2013-06-01

    In 2006 the Memorial Hermann Health System (MHHS), which includes 12 hospitals, began applying principles embraced by high reliability organizations (HROs). Three factors support its HRO journey: (1) aligned organizational structure with transparent management systems and compressed reporting processes; (2) Robust Process Improvement (RPI) with high-reliability interventions; and (3) cultural establishment, sustainment, and evolution. The Quality and Safety strategic plan contains three domains, each with a specific set of measures that provide goals for performance: (1) "Clinical Excellence;" (2) "Do No Harm;" and (3) "Saving Lives," as measured by the Serious Safety Event rate. MHHS uses a uniform approach to performance improvement--RPI, which includes Six Sigma, Lean, and change management, to solve difficult safety and quality problems. The 9 acute care hospitals provide multiple opportunities to integrate high-reliability interventions and best practices across MHHS. For example, MHHS partnered with the Joint Commission Center for Transforming Healthcare in its inaugural project to establish reliable hand hygiene behaviors, which improved MHHS's average hand hygiene compliance rate from 44% to 92% currently. Soon after compliance exceeded 85% at all 12 hospitals, the average rate of central line-associated bloodstream and ventilator-associated pneumonias decreased to essentially zero. MHHS's size and diversity require a disciplined approach to performance improvement and systemwide achievement of measurable success. The most significant cultural change at MHHS has been the expectation for 100% compliance with evidence-based quality measures and 0% incidence of patient harm.

  12. Development of a highly selective muon trigger exploiting the high spatial resolution of monitored drift-tube chambers for the ATLAS experiment at the HL-LHC

    CERN Document Server

    Kortner, Oliver; The ATLAS collaboration

    2018-01-01

    The High-Luminosity LHC will provide the unique opportunity to explore the nature of physics beyond the Standard Model. Highly selective first level triggers are essential for the physics programme of the ATLAS experiment at the HL-LHC, where the instantaneous luminosity will exceed the LHC design instantaneous luminosity by almost an order of magnitude. The ATLAS first level muon trigger rate is dominated by low momentum muons, selected due to the moderate momentum resolution of the current system. This first level trigger limitation can be overcome by including data from the precision muon drift tube (MDT) chambers. This requires the fast continuous transfer of the MDT hits to the off-detector trigger logic and a fast track reconstruction algorithm performed in the trigger logic. The feasibility of this approach was studied with LHC collision data and simulated data. Two main options for the hardware implementation will be studied with demonstrators: an FPGA based option with an embedded ARM microprocessor ...

  13. Development of a Highly Selective Muon Trigger Exploiting the High Spatial Resolution of Monitored Drift-Tube Chambers for the ATLAS Experiment at the HL-LHC

    CERN Document Server

    Kortner, Oliver; The ATLAS collaboration

    2018-01-01

    The High-Luminosity LHC will provide the unique opportunity to explore the nature of physics beyond the Standard Model. Highly selective first level triggers are essential for the physics programme of the ATLAS experiment at the HL-LHC, where the instantaneous luminosity will exceed the LHC design instantaneous luminosity by almost an order of magnitude. The ATLAS first level muon trigger rate is dominated by low momentum muons, selected due to the moderate momentum resolution of the current system. This first level trigger limitation can be overcome by including data from the precision muon drift tube (MDT) chambers. This requires the fast continuous transfer of the MDT hits to the off-detector trigger logic and a fast track reconstruction algorithm performed in the trigger logic. The feasibility of this approach was studied with LHC collision data and simulated data. Two main options for the hardware implementation are currently studied with demonstrators, an FPGA based option with an embedded ARM microproc...

  14. A new kind high-reliability digital reactivity meter

    International Nuclear Information System (INIS)

    Shen Feng; Jiang Zongbing

    2001-01-01

    The paper introduces a new kind of high-reliability Digital Reactivity Meter (DRM) developed by the DRM development group in the design department of the Nuclear Power Institute of China. The meter has two independent measurement channels, which can be configured either as a master-slave structure or to work independently. This structure ensures that the meter can continue to fulfill its online measurement task under a single-failure condition. It provides a solution for the conflict between a nuclear power station's stringent demands on DRM reliability and the instability of commercial computer software platforms. The instrument achieves both sophistication and reliability while covering many kinds of complex data-processing and display functions.
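
The single-failure tolerance of the dual-channel master-slave structure can be sketched as a simple fail-over: read from the master channel, and fall back to the slave if the master fails. Channel behaviour and the reading itself are invented for illustration; this is not the meter's actual software.

```python
# Hedged sketch of dual-channel master-slave fail-over: a single channel
# failure does not interrupt the online measurement.

class Channel:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

    def measure(self):
        if not self.healthy:
            raise RuntimeError(f"{self.name} failed")
        return 0.12  # stand-in reactivity reading (illustrative units)

def read_reactivity(master, slave):
    for channel in (master, slave):       # try master first, then slave
        try:
            return channel.measure(), channel.name
        except RuntimeError:
            continue
    raise RuntimeError("both channels failed")

value, source = read_reactivity(Channel("master", healthy=False),
                                Channel("slave"))
print(value, source)  # 0.12 slave
```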

  15. The ATLAS Level-1 Calorimeter Trigger

    International Nuclear Information System (INIS)

    Achenbach, R; Andrei, V; Adragna, P; Apostologlou, P; Barnett, B M; Brawn, I P; Davis, A O; Edwards, J P; Asman, B; Bohm, C; Ay, C; Bauss, B; Bendel, M; Dahlhoff, A; Eckweiler, S; Booth, J R A; Thomas, P Bright; Charlton, D G; Collins, N J; Curtis, C J

    2008-01-01

    The ATLAS Level-1 Calorimeter Trigger uses reduced-granularity information from all the ATLAS calorimeters to search for high transverse-energy electrons, photons, τ leptons and jets, as well as high missing and total transverse energy. The calorimeter trigger electronics has a fixed latency of about 1 μs, using programmable custom-built digital electronics. This paper describes the Calorimeter Trigger hardware, as installed in the ATLAS electronics cavern

  16. The ATLAS Level-1 Calorimeter Trigger

    Energy Technology Data Exchange (ETDEWEB)

    Achenbach, R; Andrei, V [Kirchhoff-Institut fuer Physik, University of Heidelberg, D-69120 Heidelberg (Germany); Adragna, P [Physics Department, Queen Mary, University of London, London E1 4NS (United Kingdom); Apostologlou, P; Barnett, B M; Brawn, I P; Davis, A O; Edwards, J P [STFC Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Didcot, Oxon OX11 0QX (United Kingdom); Asman, B; Bohm, C [Fysikum, Stockholm University, SE-106 91 Stockholm (Sweden); Ay, C; Bauss, B; Bendel, M; Dahlhoff, A; Eckweiler, S [Institut fuer Physik, University of Mainz, D-55099 Mainz (Germany); Booth, J R A; Thomas, P Bright; Charlton, D G; Collins, N J; Curtis, C J [School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT (United Kingdom)], E-mail: e.eisenhandler@qmul.ac.uk (and others)

    2008-03-15

    The ATLAS Level-1 Calorimeter Trigger uses reduced-granularity information from all the ATLAS calorimeters to search for high transverse-energy electrons, photons, τ leptons and jets, as well as high missing and total transverse energy. The calorimeter trigger electronics has a fixed latency of about 1 μs, using programmable custom-built digital electronics. This paper describes the Calorimeter Trigger hardware, as installed in the ATLAS electronics cavern.

  17. Prototype of a file-based high-level trigger in CMS

    International Nuclear Information System (INIS)

    Bauer, G; Darlea, G-L; Gomez-Ceballos, G; Bawej, T; Chaze, O; Coarasa, J A; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Gomez-Reino, R; Hartl, C; Hegeman, J; Masetti, L; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Erhan, S

    2014-01-01

    The DAQ system of the CMS experiment at the LHC is upgraded during the accelerator shutdown in 2013/14. To reduce the interdependency of the DAQ system and the high-level trigger (HLT), we investigate the feasibility of using a file-system-based HLT. Events of ∼1 MB size are built at the level-1 trigger rate of 100 kHz. The events are assembled by ∼50 builder units (BUs). Each BU writes the raw events at ∼2 GB/s to a local file system shared with O(10) filter-unit machines (FUs) running the HLT code. The FUs read the raw data from the file system, select O(1%) of the events, and write the selected events together with monitoring meta-data back to a disk. This data is then aggregated over several steps and made available for offline reconstruction and online monitoring. We present the challenges, technical choices, and performance figures from the prototyping phase. In addition, the steps to the final system implementation will be discussed.
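
The data flow described above (builder units write raw event files to a shared file system; filter units read them back and keep roughly 1% of the events) can be mimicked with a minimal sketch. The file layout, event size, and selection criterion are invented for illustration.

```python
# Minimal sketch of a file-based build/filter pipeline: a "builder unit"
# writes raw event files to a shared directory, a "filter unit" reads
# them back and keeps ~1% of them. Names and layout are hypothetical.
import os
import tempfile

def build_events(directory, n_events):
    for i in range(n_events):
        with open(os.path.join(directory, f"event_{i:06d}.raw"), "wb") as f:
            f.write(os.urandom(64))  # stand-in for a ~1 MB raw event

def filter_events(directory, keep_every=100):
    """Keep about 1% of events, mimicking the HLT selection."""
    names = sorted(os.listdir(directory))
    return [n for i, n in enumerate(names) if i % keep_every == 0]

with tempfile.TemporaryDirectory() as d:
    build_events(d, 1000)
    selected = filter_events(d)
    print(len(selected))  # 10 of 1000 events survive the ~1% selection
```

The decoupling benefit is visible even in the sketch: the builder and filter only share a directory, not an in-memory protocol, so either side can restart independently.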

  18. New methods to engineer and seamlessly reconfigure time triggered ethernet based systems during runtime based on the PROFINET IRT example

    CERN Document Server

    Wisniewski, Lukasz

    2017-01-01

    The objective of this dissertation is to design a concept that would increase the flexibility of currently available Time Triggered Ethernet based (TTEB) systems without affecting their performance and robustness. The main challenges are related to the scheduling of time-triggered communication, which may take a significant amount of time and has to be performed on a powerful platform. Additionally, reliability has to be considered and kept at the required high level. Finally, reconfiguration has to be done optimally, without affecting the currently running system.

  19. Consistent high clinical pregnancy rates and low ovarian hyperstimulation syndrome rates in high-risk patients after GnRH agonist triggering and modified luteal support

    DEFF Research Database (Denmark)

    Iliodromiti, Stamatina; Blockeel, Christophe; Tremellen, Kelton P

    2013-01-01

    Are clinical pregnancy rates satisfactory and the incidence of OHSS low after GnRH agonist trigger and modified intensive luteal support in patients with a high risk of ovarian hyperstimulation syndrome (OHSS)?

  20. A multi-purpose open-source triggering platform for magnetic resonance.

    Science.gov (United States)

    Ruytenberg, T; Webb, A G; Beenakker, J W M

    2014-10-01

    Many MR scans need to be synchronised with external events such as the cardiac or respiratory cycles. For common physiological functions commercial trigger equipment exists, but for more experimental inputs these are not available. This paper describes the design of a multi-purpose open-source trigger platform for MR systems. The heart of the system is an open-source Arduino Due microcontroller. This microcontroller samples an analogue input and digitally processes these data to determine the trigger. The output of the microcontroller is programmed to mimic a physiological signal which is fed into the electrocardiogram (ECG) or pulse oximeter port of the MR scanner. The microcontroller is connected to a Bluetooth dongle that allows wireless monitoring and control outside the scanner room. This device can be programmed to generate a trigger based on various types of input. As one example, this paper describes how it can be used as an acoustic cardiac triggering unit. For this, a plastic stethoscope is connected to a microphone which is used as an input for the system. This test setup was used to acquire retrospectively-triggered cardiac scans in ten volunteers. Analysis showed that this platform produces a reliable trigger (>99% triggers are correct) with a small average variation of 8 ms between the exact trigger points. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Performance of a First-Level Muon Trigger with High Momentum Resolution Based on the ATLAS MDT Chambers for HL-LHC

    CERN Document Server

    Gadow, P.; Kortner, S.; Kroha, H.; Müller, F.; Richter, R.

    2016-01-01

    Highly selective first-level triggers are essential to exploit the full physics potential of the ATLAS experiment at the High-Luminosity LHC (HL-LHC). The concept for a new muon trigger stage using the precision monitored drift tube (MDT) chambers to significantly improve the selectivity of the first-level muon trigger is presented. It is based on fast track reconstruction in all three layers of the existing MDT chambers, made possible by an extension of the first-level trigger latency to six microseconds and new MDT read-out electronics required for the higher overall trigger rates at the HL-LHC. Data from $pp$-collisions at $\sqrt{s} = 8\,\mathrm{TeV}$ is used to study the minimal muon transverse momentum resolution that can be obtained using the MDT precision chambers, and to estimate the resolution and efficiency of the MDT-based trigger. A resolution of better than $4.1\%$ is found in all sectors under study. With this resolution, a first-level trigger with a threshold of $18\,\mathrm{GeV}$ becomes fully e...

  2. FPGA Co-processor for the ALICE High Level Trigger

    CERN Document Server

    Grastveit, G.; Lindenstruth, V.; Loizides, C.; Roehrich, D.; Skaali, B.; Steinbeck, T.; Stock, R.; Tilsner, H.; Ullaland, K.; Vestbo, A.; Vik, T.

    2003-01-01

    The High Level Trigger (HLT) of the ALICE experiment requires massive parallel computing. One of the main tasks of the HLT system is two-dimensional cluster finding on raw data of the Time Projection Chamber (TPC), which is the main data source of ALICE. To reduce the number of computing nodes needed in the HLT farm, FPGAs, which are an intrinsic part of the system, will be utilized for this task. VHDL code implementing the Fast Cluster Finder algorithm has been written, a testbed for functional verification of the code has been developed, and the code has been synthesized.
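
The two-dimensional cluster finding named above amounts to grouping above-threshold cells of a pad/time grid into connected clusters. The toy sketch below illustrates the neighbour-merging idea only; the actual ALICE implementation is a streaming VHDL design, and the grid, threshold, and connectivity here are illustrative assumptions.

```python
# Toy 2D cluster finder on a pad-row/time grid: group above-threshold
# cells into 4-connected clusters via an iterative flood fill.

def find_clusters(grid, threshold=0):
    rows, cols = len(grid), len(grid[0])
    seen, clusters = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > threshold and (r, c) not in seen:
                stack, cluster = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    cluster.append((y, x))
                    for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] > threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                clusters.append(cluster)
    return clusters

grid = [[0, 5, 5, 0],
        [0, 5, 0, 0],
        [0, 0, 0, 7]]
print(len(find_clusters(grid)))  # 2 clusters
```

An FPGA version processes the data as a stream, merging each new above-threshold cell with neighbours from the previous pad row rather than holding the whole grid in memory.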

  3. The ATLAS Electron and Photon Trigger

    CERN Document Server

    Jones, Samuel David; The ATLAS collaboration

    2017-01-01

    Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for signal selection in a wide variety of ATLAS physics analyses to study Standard Model processes and to search for new phenomena. Final states including leptons and photons had, for example, an important role in the discovery and measurement of the Higgs boson. Dedicated triggers are also used to collect data for calibration, efficiency and fake rate measurements. The ATLAS trigger system is divided into a hardware-based Level-1 trigger and a software-based high-level trigger, both of which were upgraded during the LHC shutdown in preparation for Run-2 operation. To cope with the increasing luminosity and more challenging pile-up conditions at a center-of-mass energy of 13 TeV, the trigger selections at each level are optimized to control the rates and keep efficiencies high. To achieve this goal multivariate analysis techniques are used. The ATLAS electron and photon triggers and their performance with Run 2 dat...

  4. The ATLAS Electron and Photon Trigger

    CERN Document Server

    Jones, Samuel David; The ATLAS collaboration

    2018-01-01

    Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for signal selection in a wide variety of ATLAS physics analyses to study Standard Model processes and to search for new phenomena. Final states including leptons and photons had, for example, an important role in the discovery and measurement of the Higgs boson. Dedicated triggers are also used to collect data for calibration, efficiency and fake rate measurements. The ATLAS trigger system is divided into a hardware-based Level-1 trigger and a software-based high-level trigger, both of which were upgraded during the LHC shutdown in preparation for Run-2 operation. To cope with the increasing luminosity and more challenging pile-up conditions at a center-of-mass energy of 13 TeV, the trigger selections at each level are optimized to control the rates and keep efficiencies high. To achieve this goal multivariate analysis techniques are used. The ATLAS electron and photon triggers and their performance with Run 2 dat...

  5. Error detection, handling and recovery at the High Level Trigger of the ATLAS experiment at the LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00223972; The ATLAS collaboration

    2016-01-01

    The complexity of the ATLAS High Level Trigger (HLT) requires a robust system for error detection and handling during online data-taking; it also requires an offline system for the recovery of events where no trigger decision could be made online. The error detection and handling ensure smooth operation of the trigger system and provide debugging information necessary for offline analysis and diagnosis. In this presentation, we give an overview of the error detection, handling and recovery of problematic events at the HLT of ATLAS.

  6. GPUs for real-time processing in HEP trigger systems (CHEP2013: 20. international conference on computing in high energy and nuclear physics)

    Energy Technology Data Exchange (ETDEWEB)

    Lamanna, G; Piandani, R [INFN, Pisa (Italy)]; Ammendola, R [INFN, Rome "Tor Vergata" (Italy)]; Bauce, M; Giagu, S; Messina, A [University, Rome "Sapienza" (Italy)]; Biagioni, A; Lonardo, A; Paolucci, P S; Rescigno, M; Simula, F; Vicini, P [INFN, Rome "Sapienza" (Italy)]; Fantechi, R [CERN, Geneve (Switzerland)]; Fiorini, M [University and INFN, Ferrara (Italy)]; Graverini, E; Pantaleo, F; Sozzi, M [University, Pisa (Italy)]

    2014-06-11

    We describe a pilot project for the use of Graphics Processing Units (GPUs) for online triggering applications in High Energy Physics (HEP) experiments. Two major trends can be identified in the development of trigger and DAQ systems for HEP experiments: the massive use of general-purpose commodity systems such as commercial multicore PC farms for data acquisition, and the reduction of trigger levels implemented in hardware, towards a pure software selection system (trigger-less). The very innovative approach presented here aims at exploiting the parallel computing power of commercial GPUs to perform fast computations in software at both low- and high-level trigger stages. General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughputs, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming very attractive. We discuss in detail the use of online parallel computing on GPUs for a synchronous low-level trigger with fixed latency. In particular we show preliminary results from a first test in the NA62 experiment at CERN. The use of GPUs in high-level triggers is also considered; the ATLAS experiment (and in particular the muon trigger) at CERN will be taken as a study case of possible applications.

  7. GPUs for real-time processing in HEP trigger systems (CHEP2013: 20. international conference on computing in high energy and nuclear physics)

    International Nuclear Information System (INIS)

    Lamanna, G; Piandani, R; Ammendola, R; Bauce, M; Giagu, S; Messina, A; Biagioni, A; Lonardo, A; Paolucci, P S; Rescigno, M; Simula, F; Vicini, P; Fantechi, R; Fiorini, M; Graverini, E; Pantaleo, F; Sozzi, M

    2014-01-01

    We describe a pilot project for the use of Graphics Processing Units (GPUs) in online triggering applications for High Energy Physics (HEP) experiments. Two major trends can be identified in the development of trigger and DAQ systems for HEP experiments: the massive use of general-purpose commodity systems, such as commercial multicore PC farms, for data acquisition, and the reduction of trigger levels implemented in hardware, moving towards a pure software selection system (trigger-less). The innovative approach presented here aims at exploiting the parallel computing power of commercial GPUs to perform fast computations in software at both low- and high-level trigger stages. General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughputs, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming very attractive. We discuss in detail the use of online parallel computing on GPUs for a synchronous low-level trigger with fixed latency. In particular, we show preliminary results from a first test in the NA62 experiment at CERN. The use of GPUs in high-level triggers is also considered; the ATLAS experiment at CERN (and in particular its muon trigger) is taken as a case study of possible applications.
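The synchronous, fixed-latency low-level trigger described in this abstract can be illustrated with a minimal batch-processing sketch. The 1 ms budget, the batch size, and the energy-threshold "trigger primitive" below are illustrative assumptions for the sketch, not parameters from the paper:

```python
import time

LATENCY_BUDGET_S = 0.001  # illustrative 1 ms fixed-latency budget per batch


def process_batch(batch):
    # Placeholder trigger primitive: accept events above an energy threshold.
    # A real low-level trigger would run a GPU kernel here.
    return [e for e in batch if e["energy"] > 5.0]


def run_trigger(events, batch_size=64):
    """Process events in fixed-size batches and count batches that
    overrun the latency budget (a synchronous trigger must never miss it)."""
    accepted, violations = [], 0
    for i in range(0, len(events), batch_size):
        batch = events[i:i + batch_size]
        t0 = time.perf_counter()
        accepted.extend(process_batch(batch))
        if time.perf_counter() - t0 > LATENCY_BUDGET_S:
            violations += 1  # this batch missed the fixed-latency budget
    return accepted, violations


# Synthetic event stream with cyclic energies 0.0 .. 9.0
events = [{"energy": float(x % 10)} for x in range(256)]
kept, missed = run_trigger(events)
```

The point of the sketch is the hard per-batch deadline: unlike an offline accelerator workload, a synchronous trigger must bound the worst-case latency, which is why the abstract emphasises the steady reduction of GPU latencies.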

  8. TRIGGER

    CERN Multimedia

    W. Smith

    2010-01-01

    Level-1 Trigger Hardware and Software The Level-1 Trigger hardware has performed well during both the recent proton-proton and heavy ion running. Efforts were made to improve the visibility and handling of alarms and warnings. The tracker ReTRI boards that prevent fixed frequencies of Level-1 Triggers are now configured through the Trigger Supervisor. The Global Calorimeter Trigger (GCT) team has introduced a buffer cleanup procedure at stops and a reset of the QPLL during configuring to ensure recalibration in case of a switch from the LHC clock to the local clock. A device to test the cables between the Regional Calorimeter Trigger and the GCT has been manufactured. A wrong charge bit was fixed in the CSC Trigger. The ECAL group is improving crystal masking and spike suppression in the trigger primitives. New firmware for the Drift Tube Track Finder (DTTF) sorters was developed to improve fake track tagging and sorting. Zero suppression was implemented in the DT Sector Collector readout. The track finder b...

  9. TRIGGER

    CERN Multimedia

    Wesley Smith

    Trigger Hardware The status of the trigger components was presented during the September CMS Week and Annual Review and at the monthly trigger meetings in October and November. Procedures for cold and warm starts (e.g. refreshing of trigger parameters stored in registers) of the trigger subsystems have been studied. Reviews of parts of the Global Calorimeter Trigger (GCT) and the Global Trigger (GT) have taken place in October and November. The CERN group summarized the status of the Trigger Timing and Control (TTC) system. All TTC crates and boards are installed in the underground counting room, USC55. The central clock system will be upgraded in December (after the Global Run at the end of November GREN) to the new RF2TTC LHC machine interface timing module. Migration of subsystems' TTC PCs to SLC4/XDAQ 3.12 is being prepared. Work is ongoing to unify the access to Local Timing Control (LTC) and TTC CMS interface module (TTCci) via SOAP (Simple Object Access Protocol, a lightweight XML-based messaging ...

  10. Adjacent Vehicle Number-Triggered Adaptive Transmission for V2V Communications

    Science.gov (United States)

    Wei, Yiqiao; Chen, Jingjun

    2018-01-01

    For vehicle-to-vehicle (V2V) communication, issues such as continuity and reliability still have to be solved. Specifically, a more scalable physical layer is needed due to the high-speed mobility of vehicles and the complex channel environment. Adaptive transmission has been adopted in channel-dependent scheduling; however, it has neglected physical topology changes in the vehicle network. In this paper, we propose a physical topology-triggered adaptive transmission scheme which adjusts the data rate between vehicles according to the number of connectable vehicles nearby. We also investigate the performance of the proposed method using computer simulations and compare it with conventional methods. The numerical results show that the proposed method can provide more continuous and reliable data transmission for V2V communications. PMID:29498646
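The neighbor-count-triggered rate adaptation described above can be sketched as a simple lookup: the denser the local topology, the more robust (lower) the chosen rate. The density thresholds and the 802.11p-style OFDM data rates below are illustrative assumptions, not the values used in the paper:

```python
def select_data_rate(num_neighbors: int) -> float:
    """Pick a transmission data rate (Mbit/s) from the number of
    connectable vehicles nearby. Thresholds and rates are illustrative."""
    if num_neighbors <= 5:
        return 27.0   # sparse topology: highest rate, least robust modulation
    if num_neighbors <= 15:
        return 12.0   # moderate density: back off to a mid rate
    return 6.0        # dense topology: most robust, lowest rate


# Example: as the neighborhood grows, the scheme steps the rate down.
rates = [select_data_rate(n) for n in (3, 10, 40)]
```

Each vehicle would re-evaluate this mapping whenever its neighbor table changes, rather than (or in addition to) reacting to per-link channel quality.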

  11. Instrumentation of a Level-1 Track Trigger in the ATLAS detector for the High Luminosity LHC

    CERN Document Server

    Boisvert, V; The ATLAS collaboration

    2012-01-01

    One of the main challenges in particle physics experiments at hadron colliders is to build detector systems that can take advantage of the future luminosity increase that will take place during the next decade. More than 200 simultaneous collisions will be recorded in a single event, which will make the task of extracting the interesting physics signatures harder than ever before. Not all events can be recorded, hence a fast trigger system is required to select events that will be stored for further analysis. In the ATLAS experiment at the Large Hadron Collider (LHC), two different architectures for accommodating a level-1 track trigger are being investigated. The tracker has more readout channels than can be read out in time for the trigger decision. Both architectures aim for a data reduction of 10-100 in order to make readout of data possible in time for a level-1 trigger decision. In the first architecture the data reduction is achieved by reading out only parts of the detector seeded by a high rate pre-trigger ...

  12. The design and performance of the ATLAS Inner Detector trigger in high pileup collisions at 13 TeV at the Large Hadron Collider

    CERN Document Server

    Grandi, Mario; The ATLAS collaboration

    2018-01-01

    The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the High Level Trigger (HLT) processor farm for 13 TeV LHC collision data with high pileup are discussed. The HLT ID tracking is a vital component in all physics signatures in the ATLAS Trigger for the precise selection of the rare or interesting events necessary for physics analysis without overwhelming the offline data storage in terms of both size and rate. To cope with the high interaction rates expected in the 13 TeV LHC collisions the ID trigger was redesigned during the 2013-15 long shutdown. The performance of the ID Trigger in both the 2016 and 2017 data from 13 TeV LHC collisions has been excellent and exceeded expectations, even at the very high interaction multiplicities observed at the end of data taking in 2017. The detailed efficiencies and resolutions of the trigger in a wide range of physics signatures are presented for the Run 2 data, illustrating the superb performance of the ID trigger algorith...

  13. Development of high-reliability control system for nuclear power plants

    International Nuclear Information System (INIS)

    Asami, K.; Yanai, K.; Hirose, H.; Ito, T.

    1983-01-01

    In Japan, many nuclear power generating plants are in operation and under construction. There is a general awareness of the problems in connection with nuclear power generation and strong emphasis is put on achieving highly reliable operation of nuclear power plants. Hitachi has developed a new high-reliability control system, NURECS-3000 (NUclear Power Plant High-REliability Control System), which is applied to the main control systems, such as the reactor feedwater control system, the reactor recirculation control system and the main turbine control system. The NURECS-3000 system was designed taking into account the fact that there will be failures, but the aim is for the system to continue to function correctly; it is therefore a fault-tolerant system. It has redundant components which can be completely isolated from each other in order to prevent fault propagation. The system has a hierarchical configuration, with a main controller, consisting of a triplex microcomputer system, and sub-loop controllers. Special care was taken to ensure the independence of these subsystems. Since most redundant system failures are caused by common-mode failures, and the reliability of redundant systems depends on the reliability of the common-mode parts, the aim was to minimize these parts. (author)

  14. High-Resolution Phenotypic Landscape of the RNA Polymerase II Trigger Loop.

    Directory of Open Access Journals (Sweden)

    Chenxi Qiu

    2016-11-01

    The active sites of multisubunit RNA polymerases have a "trigger loop" (TL) that multitasks in substrate selection, catalysis, and translocation. To dissect the Saccharomyces cerevisiae RNA polymerase II TL at individual-residue resolution, we quantitatively phenotyped nearly all TL single variants en masse. Three mutant classes, revealed by phenotypes linked to transcription defects or various stresses, have distinct distributions among TL residues. We find that mutations disrupting an intra-TL hydrophobic pocket, proposed to provide a mechanism for substrate-triggered TL folding through destabilization of a catalytically inactive TL state, confer phenotypes consistent with pocket disruption and increased catalysis. Furthermore, allele-specific genetic interactions among TL and TL-proximal domain residues support the contribution of the funnel and bridge helices (BH) to TL dynamics. Our structural genetics approach incorporates structural and phenotypic data for high-resolution dissection of transcription mechanisms and their evolution, and is readily applicable to other essential yeast proteins.

  15. High reliability megawatt transformer/rectifier

    Science.gov (United States)

    Zwass, Samuel; Ashe, Harry; Peters, John W.

    1991-01-01

    The goal of the two-phase program is to develop the technology and design and fabricate ultralightweight high-reliability DC to DC converters for space power applications. The converters will operate from a 5000 V dc source and deliver 1 MW of power at 100 kV dc. The power weight density goal is 0.1 kg/kW. The cycle-to-cycle voltage stability goal was ±1 percent RMS. The converter is to operate at an ambient temperature of -40 C with 16-minute power pulses and one hour off time. The uniqueness of the design in Phase 1 resided in the dc switching array which operates the converter at 20 kHz using Hollotron plasma switches, along with a specially designed low-loss, low-leakage-inductance and lightweight high-voltage transformer. This approach considerably reduced the number of components in the converter, thereby increasing the system reliability. To achieve an optimum transformer for this application, the design uses four 25 kV secondary windings to produce the 100 kV dc output, thus reducing the transformer leakage inductance and the ac voltage stresses. A specially designed insulation system improves the high-voltage dielectric withstanding ability and reduces the insulation path thickness, thereby reducing the component weight. Tradeoff studies and tests conducted on scaled-down model circuits and using representative coil insulation paths have verified the calculated transformer wave shape parameters and the insulation system safety. In Phase 1 of the program a converter design approach was developed and a preliminary transformer design was completed. A fault control circuit was designed and a thermal profile of the converter was also developed.

  16. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.

    2012-07-14

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due to uncertainties in input data, model assumptions, and modeling procedures. However, fault model uncertainties have usually been ignored in stress-triggering studies and have not been propagated to assess the reliability of Coulomb failure stress change (ΔCFS) calculations. We show how these uncertainties can be used to provide confidence intervals for co-seismic ΔCFS-values. We demonstrate this for the MW = 5.9 June 2000 Kleifarvatn earthquake in southwest Iceland and systematically map these uncertainties. A set of 2500 candidate source models from the full posterior fault-parameter distribution was used to compute 2500 ΔCFS maps. We assess the reliability of the ΔCFS-values from the coefficient of variation (CV) and deem ΔCFS-values to be reliable where they are at least twice as large as the standard deviation (CV ≤ 0.5). Unreliable ΔCFS-values are found near the causative fault and between lobes of positive and negative stress change, where a small change in fault strike causes ΔCFS-values to change sign. The most reliable ΔCFS-values are found away from the source fault in the middle of positive and negative ΔCFS-lobes, a likely general pattern. Using the reliability criterion, our results support the static stress-triggering hypothesis. Nevertheless, our analysis also suggests that results from previous stress-triggering studies not considering source model uncertainties may have led to a biased interpretation of the importance of static stress-triggering.
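The reliability criterion above (ΔCFS deemed reliable where its mean is at least twice the ensemble standard deviation, i.e. CV ≤ 0.5) can be sketched over a synthetic ensemble of stress-change maps. The ensemble size, grid, and stress values below are illustrative, not the 2500-model set from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative stand-in for the ensemble of candidate-model delta-CFS maps:
# 100 synthetic maps on an 8x8 grid, values in MPa.
ensemble = rng.normal(loc=0.1, scale=0.02, size=(100, 8, 8))

mean_dcfs = ensemble.mean(axis=0)  # per-cell mean stress change
std_dcfs = ensemble.std(axis=0)    # per-cell ensemble spread
cv = np.abs(std_dcfs / mean_dcfs)  # coefficient of variation

# Reliable where |mean| is at least twice the standard deviation (CV <= 0.5);
# cells failing this would be masked out of the triggering interpretation.
reliable = cv <= 0.5
```

Cells near the causative fault, where candidate models disagree on the sign of ΔCFS, would show a large CV and fall out of the `reliable` mask, which is exactly the pattern the abstract reports.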

  17. A proposed Drift Tubes-seeded muon track trigger for the CMS experiment at the High Luminosity-LHC

    CERN Document Server

    AUTHOR|(CDS)2070813; Lazzizzera, Ignazio; Vanini, Sara; Zotto, Pierluigi

    2016-01-01

    The LHC program at 13 and 14 TeV, after the observation of the candidate SM Higgs boson, will help clarify future subjects of study and shape the needed tools. Any upgrade of the LHC experiments for unprecedented luminosities, such as the High Luminosity-LHC ones, must then maintain the acceptance on electroweak processes that can lead to a detailed study of the properties of the candidate Higgs boson. The acceptance of the key lepton, photon and hadron triggers should be kept such that the overall physics acceptance, in particular for low-mass scale processes, can be the same as the one the experiments featured in 2012. In such a scenario, a new approach to early trigger implementation is needed. One of the major steps will be the inclusion of high-granularity tracking sub-detectors, such as the CMS Silicon Tracker, in taking the early trigger decision. This contribution can be crucial in several tasks, including the confirmation of triggers in other subsystems, and the improvement of the on-line momentum mea...

  18. Achieving High Reliability Operations Through Multi-Program Integration

    Energy Technology Data Exchange (ETDEWEB)

    Holly M. Ashley; Ronald K. Farris; Robert E. Richards

    2009-04-01

    Over the last 20 years the Idaho National Laboratory (INL) has adopted a number of operations and safety-related programs, each of which has periodically taken its turn in the limelight. As new programs have come along there has been natural competition for resources, focus and commitment. In the last few years, the INL has made real progress in integrating all these programs and is starting to realize important synergies. Contributing to this integration are both collaborative individuals and an emerging shared vision and goal of the INL fully maturing in its high reliability operations. This goal is so powerful because the concept of high reliability operations (and the resulting organizations) is a masterful amalgam and orchestrator of the best of all the participating programs (i.e. conduct of operations, behavior based safety, human performance, voluntary protection, quality assurance, and integrated safety management). This paper is a brief recounting of the lessons learned, thus far, at the INL in bringing previously competing programs into harmony under the goal (umbrella) of seeking to perform regularly as a high reliability organization. In addition to a brief diagram-illustrated historical review, the authors will share the INL’s primary successes (things already effectively stopped or started) and the gaps yet to be bridged.

  19. ATLAS High-Level Trigger Performance for Calorimeter-Based Algorithms in LHC Run-I

    CERN Document Server

    Mann, A; The ATLAS collaboration

    2013-01-01

    The ATLAS detector operated during the three years of Run-I of the Large Hadron Collider, collecting information on a large number of proton-proton events. One of the most important results obtained so far is the discovery of a Higgs boson. More precise measurements of this particle must be performed, and other very important physics topics remain to be explored. One of the key components of the ATLAS detector is its trigger system. It is composed of three levels: one (called Level 1 - L1) built on custom hardware and the two others based on software algorithms - called Level 2 (L2) and Event Filter (EF) - altogether referred to as the ATLAS High Level Trigger. The ATLAS trigger is responsible for reducing almost 20 million collisions per second produced by the accelerator to less than 1000. The L2 operates only in the regions tagged by the first hardware level as containing possible interesting physics, while the EF operates in the full detector, normally using offline-like algorithms to...

  20. The design and performance of the ATLAS Inner Detector trigger in high pileup collisions at 13 TeV at the Large Hadron Collider

    CERN Document Server

    Sotiropoulou, Calliope Louisa; The ATLAS collaboration

    2017-01-01

    The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the high level trigger (HLT) processor farm for 13 TeV LHC collision data with high pileup are discussed. The HLT ID tracking is a vital component in all physics signatures in the ATLAS Trigger for the precise selection of the rare or interesting events necessary for physics analysis without overwhelming the offline data storage in terms of both size and rate. To cope with the high expected interaction rates in the 13 TeV LHC collisions the ID trigger was redesigned during the 2013-15 long shutdown. The performance of the ID Trigger in the 2016 data from 13 TeV LHC collisions has been excellent and exceeded expectations as the interaction multiplicity increased throughout the year. The detailed efficiencies and resolutions of the trigger in a wide range of physics signatures are presented, to demonstrate how the trigger responded well under the extreme pileup conditions. The performance of the ID Trigger algorithms...

  1. The design and performance of the ATLAS Inner Detector trigger in high pileup collisions at 13 TeV at the Large Hadron Collider

    CERN Document Server

    Kilby, Callum; The ATLAS collaboration

    2017-01-01

    The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the high level trigger (HLT) processor farm for 13 TeV LHC collision data with high pileup are discussed. The HLT ID tracking is a vital component in all physics signatures in the ATLAS Trigger for the precise selection of the rare or interesting events necessary for physics analysis without overwhelming the offline data storage in terms of both size and rate. To cope with the high expected interaction rates in the 13 TeV LHC collisions the ID trigger was redesigned during the 2013-15 long shutdown. The performance of the ID Trigger in the 2016 data from 13 TeV LHC collisions has been excellent and exceeded expectations as the interaction multiplicity increased throughout the year. The detailed efficiencies and resolutions of the trigger in a wide range of physics signatures are presented, to demonstrate how the trigger responded well under the extreme pileup conditions. The performance of the ID Trigger algorithm...

  2. High Reliability Cryogenic Piezoelectric Valve Actuator, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Cryogenic fluid valves are subject to harsh exposure and actuators to drive these valves require robust performance and high reliability. DSM's piezoelectric...

  3. Graphical processors for HEP trigger systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R. [INFN Sezione di Roma Tor Vergata, Via della Ricerca Scientifica, 1, 00133 Roma (Italy); Biagioni, A. [INFN Sezione di Roma, P.le Aldo Moro, 2, 00185 Roma (Italy); Chiozzi, S.; Cotta Ramusino, A. [INFN Sezione di Ferrara, Via Saragat, 1, 44122 Ferrara (Italy); Di Lorenzo, S. [INFN Sezione di Pisa, L. Bruno Pontecorvo, 3, 56127 Pisa (Italy); Università di Pisa, Lungarno Pacinotti 43, 56126 Pisa (Italy); Fantechi, R. [INFN Sezione di Pisa, L. Bruno Pontecorvo, 3, 56127 Pisa (Italy); Fiorini, M. [INFN Sezione di Ferrara, Via Saragat, 1, 44122 Ferrara (Italy); Università di Ferrara, Via Ludovico Ariosto 35, 44121 Ferrara (Italy); Frezza, O. [INFN Sezione di Roma, P.le Aldo Moro, 2, 00185 Roma (Italy); Lamanna, G. [INFN, Laboratori Nazionali di Frascati (Italy); Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P.S.; Pastorelli, E. [INFN Sezione di Roma, P.le Aldo Moro, 2, 00185 Roma (Italy); Piandani, R. [INFN Sezione di Pisa, L. Bruno Pontecorvo, 3, 56127 Pisa (Italy); Pontisso, L., E-mail: luca.pontisso@cern.ch [INFN Sezione di Pisa, L. Bruno Pontecorvo, 3, 56127 Pisa (Italy); Rossetti, D. [NVIDIA Corp., Santa Clara, CA (United States); Simula, F. [INFN Sezione di Roma, P.le Aldo Moro, 2, 00185 Roma (Italy); Sozzi, M. [INFN Sezione di Pisa, L. Bruno Pontecorvo, 3, 56127 Pisa (Italy); Università di Pisa, Lungarno Pacinotti 43, 56126 Pisa (Italy); and others

    2017-02-11

    General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to employ GPUs as accelerators in offline computations. With the steady decrease of GPU latencies and the increase in link and memory throughputs, time is ripe for real-time applications using GPUs in high-energy physics data acquisition and trigger systems. We will discuss the use of online parallel computing on GPUs for synchronous low level trigger systems, focusing on tests performed on the trigger of the CERN NA62 experiment. Latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Moreover, we discuss how specific trigger algorithms can be parallelised and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen LHC luminosity upgrade where highly selective algorithms will be crucial to maintain sustainable trigger rates with very high pileup.

  4. Graphical processors for HEP trigger systems

    International Nuclear Information System (INIS)

    Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P.S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.

    2017-01-01

    General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to employ GPUs as accelerators in offline computations. With the steady decrease of GPU latencies and the increase in link and memory throughputs, time is ripe for real-time applications using GPUs in high-energy physics data acquisition and trigger systems. We will discuss the use of online parallel computing on GPUs for synchronous low level trigger systems, focusing on tests performed on the trigger of the CERN NA62 experiment. Latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Moreover, we discuss how specific trigger algorithms can be parallelised and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen LHC luminosity upgrade where highly selective algorithms will be crucial to maintain sustainable trigger rates with very high pileup.

  5. High frame rate retrospectively triggered Cine MRI for assessment of murine diastolic function

    NARCIS (Netherlands)

    Coolen, Bram F.; Abdurrachim, Desiree; Motaal, Abdallah G.; Nicolay, Klaas; Prompers, Jeanine J.; Strijkers, Gustav J.

    2013-01-01

    To assess left ventricular (LV) diastolic function in mice with Cine MRI, a high frame rate (>60 frames per cardiac cycle) is required. For conventional electrocardiography-triggered Cine MRI, the frame rate is inversely proportional to the pulse repetition time (TR). However, TR cannot be lowered
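The stated inverse relation between frame rate and TR in ECG-triggered Cine MRI can be made concrete with a small helper. The 100 ms murine R-R interval used in the example is an illustrative assumption:

```python
def frames_per_cycle(rr_interval_ms: float, tr_ms: float) -> float:
    """Frames per cardiac cycle for conventional ECG-triggered Cine MRI:
    one frame is acquired per repetition, so the count is RR / TR."""
    return rr_interval_ms / tr_ms


# To exceed the >60 frames-per-cycle requirement at an (assumed) murine
# R-R interval of 100 ms, TR must drop below ~1.67 ms:
high_rate = frames_per_cycle(100.0, 1.6)
```

This is why the abstract notes that TR cannot simply be lowered indefinitely, motivating the retrospectively triggered acquisition instead.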

  6. Low vs. high haemoglobin trigger for transfusion in vascular surgery

    DEFF Research Database (Denmark)

    Møller, A; Nielsen, H B; Wetterslev, J

    2017-01-01

    of the infrarenal aorta or infrainguinal arterial bypass surgery undergo a web-based randomisation to one of two groups: perioperative RBC transfusion triggered by hb ...-up of serious adverse events in the Danish National Patient Register within 90 days is pending. DISCUSSION: This trial is expected to determine whether a RBC transfusion triggered by hb

  7. Monitoring and Tracking the LHC Beam Spot within the ATLAS High Level Trigger

    CERN Document Server

    Winklmeier, F; The ATLAS collaboration

    2012-01-01

    The parameters of the beam spot produced by the LHC in the ATLAS interaction region are computed online using the ATLAS High Level Trigger (HLT) system. The high rate of triggered events is exploited to make precise measurements of the position, size and orientation of the luminous region in near real-time, as these parameters change significantly even during a single data-taking run. We present the challenges, solutions and results for the online determination, monitoring and beam spot feedback system in ATLAS. A specially designed algorithm, which uses tracks registered in the silicon detectors to reconstruct event vertices, is executed on the HLT processor farm of several thousand CPU cores. Monitoring histograms from all the cores are sampled and aggregated across the farm every 60 seconds. The reconstructed beam values are corrected for detector resolution effects, measured in situ from the separation of vertices whose tracks have been split into two collections. Furthermore, measurements for individual ...

  8. Coronary calcium screening with dual-source CT: reliability of ungated, high-pitch chest CT in comparison with dedicated calcium-scoring CT

    Energy Technology Data Exchange (ETDEWEB)

    Hutt, Antoine; Faivre, Jean-Baptiste; Remy, Jacques; Remy-Jardin, Martine [CHRU et Universite de Lille, Department of Thoracic Imaging, Hospital Calmette (EA 2694), Lille (France); Duhamel, Alain; Deken, Valerie [CHRU et Universite de Lille, Department of Biostatistics (EA 2694), Lille (France); Molinari, Francesco [Centre Hospitalier General de Tourcoing, Department of Radiology, Tourcoing (France)

    2016-06-15

    To investigate the reliability of ungated, high-pitch dual-source CT for coronary artery calcium (CAC) screening. One hundred and eighty-five smokers underwent a dual-source CT examination with acquisition of two sets of images during the same session: (a) ungated, high-pitch and high-temporal resolution acquisition over the entire thorax (i.e., chest CT); (b) prospectively ECG-triggered acquisition over the cardiac cavities (i.e., cardiac CT). Sensitivity and specificity of chest CT for detecting positive CAC scores were 96.4 % and 100 %, respectively. There was excellent inter-technique agreement for determining the quantitative CAC score (ICC = 0.986). The mean difference between the two techniques was 11.27, representing 1.81 % of the average of the two techniques. The inter-technique agreement for categorizing patients into the four ranks of severity was excellent (weighted kappa = 0.95; 95 % CI 0.93-0.98). The inter-technique differences for quantitative CAC scores did not correlate with BMI (r = 0.05, p = 0.575) or heart rate (r = -0.06, p = 0.95); 87.2 % of them were explained by differences at the level of the right coronary artery (RCA: 0.8718; LAD: 0.1008; LCx: 0.0139; LM: 0.0136). Ungated, high-pitch dual-source CT is a reliable imaging mode for CAC screening in the conditions of routine chest CT examinations. (orig.)
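As a quick arithmetic check on the abstract's figures, the reported mean inter-technique difference of 11.27, stated to be 1.81 % of the average of the two techniques, implies an average score of roughly 623 (in the units of the CAC score). A minimal sketch:

```python
# Figures quoted in the abstract above.
mean_diff = 11.27          # mean difference between chest CT and cardiac CT
percent_of_average = 1.81  # that difference as a percentage of their average

# Back out the average score the two percentages refer to.
implied_average = 100.0 * mean_diff / percent_of_average
# implied_average is roughly 623
```

A small relative bias on a large average score like this is consistent with the excellent intraclass correlation (ICC = 0.986) the study reports.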

  9. High-Reliable PLC RTOS Development and RPS Structure Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, H. S.; Song, D. Y.; Sohn, D. S.; Kim, J. H. [Enersys Co., Daejeon (Korea, Republic of)

    2008-04-15

    One of the KNICS objectives is to develop a platform for Nuclear Power Plant (NPP) I and C (Instrumentation and Control) systems, especially the plant protection system. The developed platform is POSAFE-Q, and this work supports the development of POSAFE-Q with the development of a high-reliability real-time operating system (RTOS) and programmable logic device (PLD) software. Another KNICS objective is to develop safety I and C systems, such as the Reactor Protection System (RPS) and Engineered Safety Feature-Component Control System (ESF-CCS). This work plays an important role in the structure analysis for the RPS. Validation and verification (V and V) of safety-critical software is essential work to make a digital plant protection system highly reliable and safe. Generally, the reliability and safety of a software-based system can be improved by a strict quality assurance framework including the software development itself. In other words, through V and V, the reliability and safety of a system can be improved, and development activities like software requirement specification, software design specification, component tests, integration tests, and system tests shall be appropriately documented for V and V.

  10. High-Reliable PLC RTOS Development and RPS Structure Analysis

    International Nuclear Information System (INIS)

    Sohn, H. S.; Song, D. Y.; Sohn, D. S.; Kim, J. H.

    2008-04-01

    One of the KNICS objectives is to develop a platform for Nuclear Power Plant (NPP) I and C (Instrumentation and Control) systems, especially the plant protection system. The developed platform is POSAFE-Q, and this work supports the development of POSAFE-Q with the development of a high-reliability real-time operating system (RTOS) and programmable logic device (PLD) software. Another KNICS objective is to develop safety I and C systems, such as the Reactor Protection System (RPS) and Engineered Safety Feature-Component Control System (ESF-CCS). This work plays an important role in the structure analysis for the RPS. Validation and verification (V and V) of safety-critical software is essential work to make a digital plant protection system highly reliable and safe. Generally, the reliability and safety of a software-based system can be improved by a strict quality assurance framework including the software development itself. In other words, through V and V, the reliability and safety of a system can be improved, and development activities like software requirement specification, software design specification, component tests, integration tests, and system tests shall be appropriately documented for V and V.

  11. TRIGGER

    CERN Multimedia

    Wesley Smith

    Level-1 Trigger Hardware and Software The final parts of the Level-1 trigger hardware are now being put in place. For the ECAL endcaps, more than half of the Trigger Concentrator Cards for the ECAL Endcap (TCC-EE) are now available at CERN, such that one complete endcap can be covered. The Global Trigger now correctly handles ECAL calibration sequences, without being influenced by backpressure. The Regional Calorimeter Trigger (RCT) hardware is complete and working in USC55. Intra-crate tests of all 18 RCT crates and the Global Calorimeter Trigger (GCT) are regularly taking place. Pattern tests have successfully captured data from HCAL through RCT to the GCT Source Cards. HB/HE trigger data are being compared with emulator results to track down the very few remaining hardware problems. The treatment of hot and dead cells, including their recording in the database, has been defined. For the GCT, excellent agreement between the emulator and data has been achieved for jets and HF ET sums. There is still som...

  12. A muon trigger for the MACRO apparatus

    International Nuclear Information System (INIS)

    Barbarito, E.; Bellotti, R.; Calicchio, M.; Castellano, M.; DeCataldo, G.; DeMarzo, C.; Erriquez, O.; Favuzzi, C.; Giglietto, N.; Liuzzi, R.; Spinelli, P.

    1991-01-01

    A trigger circuit based on EPROM components, able to manage up to 30 lines from independent counters, is described. The circuit has been designed and used in the MACRO apparatus at the Gran Sasso Laboratory for triggering on fast particles. The circuit works with standard TTL positive logic and is assembled in a double standard CAMAC module. It has a high triggering capacity and a high flexibility. (orig.)
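    The decision logic described above is essentially a lookup table: the counter lines form an address, and the EPROM contents hold a precomputed trigger decision for every input pattern. A minimal sketch of that idea in Python, with the line count, majority threshold and names all invented for illustration (the actual MACRO logic is not specified here):

```python
# EPROM-style trigger sketch: the input counter lines form an address into
# a precomputed lookup table whose output is the trigger decision. A toy
# 8-line majority trigger (>= 3 hits) is shown; the real MACRO circuit
# handled up to 30 lines. All names and thresholds are illustrative.

N_LINES = 8
THRESHOLD = 3

# "Burn" the EPROM: one entry per possible input pattern.
lut = [1 if bin(addr).count("1") >= THRESHOLD else 0
       for addr in range(2 ** N_LINES)]

def trigger(lines):
    """lines: iterable of 0/1 logic levels on the counter inputs."""
    addr = 0
    for i, bit in enumerate(lines):
        addr |= (bit & 1) << i
    return lut[addr]

print(trigger([1, 1, 1, 0, 0, 0, 0, 0]))  # 3 hits -> prints 1 (fires)
print(trigger([1, 1, 0, 0, 0, 0, 0, 0]))  # 2 hits -> prints 0
```

The flexibility claimed for the circuit corresponds here to reprogramming `lut`: any boolean function of the input lines can be loaded without changing the wiring.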

  13. Real Time Global Tests of the ALICE High Level Trigger Data Transport Framework

    CERN Document Server

    Becker, B.; Cicalo J.; Cleymans, C.; de Vaux, G.; Fearick, R.W.; Lindenstruth, V.; Richter, M.; Rorich, D.; Staley, F.; Steinbeck, T.M.; Szostak, A.; Tilsner, H.; Weis, R.; Vilakazi, Z.Z.

    2008-01-01

    The High Level Trigger (HLT) system of the ALICE experiment is an online event filter and trigger system designed for input bandwidths of up to 25 GB/s at event rates of up to 1 kHz. The system is designed as a scalable PC cluster, implementing several hundred nodes. The transport of data in the system is handled by an object-oriented data flow framework operating on the basis of the publisher-subscriber principle, being designed fully pipelined with lowest processing overhead and communication latency in the cluster. In this paper, we report the latest measurements where this framework has been operated on five different sites over a global north-south link extending more than 10,000 km, processing a "real-time" data flow.
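    The publisher-subscriber principle mentioned above can be sketched in a few lines: each processing stage subscribes to the output of the previous one, so events flow through a pipeline without the stages knowing about each other. The class and method names below are illustrative only, not the actual ALICE HLT framework API:

```python
# Minimal publisher-subscriber pipeline sketch. A Stage transforms each
# event it receives and republishes it, so stages can be chained freely.

class Publisher:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event):
        for cb in self._subscribers:
            cb(event)

class Stage(Publisher):
    """A processing stage: transforms events and republishes them."""
    def __init__(self, transform):
        super().__init__()
        self.transform = transform

    def __call__(self, event):  # acts as a subscriber callback
        self.publish(self.transform(event))

# Wire a toy chain: source -> "tracker" stage -> result sink.
source = Publisher()
tracker = Stage(lambda ev: {**ev, "tracks": len(ev["hits"]) // 2})
results = []
source.subscribe(tracker)
tracker.subscribe(results.append)

source.publish({"id": 1, "hits": [0.1, 0.4, 0.7, 0.9]})
print(results)
```

Because stages only see callbacks, the same wiring works whether the next stage is in the same process or behind a network transport, which is what makes the pattern attractive for a distributed cluster.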

  14. High pressure, high current, low inductance, high reliability sealed terminals

    Science.gov (United States)

    Hsu, John S [Oak Ridge, TN; McKeever, John W [Oak Ridge, TN

    2010-03-23

    The invention is a terminal assembly having a casing with at least one delivery tapered-cone conductor and at least one return tapered-cone conductor routed there-through. The delivery and return tapered-cone conductors are electrically isolated from each other and positioned in the annuluses of ordered concentric cones at an off-normal angle. The tapered cone conductor service can be AC phase conductors and DC link conductors. The center core has at least one service conduit of gate signal leads, diagnostic signal wires, and refrigerant tubing routed there-through. A seal material is in direct contact with the casing inner surface, the tapered-cone conductors, and the service conduits thereby hermetically filling the interstitial space in the casing interior core and center core. The assembly provides simultaneous high-current, high-pressure, low-inductance, and high-reliability service.

  15. Assessing high reliability via Bayesian approach and accelerated tests

    International Nuclear Information System (INIS)

    Erto, Pasquale; Giorgio, Massimiliano

    2002-01-01

    Sometimes the assessment of very high reliability levels is difficult for the following main reasons: - the high reliability level of each item makes it impossible to obtain, in a reasonably short time, a sufficient number of failures; - the high cost of the high reliability items to submit to life tests makes it unfeasible to collect enough data for 'classical' statistical analyses. In the above context, this paper presents a Bayesian solution to the problem of estimation of the parameters of the Weibull-inverse power law model, on the basis of a limited number (say six) of life tests, carried out at different stress levels, all higher than the normal one. The over-stressed (i.e. accelerated) tests allow the use of experimental data obtained in a reasonably short time. The Bayesian approach enables one to reduce the required number of failures by adding the available a priori engineers' knowledge to the failure information. This engineers' involvement conforms to the most advanced management policy that aims at involving everyone's commitment in order to obtain total quality. A Monte Carlo study of the non-asymptotic properties of the proposed estimators and a comparison with the properties of maximum likelihood estimators close the work
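    A toy version of the approach can illustrate the mechanics: Weibull failure times whose scale parameter follows an inverse power law in the stress, with a posterior computed on a coarse grid from a handful of over-stressed failures. The shape parameter, priors, grid, stress levels and all numbers below are invented for illustration and are not taken from the paper:

```python
# Weibull-inverse power law accelerated-test sketch: eta(V) = C / V**p is
# the Weibull scale at stress V. Six failures at elevated stresses are
# simulated, then a flat-prior grid posterior over (C, p) is computed.

import math, random

random.seed(1)
beta = 2.0                      # assumed-known Weibull shape (illustrative)
true_C, true_p = 200.0, 1.5     # "unknown" parameters used to simulate data

def eta(V, C, p):
    return C / V ** p

stresses = [2.0, 2.0, 3.0, 3.0, 4.0, 4.0]   # all above the nominal stress
times = [eta(V, true_C, true_p) * random.weibullvariate(1.0, beta)
         for V in stresses]

def loglik(C, p):
    """Weibull log-likelihood of the observed failure times."""
    s = 0.0
    for V, t in zip(stresses, times):
        e = eta(V, C, p)
        s += (math.log(beta / e) + (beta - 1) * math.log(t / e)
              - (t / e) ** beta)
    return s

# Flat prior over a plausible grid -> posterior weights by normalisation.
grid = [(C, p) for C in range(50, 400, 5)
               for p in [i / 20 for i in range(10, 60)]]
w = [math.exp(loglik(C, p)) for C, p in grid]
Z = sum(w)
post_C = sum(C * wi for (C, _), wi in zip(grid, w)) / Z
post_p = sum(p * wi for (_, p), wi in zip(grid, w)) / Z
print(f"posterior mean C ~ {post_C:.0f}, p ~ {post_p:.2f}")
```

An informative engineering prior would simply replace the flat weights with prior probabilities on the grid, which is how the a priori knowledge reduces the number of failures needed.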

  16. The Resource utilization by ATLAS High Level Triggers. The contributed talk for the Technology and Instrumentation in Particle Physics 2011.

    CERN Document Server

    Ospanov, R; The ATLAS collaboration

    2011-01-01

    In 2010 the ATLAS experiment successfully recorded data from LHC collisions with high efficiency and excellent data quality. ATLAS employs a three-level trigger system to select events of interest for physics analyses and detector commissioning. The trigger system consists of a custom-designed hardware trigger at level-1 (L1) and software algorithms executing on commodity servers at the two higher levels: the second level trigger (L2) and the event filter (EF). The corresponding trigger rates are 75 kHz, 3 kHz and 200 Hz. The L2 uses custom algorithms to examine a small fraction of data at full detector granularity in Regions of Interest selected by the L1. The EF employs offline algorithms and full detector data for more computationally intensive analysis. The trigger selection is defined by trigger menus which consist of more than 500 individual trigger signatures, such as electrons, muons, particle jets, etc. An execution of a trigger signature incurs computing and data storage costs. A composition of the depl...

  17. The ATLAS hadronic tau trigger

    CERN Document Server

    Black, C; The ATLAS collaboration

    2012-01-01

    With the high luminosities of proton-proton collisions achieved at the LHC, the strategies for triggering have become more important than ever for physics analysis. The naive inclusive single tau lepton triggers now suffer from severe rate limitations. To allow for a large program of physics analyses with taus, the development of topological triggers that combine tau signatures with other measured quantities in the event is required. These combined triggers open many opportunities to study new physics beyond the Standard Model and to search for the Standard Model Higgs. We present the status and performance of the hadronic tau trigger in ATLAS. We demonstrate that the ATLAS tau trigger ran remarkably well over 2011, and how the lessons learned from 2011 led to numerous improvements in the preparation of the 2012 run. These improvements include the introduction of tau selection criteria that are robust against varying pileup scenarios, and the implementation of multivariate selection techniques in the tau trig...

  19. Human reliability in high dose rate afterloading radiotherapy based on FMECA

    International Nuclear Information System (INIS)

    Deng Jun; Fan Yaohua; Yue Baorong; Wei Kedao; Ren Fuli

    2012-01-01

    Objective: To put forward reasonable and feasible recommendations against the procedures with relatively high risk during high dose rate (HDR) afterloading radiotherapy, so as to enhance its clinical application safety, by studying human reliability in the process of carrying out HDR afterloading radiotherapy. Methods: Basic data were collected by on-site investigation and process analysis as well as expert evaluation. Failure mode, effect and criticality analysis (FMECA) was employed to study human reliability in the execution of HDR afterloading radiotherapy. Results: The FMECA model of human reliability for HDR afterloading radiotherapy was established, through which 25 procedures with relatively high risk indices were found, accounting for 14.1% of the total 177 procedures. Conclusions: The FMECA method is feasible for studying human reliability in HDR afterloading radiotherapy. Countermeasures are put forward to reduce human error, so as to provide an important basis for enhancing the clinical application safety of HDR afterloading radiotherapy. (authors)
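    A common FMECA-style ranking, sketched below under the usual convention, multiplies severity, occurrence and detectability scores into a risk priority number (RPN) and flags procedures above a chosen threshold. The procedures, scores and threshold here are invented for illustration; the paper's own risk index may be defined differently:

```python
# FMECA risk-ranking sketch: RPN = severity x occurrence x detectability,
# each scored 1-10. Procedures above THRESHOLD are flagged as relatively
# high risk. All entries below are hypothetical examples.

procedures = {
    # name: (severity, occurrence, detectability)
    "applicator placement check":   (9, 3, 4),
    "source position verification": (8, 2, 3),
    "treatment plan transfer":      (6, 4, 2),
    "patient identity check":       (10, 1, 2),
}

def rpn(scores):
    s, o, d = scores
    return s * o * d

ranked = sorted(procedures.items(), key=lambda kv: rpn(kv[1]), reverse=True)
THRESHOLD = 50
high_risk = [(name, rpn(sc)) for name, sc in ranked if rpn(sc) > THRESHOLD]
for name, value in high_risk:
    print(f"{name}: RPN = {value}")
```

In a real analysis the flagged fraction (here 1 of 4 procedures) is what drives where countermeasures against human error are concentrated.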

  20. BTeV detached vertex trigger

    International Nuclear Information System (INIS)

    Gottschalk, E.E.

    2001-01-01

    BTeV is a collider experiment that has been approved to run in the Tevatron at Fermilab. The experiment will conduct precision studies of CP violation using a forward-geometry detector. The detector will be optimized for high-rate detection of beauty and charm particles produced in collisions between protons and anti-protons. BTeV will trigger on beauty and charm events by taking advantage of the main difference between these heavy quark events and more typical hadronic events - the presence of detached beauty and charm decay vertices. The first stage of the BTeV trigger will receive data from a pixel vertex detector at a rate of 100 GB/s, reconstruct tracks and vertices for every beam crossing, reject 99% of beam crossings that do not produce beauty or charm particles, and trigger on beauty events with high efficiency. An overview of the trigger design and its influence on the design of the pixel vertex detector is presented

  1. Bar Code Medication Administration Technology: Characterization of High-Alert Medication Triggers and Clinician Workarounds.

    Science.gov (United States)

    Miller, Daniel F; Fortier, Christopher R; Garrison, Kelli L

    2011-02-01

    Bar code medication administration (BCMA) technology is gaining acceptance for its ability to prevent medication administration errors. However, studies suggest that improper use of BCMA technology can yield unsatisfactory error prevention and introduce new potential medication errors. To evaluate the incidence of high-alert medication BCMA triggers and alert types, and to discuss the types of nursing and pharmacy workarounds occurring with the use of BCMA technology and the electronic medication administration record (eMAR), medication scanning and override reports from January 1, 2008, through November 30, 2008, for all adult medical/surgical units were retrospectively evaluated for high-alert medication system triggers, alert types, and override reason documentation. An observational study of nursing workarounds on an adult medicine step-down unit was performed, and an analysis of potential pharmacy workarounds affecting BCMA and the eMAR was also conducted. Seventeen percent of scanned medications triggered an error alert, of which 55% were for high-alert medications. Insulin aspart, NPH insulin, hydromorphone, potassium chloride, and morphine were the top 5 high-alert medications that generated alert messages. Clinician override reasons for alerts were documented in only 23% of administrations. Observational studies assessing nursing workarounds revealed a median of 3 clinician workarounds per administration. Specific nursing workarounds included a failure to scan medications/patient armband and scanning the bar code once the dosage had been removed from the unit-dose packaging. Analysis of pharmacy order entry process workarounds revealed the potential for missed doses, duplicate doses, and doses being scheduled at the wrong time. BCMA has the potential to prevent high-alert medication errors by alerting clinicians through alert messages. Nursing and pharmacy workarounds can limit the recognition of optimal safety outcomes, and therefore workflow processes...
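    The percentages reported in the abstract come from straightforward aggregation of scan records. A hedged sketch of that kind of analysis, with the record layout and data entirely invented:

```python
# Toy BCMA report aggregation: fraction of scans that triggered an alert,
# share of alerts involving high-alert medications, and how often an
# override reason was documented. All records and field names are made up.

scans = [
    {"drug": "insulin aspart", "alert": True,  "high_alert": True,  "override_reason": None},
    {"drug": "morphine",       "alert": True,  "high_alert": True,  "override_reason": "verbal order"},
    {"drug": "amoxicillin",    "alert": True,  "high_alert": False, "override_reason": None},
    {"drug": "acetaminophen",  "alert": False, "high_alert": False, "override_reason": None},
]

alerts = [s for s in scans if s["alert"]]
alert_rate = len(alerts) / len(scans)
high_share = sum(s["high_alert"] for s in alerts) / len(alerts)
documented = sum(s["override_reason"] is not None for s in alerts) / len(alerts)

print(f"alert rate: {alert_rate:.0%}")
print(f"high-alert share of alerts: {high_share:.0%}")
print(f"override reason documented: {documented:.0%}")
```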

  2. BTeV trigger/DAQ innovations

    International Nuclear Information System (INIS)

    Votava, Margaret

    2005-01-01

    The BTeV experiment was a collider based high energy physics (HEP) B-physics experiment proposed at Fermilab. It included a large-scale, high speed trigger/data acquisition (DAQ) system, reading data off the detector at 500 Gbytes/sec and writing to mass storage at 200 Mbytes/sec. The online design was considered to be highly credible in terms of technical feasibility, schedule and cost. This paper will give an overview of the overall trigger/DAQ architecture, highlight some of the challenges, and describe the BTeV approach to solving some of the technical challenges. At the time of termination in early 2005, the experiment had just passed its baseline review. Although not fully implemented, many of the architecture choices, design, and prototype work for the online system (both trigger and DAQ) were well on their way to completion. Other large, high-speed online systems may have interest in some of the design choices and directions of BTeV, including (a) a commodity-based tracking trigger running asynchronously at full rate, (b) the hierarchical control and fault tolerance in a large real time environment, (c) a partitioning model that supports offline processing on the online farms during idle periods with plans for dynamic load balancing, and (d) an independent parallel highway architecture

  3. TRIGGER

    CERN Multimedia

    W. Smith

    At the March meeting, the CMS trigger group reported on progress in production, tests in the Electronics Integration Center (EIC) in Prevessin 904, progress on trigger installation in the underground counting room at point 5, USC55, the program of trigger pattern tests and vertical slice tests and planning for the Global Runs starting this summer. The trigger group is engaged in the final stages of production testing, systems integration, and software and firmware development. Most systems are delivering final tested electronics to CERN. The installation in USC55 is underway and integration testing is in full swing. A program of orderly connection and checkout with subsystems and central systems has been developed. This program includes a series of vertical subsystem slice tests providing validation of a portion of each subsystem from front-end electronics through the trigger and DAQ to data captured and stored. After full checkout, trigger subsystems will be then operated in the CMS Global Runs. Continuous...

  4. TRIGGER

    CERN Multimedia

    Wesley Smith

    2011-01-01

    Level-1 Trigger Hardware and Software New Forward Scintillating Counters (FSC) for rapidity gap measurements have been installed and integrated into the Trigger recently. For the Global Muon Trigger, tuning of quality criteria has led to improvements in muon trigger efficiencies. Several subsystems have started campaigns to increase spares by recovering boards or producing new ones. The barrel muon sector collector test system has been reactivated, new η track finder boards are in production, and φ track finder boards are under revision. In the CSC track finder, an η asymmetry problem has been corrected. New pT look-up tables have also improved efficiency. RPC patterns were changed from four out of six coincident layers to three out of six in the barrel, which led to a significant increase in efficiency. A new PAC firmware to trigger on heavy stable charged particles allows looking for chamber hit coincidences in two consecutive bunch-crossings. The redesign of the L1 Trigger Emulator...

  5. TRIGGER

    CERN Multimedia

    W. Smith from contributions of C. Leonidopoulos, I. Mikulec, J. Varela and C. Wulz.

    Level-1 Trigger Hardware and Software Over the past few months, the Level-1 trigger has successfully recorded data with cosmic rays over long continuous stretches as well as LHC splash events, beam halo, and collision events. The L1 trigger hardware, firmware, synchronization, performance and readiness for beam operation were reviewed in October. All L1 trigger hardware is now installed at Point 5, and most of it is completely commissioned. While the barrel ECAL Trigger Concentrator Cards are fully operational, the recently delivered endcap ECAL TCC system is still being commissioned. For most systems there is a sufficient number of spares available, but for a few systems additional reserve modules are needed. It was decided to increase the overall L1 latency by three bunch crossings to increase the safety margin for trigger timing adjustments. In order for CMS to continue data taking during LHC frequency ramps, the clock distribution tree needs to be reset. The procedures for this have been tested. A repl...

  6. Reliability studies of high operating temperature MCT photoconductor detectors

    Science.gov (United States)

    Wang, Wei; Xu, Jintong; Zhang, Yan; Li, Xiangyang

    2010-10-01

    This paper concerns HgCdTe (MCT) infrared photoconductor detectors with high operating temperature. The near room temperature operation of these detectors has the advantages of light weight, lower cost and convenient usage. Their performances are modest and they suffer from reliability problems. These detectors face stability problems with the package, the chip bonding area and the passivation layers. It is important to evaluate and improve the reliability of such detectors. Defective detectors were studied with SEM (scanning electron microscopy) and optical microscopy. Statistically significant differences were observed between the influence of operating temperature and the influence of humidity. It was also found that humidity has a statistically significant influence upon the stability of the chip bonding and passivation layers, and that the amount of humidity is not strongly correlated to the damage on the surface. Considering the commonly found failure modes in detectors, special test structures were designed to improve the reliability of detectors. An accelerated life test was also implemented to estimate the lifetime of the high operating temperature MCT photoconductor detectors.

  7. Double prospectively ECG-triggered high-pitch spiral acquisition for CT coronary angiography: Initial experience

    International Nuclear Information System (INIS)

    Wang, Q.; Qin, J.; He, B.; Zhou, Y.; Yang, J.-J.; Hou, X.-L.; Yang, X.-B.; Chen, J.-H.; Chen, Y.-D.

    2013-01-01

    Aim: To evaluate the feasibility of double prospectively electrocardiogram (ECG)-triggered high-pitch spiral acquisition mode (double high-pitch mode) for coronary computed tomography angiography (CTCA). Materials and methods: One hundred and forty-nine consecutive patients [40 women, 109 men; mean age 58.2 ± 9.2 years; sinus rhythm ≤70 beats/min (bpm) after pre-medication, body weight ≤100 kg] were enrolled for CTCA examinations using a dual-source CT system with 2 × 128 × 0.6 mm collimation, 0.28 s rotation time, and a pitch of 3.4. Double high-pitch mode was prospectively triggered first at 60% and later at 30% of the R–R interval within two cardiac cycles. Image quality was evaluated using a four-point scale (1 = excellent, 4 = non-assessable). Results: From 2085 coronary artery segments, 86.4% (1802/2085) were rated as having a score of 1, 12.3% (257/2085) as score of 2, 1.2% (26/2085) as score of 3, and none were rated as “non-assessable”. The average image quality score was 1.15 ± 0.26 on a per-segment basis. The effective dose was calculated by multiplying the coefficient factor of 0.028 by the dose–length product (DLP); the mean effective dose was 3.5 ± 0.8 mSv (range 1.7–7.6 mSv). The total dosage of contrast medium was 78.7 ± 2.9 ml. Conclusion: Double prospectively ECG-triggered high-pitch spiral acquisition mode provides good image quality with an average effective dose of less than 5 mSv in patients with a heart rate ≤70 bpm
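    The dose calculation is stated explicitly in the abstract: effective dose (mSv) is the coefficient 0.028 multiplied by the dose-length product (DLP). As a check, a DLP of 125 mGy·cm (an illustrative value, not from the paper) reproduces the reported mean of 3.5 mSv:

```python
# Effective dose from dose-length product, using the chest conversion
# coefficient quoted in the abstract: E = k * DLP, k = 0.028 mSv/(mGy*cm).

CHEST_K = 0.028  # conversion coefficient from the abstract, mSv/(mGy*cm)

def effective_dose_msv(dlp_mgy_cm, k=CHEST_K):
    """Effective dose in mSv from a dose-length product in mGy*cm."""
    return k * dlp_mgy_cm

print(round(effective_dose_msv(125), 3))  # 125 mGy*cm -> 3.5 mSv
```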

  8. Progress in the High Level Trigger Integration

    CERN Multimedia

    Cristobal Padilla

    2007-01-01

    During the week from March 19th to March 23rd, the DAQ/HLT group performed another of its technical runs. On this occasion the focus was on integrating the Level 2 and Event Filter triggers, with a much fuller integration of HLT components than had been done previously. For the first time this included complete trigger slices, with a menu to run the selection algorithms for muons, electrons, jets and taus at the Level-2 and Event Filter levels. This Technical run again used the "Pre-Series" system (a vertical slice prototype of the DAQ/HLT system, see the ATLAS e-news January issue for details). Simulated events, provided by our colleagues working in the streaming tests, were pre-loaded into the ROS (Read Out System) nodes. These are the PC's where the data from the detector is stored after coming out of the front-end electronics, the "first part of the TDAQ system" and the interface to the detectors. These events used a realistic beam interaction mixture and had been subjected to a Level-1 selection. The...

  9. The ATLAS Muon and Tau Trigger

    CERN Document Server

    Dell'Asta, L; The ATLAS collaboration

    2013-01-01

    [Muon] The ATLAS experiment at CERN's Large Hadron Collider (LHC) deploys a three-level processing scheme for the trigger system. The level-1 muon trigger system gets its input from fast muon trigger detectors. Fast sector logic boards select muon candidates, which are passed via an interface board to the central trigger processor and then to the High Level Trigger (HLT). The muon HLT is purely software based and encompasses a level-2 (L2) trigger followed by an event filter (EF) for a staged trigger approach. It has access to the data of the precision muon detectors and other detector elements to refine the muon hypothesis. Trigger-specific algorithms were developed and are used for the L2 to increase processing speed, for instance by making use of look-up tables and simpler algorithms, while the EF muon triggers mostly benefit from offline reconstruction software to obtain the most precise determination of the track parameters. There are two algorithms with different approaches, namely inside-out and outside-in...

  10. The ATLAS Trigger: Recent Experience and Future Plans

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    This paper will give an overview of the ATLAS trigger design and its innovative features. It will describe the valuable experience gained in running the trigger reconstruction and event selection in the fast-changing environment of the detector commissioning during 2008. It will also include a description of the trigger selection menu and its 2009 deployment plan from first collisions to the nominal luminosity. ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). The trigger system needs to efficiently reject a large rate of background events and still select potentially interesting ones with high efficiency. After a first level trigger implemented in custom electronics, the trigger event selection is made by the High Level Trigger (HLT) system, implemented in software. To reduce the processing time to manageable levels, the HLT uses seeded, step-wise and fast selection algorithms, aiming at the earliest possible rejection of background events. The ATLAS trigger event selection...
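    The seeded, step-wise selection strategy can be sketched as a chain of increasingly expensive steps with early rejection: an event is dropped at the first failing step, so most background pays only the cheap early cost. The steps, event fields and thresholds below are invented for illustration and do not correspond to actual ATLAS algorithms:

```python
# Step-wise HLT selection sketch: cheap checks first, expensive ones last,
# with early rejection at the first failing step.

def step_l1_seed(event):          # cheap: require a level-1 seed
    return bool(event.get("l1_seeds"))

def step_fast_tracking(event):    # moderate: rough track count
    return event.get("fast_tracks", 0) >= 2

def step_full_reco(event):        # expensive: precise energy cut
    return event.get("energy_gev", 0.0) > 20.0

CHAIN = [step_l1_seed, step_fast_tracking, step_full_reco]

def hlt_accept(event):
    """Run the chain; all() stops at the first failing step."""
    return all(step(event) for step in CHAIN)

print(hlt_accept({"l1_seeds": [1], "fast_tracks": 3, "energy_gev": 42.0}))  # True
print(hlt_accept({"l1_seeds": []}))  # False, rejected at the cheapest step
```

Because `all()` short-circuits, an event without a level-1 seed never reaches the expensive reconstruction step, which is the point of seeded, step-wise selection.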

  11. Recognising triggers for soft-sediment deformation: Current understanding and future directions

    Science.gov (United States)

    Owen, Geraint; Moretti, Massimo; Alfaro, Pedro

    2011-04-01

    Most of the 16 papers in this special issue were presented at a session entitled "The recognition of trigger mechanisms for soft-sediment deformation" at the 27th IAS Meeting of Sedimentology in Alghero, Sardinia, Italy, which took place from 20th-23rd September 2009. They describe soft-sediment deformation structures that range widely in morphology, age, depositional environment and tectonic setting. In their interpretations, the authors have been asked to focus on identifying the agent that triggered deformation. Our aims in this introductory overview are to: (1) review the definition and scope of soft-sediment deformation; (2) clarify the significance and role of the trigger; (3) set the contributions in context and summarise their findings; and (4) discuss strategies for reliably identifying triggers and make recommendations for future study of this widespread and significant category of sedimentary structures. We recommend a three-stage approach to trigger recognition, combining the assessment of facies, potential triggers, and available criteria. This focus on the trigger for deformation distinguishes this collection of papers on soft-sediment deformation from other important collections, notably those edited by Jones and Preston (1987), Maltman (1994), Maltman et al. (2000), Shiki et al. (2000), Ettensohn et al. (2002b), Van Rensbergen et al. (2003) and Storti and Vannucchi (2007).

  12. Electronics and triggering challenges for the CMS High Granularity Calorimeter

    CERN Document Server

    Lobanov, Artur

    2017-01-01

    The High Granularity Calorimeter (HGCAL), presently being designed by the CMS collaboration to replace the CMS endcap calorimeters for the High Luminosity phase of LHC, will feature six million channels distributed over 52 longitudinal layers. The requirements for the front-end electronics are extremely challenging, including high dynamic range (0-10 pC), low noise (~2000e- to be able to calibrate on single minimum ionising particles throughout the detector lifetime) and low power consumption (~10mW/channel), as well as the need to select and transmit trigger information with a high granularity. Exploiting the intrinsic precision-timing capabilities of silicon sensors also requires careful design of the front-end electronics as well as the whole system, particularly clock distribution. The harsh radiation environment and requirement to keep the whole detector as dense as possible will require novel solutions to the on-detector electronics layout. Processing all the data from the HGCAL imposes equally large ch...

  13. ATLAS Trigger Monitoring and Operation in Proton Proton Collisions at 900 GeV

    CERN Document Server

    zur Nedden, M; The ATLAS collaboration

    2010-01-01

    The trigger of the ATLAS experiment is built as a three-level system. The first level is realized in hardware while the higher levels (HLT) are purely software-implemented triggers based on large PC farms. According to the LHC bunch crossing frequency of 40 MHz and the expectation of up to 23 interactions per bunch crossing at design luminosity, the trigger system must be able to deal with an input rate of 1 GHz whereas the maximum storage rate is 200 Hz. This complex data acquisition and trigger system requires a reliable and redundant diagnostic and monitoring system, which is essential for a successful commissioning and stable running of the whole experiment. The main aspects of trigger monitoring are the rate measurements at each step of the trigger decision at each level, the determination of the quality of the physics object candidates to be selected at trigger level (such as candidates for electrons, muons, taus, gammas, jets, b-jets and missing energy) and the supervision of the system's behavior during the...
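    The quoted figures imply the overall rejection the trigger chain must deliver; a quick check of the arithmetic (roughly 1 GHz of interactions against a 200 Hz storage rate):

```python
# Rate arithmetic from the abstract's numbers: 40 MHz bunch crossings with
# up to 23 interactions each gives ~1 GHz in, 200 Hz out.

bunch_crossing_hz = 40e6          # LHC bunch-crossing frequency
interactions_per_crossing = 23    # expected pile-up at design luminosity
input_hz = bunch_crossing_hz * interactions_per_crossing
storage_hz = 200.0

print(f"input rate: {input_hz / 1e9:.2f} GHz")                 # 0.92 GHz, i.e. ~1 GHz
print(f"overall rejection factor: {input_hz / storage_hz:.1e}")  # 4.6e+06
```

A rejection factor of a few million per event is why the monitoring described above must track rates at every step of the decision chain.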

  14. The ATLAS Electron and Photon Trigger

    CERN Document Server

    Jones, Samuel David; The ATLAS collaboration

    2018-01-01

    ATLAS electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential to record signals for a wide variety of physics: from Standard Model processes to searches for new phenomena. To cope with ever-increasing luminosity and more challenging pile-up conditions at a centre-of-mass energy of 13 TeV, the trigger selections need to be optimized to control the rates and keep efficiencies high. The ATLAS electron and photon trigger performance in Run 2 will be presented, including both the role of the ATLAS calorimeter in electron and photon identification and details of new techniques developed to maintain high performance even in high pile-up conditions.

  15. A self triggered intensified Ccd (Stic)

    International Nuclear Information System (INIS)

    Charon, Y.; Laniece, P.; Bendali, M.

    1990-01-01

    We are developing a new device based on the previously reported results of the successful coincidence detection of β- particles with high spatial resolution [1]. The novelty of the device consists in triggering an intensified CCD, i.e. a CCD coupled to an image intensifier (II), by an electrical signal collected from the II itself. This is a suitable procedure for detecting low-light rare events with high efficiency and high resolution. The trigger pulse is obtained from the secondary electrons produced by multiplication in a double microchannel plate (MCP) and collected on the aluminized layer protecting the phosphor screen in the II. Triggering efficiencies of up to 80% have already been achieved

  16. The ATLAS Tau Trigger

    CERN Document Server

    Rados, PK; The ATLAS collaboration

    2014-01-01

    Physics processes involving tau leptons play a crucial role in understanding particle physics at the high energy frontier. The ability to efficiently trigger on events containing hadronic tau decays is therefore of particular importance to the ATLAS experiment. During the 2012 run, the Large Hadron Collider (LHC) reached instantaneous luminosities of nearly $10^{34} cm^{-2}s^{-1}$ with bunch crossings occurring every $50 ns$. This resulted in a huge event rate and a high probability of overlapping interactions per bunch crossing (pile-up). With this in mind it was necessary to design an ATLAS tau trigger system that could reduce the event rate to a manageable level, while efficiently extracting the most interesting physics events in a pile-up robust manner. In this poster the ATLAS tau trigger is described, its performance during 2012 is presented, and the outlook for the LHC Run II is briefly summarized.

  17. Hypoxia triggers high-altitude headache with migraine features: A prospective trial.

    Science.gov (United States)

    Broessner, Gregor; Rohregger, Johanna; Wille, Maria; Lackner, Peter; Ndayisaba, Jean-Pierre; Burtscher, Martin

    2016-07-01

    Given the high prevalence and clinical impact of high-altitude headache (HAH), a better understanding of risk factors and headache characteristics may give new insights into the understanding of hypoxia being a trigger for HAH or even migraine attacks. In this prospective trial, we simulated high altitude (4500 m) by controlled normobaric hypoxia (FiO2 = 12.6%) to investigate acute mountain sickness (AMS) and headache characteristics. Clinical symptoms of AMS according to the Lake Louise Scoring system (LLS) were recorded before and after six and 12 hours in hypoxia. O2 saturation was measured using pulse oximetry at the respective time points. History of primary headache, especially episodic or chronic migraine, was a strict exclusion criterion. In total 77 volunteers (43 (55.8%) males, 34 (44.2%) females) were enrolled in this study. Sixty-three (81.18%) and 40 (71.4%) participants developed headache at six or 12 hours, respectively, with height and SpO2 being significantly different between headache groups at six hours (p < …) and SpO2 related to headache development (p < …). Headache fulfilled the criteria for migraine according to the International Classification of Headache Disorders (ICHD-3 beta) in n = 5 (8%) or n = 6 (15%), at six and 12 hours, respectively. Normobaric hypoxia is a trigger for HAH and migraine-like headache attacks even in healthy volunteers without any history of migraine. Our study confirms the pivotal role of hypoxia in the development of AMS and, beyond that, suggests hypoxia may be involved in migraine pathophysiology. © International Headache Society 2015.

  18. The design and performance of the ATLAS jet trigger

    International Nuclear Information System (INIS)

    Shimizu, Shima

    2014-01-01

    The ATLAS jet trigger is an important element of the event selection process, providing data samples for studies of Standard Model physics and searches for new physics at the LHC. The ATLAS jet trigger system has undergone substantial modifications over the past few years of LHC operations, as experience developed with triggering in a high luminosity and high event pileup environment. In particular, the region-of-interest based strategy has been replaced by a full scan of the calorimeter data at the third trigger level, and by a full scan of the level-1 trigger input at level-2 for some specific trigger chains. Hadronic calibration and cleaning techniques are applied in order to provide improved performance and increased stability in high luminosity data taking conditions. In this note we discuss the implementation and operational aspects of the ATLAS jet trigger during 2011 and 2012 data taking periods at the LHC.

  19. Performance and reliability of TPE-2 device with pulsed high power source

    International Nuclear Information System (INIS)

    Sato, Y.; Takeda, S.; Kiyama, S.

    1987-01-01

    The performance and reliability of the TPE-2 device with pulsed high-power sources are described. To obtain a stable high-beta plasma, the reproducibility and reliability of the pulsed power sources must be maintained. A new power crowbar system with high efficiency and switches with low jitter time have been adopted in the bank system. A monitor system that continuously watches the operational states of the switches has also been developed and applied to the fast-rising capacitor banks of the TPE-2 device. Reliable operation of the banks has been achieved, based on the data from the switch monitor system

  20. TRIGGER

    CERN Multimedia

    W. Smith

    Level-1 Trigger Hardware and Software The road map for the final commissioning of the level-1 trigger system has been set. The software for the trigger subsystems is being upgraded to run under CERN Scientific Linux 4 (SLC4). There is also a new release for the Trigger Supervisor (TS 1.4), which implies upgrade work by the subsystems. As reported by the CERN group, a campaign to tidy the Trigger Timing and Control (TTC) racks has begun. The machine interface was upgraded by installing the new RF2TTC module, which receives RF signals from LHC Point 4. Two Beam Synchronous Timing (BST) signals, one for each beam, can now be received in CMS. The machine group will define the exact format of the information content shortly. The margin on the locking range of the CMS QPLL is planned for study for different subsystems in the next Global Runs, using a function generator. The TTC software has been successfully tested on SLC4. Some TTC subsystems have already been upgraded to SLC4. The TTCci Trigger Supervisor ...

  1. A new high speed, Ultrascale+ based board for the ATLAS jet calorimeter trigger system

    CERN Document Server

    Rocco, Elena; The ATLAS collaboration

    2018-01-01

    To cope with the enhanced luminosity at the Large Hadron Collider (LHC) in 2021, the ATLAS collaboration is planning a major detector upgrade. As a part of this, the Level-1 trigger based on calorimeter data will be upgraded to exploit the fine-granularity readout using a new system of Feature EXtractors (FEX), each of which reconstructs different physics objects for the trigger selection. The jet FEX (jFEX) system is conceived to provide jet identification (including large-area jets) and measurements of global variables within a latency budget of less than 400 ns. It consists of 6 modules. A single jFEX module is an ATCA board with 4 large FPGAs of the Xilinx Ultrascale+ family that can digest a total input data rate of ~3.6 Tb/s using up to 120 Multi Gigabit Transceivers (MGT), 24 electrical-optical devices, and board control and power on the mezzanines to allow flexibility in upgrading control functions and components without aff...

  2. Highly reliable TOFD UT Technique

    International Nuclear Information System (INIS)

    Acharya, G.D.; Trivedi, S.A.R.; Pai, K.B.

    2003-01-01

    The high performance of the time-of-flight diffraction (TOFD) technique with regard to the detection of weld defects such as cracks, slag and lack of fusion has led to a rapidly increasing acceptance of the technique as a pre-service inspection tool. Since the early 1990s TOFD has been applied to several projects, where it replaced the commonly used radiographic testing. The use of TOFD led to major time savings during new-build and replacement projects. At the same time the TOFD technique was used for baseline inspection, which enables future monitoring of critical welds, but also provides documented evidence over the lifetime. The TOFD technique has the ability to detect and simultaneously size flaws of nearly any orientation within the weld and heat-affected zone. TOFD is recognized as a reliable, proven technique for the detection and sizing of defects and has proven to be a time saver, resulting in shorter shutdown periods and construction project times. Thus, even in cases where the inspection price of TOFD per weld is higher, in the end it will result in significantly lower overall costs and improved quality. This paper deals with reliability, economy, acceptance criteria and field experience. It also covers a comparative study between the radiography technique and TOFD. (Author)

  3. Headache triggers in the US military.

    Science.gov (United States)

    Theeler, Brett J; Kenney, Kimbra; Prokhorenko, Olga A; Fideli, Ulgen S; Campbell, William; Erickson, Jay C

    2010-05-01

    Headaches can be triggered by a variety of factors. Military service members have a high prevalence of headache but the factors triggering headaches in military troops have not been identified. The objective of this study is to determine headache triggers in soldiers and military beneficiaries seeking specialty care for headaches. A total of 172 consecutive US Army soldiers and military dependents (civilians) evaluated at the headache clinics of 2 US Army Medical Centers completed a standardized questionnaire about their headache triggers. A total of 150 (87%) patients were active-duty military members and 22 (13%) patients were civilians. In total, 77% of subjects had migraine; 89% of patients reported at least one headache trigger with a mean of 8.3 triggers per patient. A wide variety of headache triggers was seen with the most common categories being environmental factors (74%), stress (67%), consumption-related factors (60%), and fatigue-related factors (57%). The types of headache triggers identified in active-duty service members were similar to those seen in civilians. Stress-related triggers were significantly more common in soldiers. There were no significant differences in trigger types between soldiers with and without a history of head trauma. Headaches in military service members are triggered mostly by the same factors as in civilians with stress being the most common trigger. Knowledge of headache triggers may be useful for developing strategies that reduce headache occurrence in the military.

  4. Semi-intelligent trigger-generation scheme for Cherenkov light imaging cameras

    International Nuclear Information System (INIS)

    Bhat, C.L.; Tickoo, A.K.; Koul, R.; Kaul, I.K.

    1994-01-01

    We propose here an improved trigger-generation scheme for TeV gamma-ray imaging telescopes. Based on a memory-based Majority Coincidence Circuit, this scheme involves deriving two-or three-pixel nearest-neighbour coincidences as against the conventional approach of generating prompt coincidences using any two photomultiplier detector pixels of an imaging-camera. As such, the new method can discriminate better against shot-noise-generated triggers and, to a significant extent, also against cosmic-ray and local-muon-generated background events, without compromising on the telescope response to events of γ-ray origin. An optional feature of the proposed scheme is that a suitably scaled-up value of the chance-trigger rate can be independently derived, thereby making it possible to use this parameter reliably for keeping a log of the ''health'' of the experimental system. (orig.)

  5. Semi-intelligent trigger-generation scheme for Cherenkov light imaging cameras

    Science.gov (United States)

    Bhat, C. L.; Tickoo, A. K.; Koul, R.; Kaul, I. K.

    1994-02-01

    We propose here an improved trigger-generation scheme for TeV gamma-ray imaging telescopes. Based on a memory-based Majority Coincidence Circuit, this scheme involves deriving two- or three-pixel nearest-neighbour coincidences as against the conventional approach of generating prompt coincidences using any two photomultiplier detector pixels of an imaging-camera. As such, the new method can discriminate better against shot-noise-generated triggers and, to a significant extent, also against cosmic-ray and local-muon-generated background events, without compromising on the telescope response to events of γ-ray origin. An optional feature of the proposed scheme is that a suitably scaled-up value of the chance-trigger rate can be independently derived, thereby making it possible to use this parameter reliably for keeping a log of the ``health'' of the experimental system.
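    The nearest-neighbour coincidence idea can be sketched in software. This toy model (the grid layout and function names are illustrative, not the memory-based Majority Coincidence Circuit itself) shows why adjacency rejects isolated shot-noise pixels that a plain two-pixel coincidence accepts:

```python
def any_two_pixel_coincidence(hits):
    """Conventional trigger: any two pixels above threshold anywhere in the camera."""
    return sum(v for row in hits for v in row) >= 2

def nearest_neighbour_coincidence(hits):
    """Improved trigger: two hit pixels must be adjacent (including diagonals)."""
    n, m = len(hits), len(hits[0])
    for i in range(n):
        for j in range(m):
            if not hits[i][j]:
                continue
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if (di or dj) and 0 <= i + di < n and 0 <= j + dj < m \
                            and hits[i + di][j + dj]:
                        return True
    return False

# A compact gamma-ray-like image fires both triggers; two isolated
# noise pixels only fire the conventional one.
compact = [[0, 0, 0], [0, 1, 1], [0, 0, 0]]
noise   = [[1, 0, 0], [0, 0, 0], [0, 0, 1]]
```

The compact image satisfies both conditions, while the two isolated noise pixels pass only the conventional two-pixel requirement.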

  6. The Berg Balance Scale has high intra- and inter-rater reliability but absolute reliability varies across the scale: a systematic review.

    Science.gov (United States)

    Downs, Stephen; Marquez, Jodie; Chiarelli, Pauline

    2013-06-01

    What is the intra-rater and inter-rater relative reliability of the Berg Balance Scale? What is the absolute reliability of the Berg Balance Scale? Does the absolute reliability of the Berg Balance Scale vary across the scale? Systematic review with meta-analysis of reliability studies. Any clinical population that has undergone assessment with the Berg Balance Scale. Relative intra-rater reliability, relative inter-rater reliability, and absolute reliability. Eleven studies involving 668 participants were included in the review. The relative intra-rater reliability of the Berg Balance Scale was high, with a pooled estimate of 0.98 (95% CI 0.97 to 0.99). Relative inter-rater reliability was also high, with a pooled estimate of 0.97 (95% CI 0.96 to 0.98). A ceiling effect of the Berg Balance Scale was evident for some participants. In the analysis of absolute reliability, all of the relevant studies had an average score of 20 or above on the 0 to 56 point Berg Balance Scale. The absolute reliability across this part of the scale, as measured by the minimal detectable change with 95% confidence, varied between 2.8 points and 6.6 points. The Berg Balance Scale has a higher absolute reliability when close to 56 points due to the ceiling effect. We identified no data that estimated the absolute reliability of the Berg Balance Scale among participants with a mean score below 20 out of 56. The Berg Balance Scale has acceptable reliability, although it might not detect modest, clinically important changes in balance in individual subjects. The review was only able to comment on the absolute reliability of the Berg Balance Scale among people with moderately poor to normal balance. Copyright © 2013 Australian Physiotherapy Association. All rights reserved.
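    The minimal detectable change figures quoted above are conventionally derived from the standard error of measurement, MDC95 = 1.96 · √2 · SEM with SEM = SD · √(1 − ICC). A sketch with partly illustrative inputs (the ICC of 0.97 matches the pooled inter-rater estimate above, but the SD of 6 points is an assumed value, not taken from the review):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the sample SD and reliability (ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence."""
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

# Assumed SD of 6 points, pooled inter-rater ICC of 0.97 as reported above.
print(round(mdc95(6.0, 0.97), 1))  # → 2.9
```

With these inputs the result falls near the lower end of the 2.8 to 6.6 point range reported in the review.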

  7. Electrons and photons at High Level Trigger in CMS for Run II

    CERN Document Server

    Bin Anuar, Afiq Aizuddin

    2015-01-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increase in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. New approaches have been studied to keep the HLT output rate manageable while maintaining thresholds low enough to cover physics analyses. The strategy mainly relies on porting online the ingredients that have been successfully applied in the offline reconstruction, thus allowing the HLT selection to move closer to the offline cuts. Improvements in the HLT electron and photon definitions will be presented, focusing in particular on the updated clustering algorithm and energy calibration procedure, the new Particle-Flow-based isolation approach and pileup mitigation techniques, a...

  8. Seeking high reliability in primary care: Leadership, tools, and organization.

    Science.gov (United States)

    Weaver, Robert R

    2015-01-01

    Leaders in health care increasingly recognize that improving health care quality and safety requires developing an organizational culture that fosters high reliability and continuous process improvement. For various reasons, a reliability-seeking culture is lacking in most health care settings. Developing a reliability-seeking culture requires leaders' sustained commitment to reliability principles using key mechanisms to embed those principles widely in the organization. The aim of this study was to examine how key mechanisms used by a primary care practice (PCP) might foster a reliability-seeking, system-oriented organizational culture. A case study approach was used to investigate the PCP's reliability culture. The study examined four cultural artifacts used to embed reliability-seeking principles across the organization: leadership statements, decision support tools, and two organizational processes. To decipher their effects on reliability, the study relied on observations of work patterns and the tools' use, interactions during morning huddles and process improvement meetings, interviews with clinical and office staff, and a "collective mindfulness" questionnaire. The five reliability principles framed the data analysis. Leadership statements articulated principles that oriented the PCP toward a reliability-seeking culture of care. Reliability principles became embedded in the everyday discourse and actions through the use of "problem knowledge coupler" decision support tools and daily "huddles." Practitioners and staff were encouraged to report unexpected events or close calls that arose and which often initiated a formal "process change" used to adjust routines and prevent adverse events from recurring. Activities that foster reliable patient care became part of the taken-for-granted routine at the PCP. The analysis illustrates the role leadership, tools, and organizational processes play in developing and embedding a reliability-seeking culture across an

  9. The Run-2 ATLAS Trigger System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00222798; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009 and 2013 at different centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in roughly five times higher trigger rates. A brief review of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing the system to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest, will be given. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. A ...

  10. The Database Driven ATLAS Trigger Configuration System

    CERN Document Server

    Martyniuk, Alex; The ATLAS collaboration

    2015-01-01

    This contribution describes the trigger selection configuration system of the ATLAS low- and high-level trigger (HLT) and the upgrades it received in preparation for LHC Run 2. The ATLAS trigger configuration system is responsible for applying the physics selection parameters for the online data taking at both trigger levels and the proper connection of the trigger lines across those levels. Here the low-level trigger consists of the already existing central trigger (CT) and the new Level-1 Topological trigger (L1Topo), which has been added for Run 2. In detail, the tasks of the configuration system during online data taking are: application of the selection criteria, e.g. energy cuts, minimum multiplicities and trigger-object correlations, at the three trigger components L1Topo, CT, and HLT; on-the-fly (e.g. rate-dependent) generation and application of prescale factors to the CT and HLT to adjust the trigger rates to the data-taking conditions, such as falling luminosity or rate spikes in the detector readout ...

  11. Metrological Reliability of Medical Devices

    Science.gov (United States)

    Costa Monteiro, E.; Leon, L. F.

    2015-02-01

    The prominent development of health technologies of the 20th century triggered demands for metrological reliability of physiological measurements comprising physical, chemical and biological quantities, essential to ensure accurate and comparable results of clinical measurements. In the present work, aspects concerning metrological reliability in premarket and postmarket assessments of medical devices are discussed, pointing out challenges to be overcome. In addition, considering the social relevance of the biomeasurements results, Biometrological Principles to be pursued by research and innovation aimed at biomedical applications are proposed, along with the analysis of their contributions to guarantee the innovative health technologies compliance with the main ethical pillars of Bioethics.

  12. Improving patient safety: patient-focused, high-reliability team training.

    Science.gov (United States)

    McKeon, Leslie M; Cunningham, Patricia D; Oswaks, Jill S Detty

    2009-01-01

    Healthcare systems are recognizing "human factor" flaws that result in adverse outcomes. Nurses work around system failures, although increasing healthcare complexity makes this harder to do without risk of error. Aviation and military organizations achieve ultrasafe outcomes through high-reliability practice. We describe how reliability principles were used to teach nurses to improve patient safety at the front line of care. Outcomes include safety-oriented, teamwork communication competency; reflections on safety culture and clinical leadership are discussed.

  13. A High Reliability Frequency Stabilized Semiconductor Laser Source, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Ultrastable, narrow-linewidth, high-reliability MOPA sources are needed for high-performance LIDARs at NASA for wind speed measurement, surface topography and earth...

  14. The D-Zero Run II Trigger

    International Nuclear Information System (INIS)

    Blazey, G. C.

    1997-01-01

    The general purpose D0 collider detector, located at Fermi National Accelerator Laboratory, requires significantly enhanced data acquisition and triggering to operate in the high luminosity ($L = 2 \times 10^{32}\ \mathrm{cm^{-2}s^{-1}}$), high rate environment (7 MHz or 132 ns beam crossings) of the upgraded Tevatron proton-antiproton accelerator. This article describes the three major levels and frameworks of the new trigger. Information from the first trigger stage (L1), which includes scintillating, tracking and calorimeter detectors, will provide a deadtimeless 4.2 μs trigger decision with an accept rate of 10 kHz. The second stage (L2), comprised of hardware engines associated with specific detectors and a single global processor, will test for correlations between L1 triggers. L2 will have an accept rate of 1 kHz at a maximum deadtime of 5% and require a 100 μs decision time. The third and final stage (L3) will reconstruct events in a farm of processors for a final instantaneous accept rate of 50 Hz
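    As a back-of-envelope check, the rates quoted above imply the following per-stage rejection factors (a sketch; the stage names and rates are taken directly from the abstract):

```python
# Rejection factor at each trigger stage = input rate / output rate.
stages = [
    ("beam crossings", 7_000_000.0),  # 7 MHz input
    ("L1 accept",          10_000.0), # 10 kHz
    ("L2 accept",           1_000.0), # 1 kHz
    ("L3 accept",              50.0), # 50 Hz to storage
]
for (name_in, r_in), (name_out, r_out) in zip(stages, stages[1:]):
    print(f"{name_in} -> {name_out}: rejection factor {r_in / r_out:g}")
```

The three stages reject by factors of 700, 10 and 20 respectively, for an overall reduction of 140,000.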

  15. The CMS trigger in Run 2

    CERN Document Server

    Tosi, Mia

    2018-01-01

    During its second period of operation (Run 2), which started in 2015, the LHC will reach a peak instantaneous luminosity of approximately $2 \times 10^{34}$ cm$^{-2}$s$^{-1}$ with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realised by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has undergone a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT went through big improvements; in particular, new ap...

  16. The Advanced Gamma-ray Imaging System (AGIS): Topological Array Trigger

    Science.gov (United States)

    Smith, Andrew W.

    2010-03-01

    AGIS is a concept for the next-generation ground-based gamma-ray observatory. It will be an array of 36 imaging atmospheric Cherenkov telescopes (IACTs) sensitive in the energy range from 50 GeV to 200 TeV. The required improvements in sensitivity, angular resolution, and reliability of operation relative to the present-generation instruments impose demanding technological and cost requirements on the design of the telescopes and on the triggering and readout systems for AGIS. To maximize the capabilities of large arrays of IACTs with a low energy threshold, a wide field of view and a low background rate, a sophisticated array trigger is required. We outline the status of the development of a stereoscopic array trigger that calculates image parameters and correlates them across a subset of telescopes. Field Programmable Gate Arrays (FPGAs) implement the real-time pattern recognition to suppress cosmic rays and night-sky background events. A proof-of-principle system is being developed to run at camera trigger rates up to 10 MHz and array-level rates up to 10 kHz.
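    The stereoscopic coincidence step can be illustrated with a minimal software analogue. The function name, multiplicity and window values below are illustrative assumptions, not AGIS design parameters:

```python
# Toy array-level coincidence: accept an event when at least `multiplicity`
# telescope triggers fall inside a sliding time `window` (in seconds).
def array_trigger(times, multiplicity=2, window=50e-9):
    ts = sorted(times)
    for i in range(len(ts) - multiplicity + 1):
        # Compare the earliest and latest timestamps of each candidate group.
        if ts[i + multiplicity - 1] - ts[i] <= window:
            return True
    return False

array_trigger([0.0, 10e-9, 1e-3])  # two telescopes within 10 ns -> accepted
```

Isolated night-sky background triggers, scattered in time, fail the window requirement, while a genuine air shower illuminates several telescopes nearly simultaneously.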

  17. On the design of high-rise buildings with a specified level of reliability

    Science.gov (United States)

    Dolganov, Andrey; Kagan, Pavel

    2018-03-01

    High-rise buildings have a specificity that significantly distinguishes them from traditional multi-storey buildings. Steel structures are advisable in high-rise buildings in earthquake-prone regions, since steel, due to its plasticity, provides damping of the kinetic energy of seismic impacts. These aspects should be taken into account when choosing the structural scheme of a high-rise building and designing its load-bearing structures. Currently, modern regulatory documents do not quantify the reliability of structures, although the problem of assigning an optimal level of reliability has existed for a long time. The article shows the possibility of designing metal structures of high-rise buildings with a specified reliability. It is proposed to establish a reliability value of 0.99865 (3σ) for structures of buildings of a normal level of responsibility in calculations for the first group of limit states. For increased (high-rise construction) and reduced levels of responsibility for the provision of load-bearing capacity, it is proposed to assign 0.99997 (4σ) and 0.97725 (2σ), respectively. The coefficients of use of the cross-section of a metal beam for different levels of security are given.
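    The quoted reliability levels are the one-sided standard normal probabilities at 2σ, 3σ and 4σ, which can be checked directly from the normal CDF:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# One-sided non-exceedance probabilities behind the quoted reliability levels.
for sigma in (2, 3, 4):
    print(f"{sigma} sigma -> {phi(sigma):.5f}")  # 0.97725, 0.99865, 0.99997
```

Rounded to five decimals these reproduce the 0.97725, 0.99865 and 0.99997 values assigned to the reduced, normal and increased responsibility levels.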

  18. Geometrical Acceptance Analysis for RPC PAC Trigger

    CERN Document Server

    Seo, Eunsung

    2010-01-01

    The CMS (Compact Muon Solenoid) is one of the four experiments that will analyze the collisions of protons accelerated by the Large Hadron Collider (LHC) at CERN (Conseil Européen pour la Recherche Nucléaire). In the CMS experiment, the trigger system is divided into two stages: the Level-1 Trigger and the High Level Trigger. The RPC (Resistive Plate Chamber) PAC (PAttern Comparator) Trigger system, which is the subject of this thesis, is part of the Level-1 Muon Trigger System. The main task of the PAC Trigger is to identify muons, measure their transverse momenta and select the best muon candidates for each proton bunch collision occurring every 25 ns. To calculate the PAC Trigger efficiency for triggerable muons, two different efficiencies are needed: the acceptance efficiency and the chamber efficiency. The main goal of the work described in this thesis is to obtain the acceptance efficiency of the PAC Trigger in each logical cone. The acceptance efficiency is a convolution of the chamber geometry an...

  19. A First-Level Muon Trigger Based on the ATLAS Muon Drift Tube Chambers With High Momentum Resolution for LHC Phase II

    CERN Document Server

    Richter, R; The ATLAS collaboration; Ott, S; Kortner, O; Fras, M; Gabrielyan, V; Danielyan, V; Fink, D; Nowak, S; Schwegler, P; Abovyan, S

    2014-01-01

    The Level-1 (L1) trigger for muons with high transverse momentum (pT) in ATLAS is based on chambers with excellent time resolution, able to identify muons coming from a particular beam crossing. These trigger chambers also provide a fast pT-measurement of the muons, the accuracy of the measurement being limited by the moderate spatial resolution of the chambers along the deflecting direction of the magnetic field (eta-coordinate). The higher luminosity foreseen for Phase-II puts stringent limits on the L1 trigger rates, and a way to control these rates would be to improve the spatial resolution of the triggering system, drastically sharpening the turn-on curve of the L1 trigger. To do this, the precision tracking chambers (MDT) can be used in the L1 trigger, provided the corresponding trigger latency is increased as foreseen. The trigger rate reduction is accomplished by strongly decreasing the rate of triggers from muons with pT lower than a predefined threshold (typically 20 GeV), which would otherwise trig...

  20. Note: Triggering behavior of a vacuum arc plasma source

    Energy Technology Data Exchange (ETDEWEB)

    Lan, C. H., E-mail: lanchaohui@163.com; Long, J. D.; Zheng, L.; Dong, P.; Yang, Z.; Li, J.; Wang, T.; He, J. L. [Institute of Fluid Physics, China Academy of Engineering Physics, Mianyang 621900 (China)

    2016-08-15

    Axial symmetry of discharge is very important for application of vacuum arc plasma. It is discovered that the triggering method is a significant factor that would influence the symmetry of arc discharge at the final stable stage. Using high-speed multiframe photography, the transition processes from cathode-trigger discharge to cathode-anode discharge were observed. It is shown that the performances of the two triggering methods investigated are quite different. Arc discharge triggered by independent electric source can be stabilized at the center of anode grid, but it is difficult to achieve such good symmetry through resistance triggering. It is also found that the triggering process is highly correlated to the behavior of emitted electrons.

  1. Reliability-based design optimization via high order response surface method

    International Nuclear Information System (INIS)

    Li, Hong Shuang

    2013-01-01

    To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables connected with point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling based reliability sensitivity analysis method is employed to reduce the computational effort further when design variables are distributional parameters of input random variables. The proposed methodology is applied on two test problems to validate its accuracy and efficiency. The proposed methodology is more efficient than first order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.
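    A much-simplified sketch of the surrogate idea behind response-surface-based reliability analysis (one standard normal input, no cross terms; the limit-state function is invented for illustration, and this is not the paper's HORSM itself): supporting points come from Gauss-Hermite quadrature nodes, a high-order polynomial is fitted through them, and the failure probability is then estimated by cheap Monte Carlo on the surrogate instead of the expensive model.

```python
import numpy as np

def g(x):
    """'Expensive' true limit-state function; failure occurs when g < 0."""
    return 3.0 - x**3 / 5.0 - x

# 1. Supporting points from Gauss-Hermite quadrature nodes (scaled by sqrt(2)
#    so they correspond to a standard normal random variable).
nodes, _ = np.polynomial.hermite.hermgauss(5)
x_support = nodes * np.sqrt(2.0)

# 2. Fit a high-order polynomial response surface through the supporting points.
surrogate = np.poly1d(np.polyfit(x_support, g(x_support), deg=4))

# 3. Estimate the failure probability by Monte Carlo on the cheap surrogate.
samples = np.random.default_rng(0).standard_normal(200_000)
pf = float(np.mean(surrogate(samples) < 0.0))
print(f"estimated failure probability: {pf:.4f}")
```

Because the invented limit state is itself a cubic, the degree-4 fit through five nodes reproduces it essentially exactly; for a genuinely expensive model the surrogate only approximates it, which is where the choice of polynomial order and supporting points matters.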

  2. The DØ Silicon Track Trigger

    International Nuclear Information System (INIS)

    Steinbrueck, Georg

    2003-01-01

    We describe a trigger preprocessor to be used by the DØ experiment for selecting events with tracks from the decay of long-lived particles. This Level 2 impact parameter trigger utilizes information from the Silicon Microstrip Tracker to reconstruct tracks with improved spatial and momentum resolutions compared to those obtained by the Level 1 tracking trigger. It is constructed of VME boards with much of the logic existing in programmable processors. A common motherboard provides the I/O infrastructure and three different daughter boards perform the tasks of identifying the roads from the tracking trigger data, finding the clusters in the roads in the silicon detector, and fitting tracks to the clusters. This approach provides flexibility for the design, testing and maintenance phases of the project. The track parameters are provided to the trigger framework in 25 μs. The effective impact parameter resolution for high-momentum tracks is 35 μm, dominated by the size of the Tevatron beam

  3. Vertex trigger implementation using shared memory technology

    CERN Document Server

    Müller, H

    1998-01-01

    The implementation of a first-level vertex trigger for LHC-B is particularly difficult due to the high (1 MHz) input data rate. With ca. 350 silicon hits per event, both the R strips and Phi strips of the detectors produce a total of ca. 2 Gbyte/s of zero-suppressed data. This note follows up the idea of using R-Phi coordinates for fast integer line-finding in programmable hardware, as described in LHB note 97-006. For an implementation we propose an FPGA preprocessing stage operating at 1 MHz, with the benefit of substantially reducing the amount of data to be transmitted to the CPUs and of freeing a large fraction of CPU time. Interconnected via 4 Gbit/s SCI technology, a shared memory system can be built which allows data-driven eventbuilding to be performed with, or without, preprocessing. A fully data-driven architecture between source modules and destination memories provides a highly reliable memory-to-memory transfer mechanism of very low latency. The eventbuilding is performed via associating events at the sourc...
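    The data-driven eventbuilding step can be sketched as a toy model, in which an in-memory dictionary stands in for the shared destination memory and all names are illustrative:

```python
# Fragments tagged with an event number arrive from independent sources;
# an event is complete once every source has contributed a fragment.
def build_events(fragments, n_sources):
    pending, complete = {}, []
    for event_id, source_id, payload in fragments:
        parts = pending.setdefault(event_id, {})
        parts[source_id] = payload
        if len(parts) == n_sources:  # all sources seen: event is fully built
            complete.append((event_id, pending.pop(event_id)))
    return complete

# Fragments from two sources (e.g. R strips and Phi strips) arrive interleaved.
frags = [(7, 0, "R-hits"), (8, 0, "R-hits"), (7, 1, "Phi-hits"), (8, 1, "Phi-hits")]
events = build_events(frags, n_sources=2)  # events complete in arrival order
```

Because the association is keyed on the event number rather than on arrival order, fragments from different sources may interleave freely, which is the essence of a data-driven (rather than scheduled) eventbuilder.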

  4. Performance of the ATLAS Muon Trigger in Run 2

    CERN Document Server

    Morgenstern, Marcus; The ATLAS collaboration

    2018-01-01

    Events containing muons in the final state are an important signature for many analyses being carried out at the Large Hadron Collider (LHC), including both standard model measurements and searches for new physics. To be able to study such events, it is required to have an efficient and well-understood muon trigger. The ATLAS muon trigger consists of a hardware-based system (Level 1), as well as a software-based reconstruction (High Level Trigger). Due to the high luminosity and pile-up conditions in Run 2, several improvements have been implemented to keep the trigger rate low while still maintaining a high efficiency. Some examples of recent improvements include requiring coincidence hits between different layers of the muon spectrometer, improvements for handling overlapping muons, and optimised muon isolation. We will present an overview of how we trigger on muons, recent improvements, and the performance of the muon trigger in Run 2 data.

  5. Four-channel high speed synchronized acquisition multiple trigger storage measurement system

    International Nuclear Information System (INIS)

    Guo Jian; Wang Wenlian; Zhang Zhijie

    2010-01-01

    A new storage measurement system based on a CPLD, an MCU and FLASH (large-capacity flash memory) is proposed. The large storage capacity of the flash memory is used to realize multi-channel synchronized acquisition and a record-many-times, read-once mode of operation. The combination of multi-channel synchronization, high-speed data acquisition, multiple triggering, and adjustable working parameters expands the application range of the storage measurement system. The system can be used in a variety of pressure and temperature tests in explosion fields. (authors)
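The record-many-times, read-once behaviour can be pictured as a pretrigger ring buffer that, on each trigger, is frozen together with a fixed post-trigger window into a separate record. A minimal software sketch (the class name and window sizes are assumptions, not details of the instrument):

```python
from collections import deque

class TriggeredRecorder:
    """Toy model of multiple-trigger storage: samples stream through a
    pretrigger ring buffer; each trigger freezes the buffer plus a fixed
    number of post-trigger samples into a separate record (written once,
    read back after the test)."""
    def __init__(self, pretrigger=4, posttrigger=4):
        self.ring = deque(maxlen=pretrigger)
        self.posttrigger = posttrigger
        self.records = []
        self._remaining = 0

    def feed(self, sample, trigger=False):
        if self._remaining:              # still filling the active record
            self.records[-1].append(sample)
            self._remaining -= 1
        elif trigger:                    # start a new record: ring + sample
            self.records.append(list(self.ring) + [sample])
            self._remaining = self.posttrigger
        self.ring.append(sample)

rec = TriggeredRecorder(pretrigger=4, posttrigger=2)
for i, v in enumerate(range(100, 110)):
    rec.feed(v, trigger=(i == 5))
print(rec.records)  # one record: 4 pretrigger samples, trigger sample, 2 more
```

Triggers arriving while a record is still being filled are ignored here, which mimics the dead time between successive capture windows.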

  6. TRIGGER

    CERN Multimedia

    by Wesley Smith

    2011-01-01

    Level-1 Trigger Hardware and Software After the winter shutdown minor hardware problems in several subsystems appeared and were corrected. A reassessment of the overall latency has been made. In the TTC system shorter cables between TTCci and TTCex have been installed, which saved one bunch crossing, but which may have required an adjustment of the RPC timing. In order to tackle Pixel out-of-syncs without influencing other subsystems, a special hardware/firmware re-sync protocol has been introduced in the Global Trigger. The link between the Global Calorimeter Trigger and the Global Trigger with the new optical Global Trigger Interface and optical receiver daughterboards has been successfully tested in the Electronics Integration Centre in building 904. New firmware in the GCT now allows a setting to remove the HF towers from energy sums. The HF sleeves have been replaced, which should lead to reduced rates of anomalous signals, which may allow their inclusion after this is validated. For ECAL, improvements i...

  7. A trigger simulation framework for the ALICE experiment

    International Nuclear Information System (INIS)

    Antinori, F; Carminati, F; Gheata, A; Gheata, M

    2011-01-01

    A realistic simulation of the trigger system in a complex HEP experiment is essential for performing detailed trigger efficiency studies. The ALICE trigger simulation is evolving towards a framework capable of replaying the full trigger chain starting from the input to the individual trigger processors and ending with the decision mechanisms of the ALICE central trigger processor. This paper describes the new ALICE trigger simulation framework that is being tested and deployed. The framework handles details like trigger levels, signal delays and busy signals, implementing the trigger logic via customizable trigger device objects managed by a robust scheduling mechanism. A big advantage is the high flexibility of the framework, which is able to mix together components described with very different levels of detail. The framework is being gradually integrated within the ALICE simulation and reconstruction frameworks.

  8. Highly-reliable laser diodes and modules for spaceborne applications

    Science.gov (United States)

    Deichsel, E.

    2017-11-01

    Laser applications are becoming more and more interesting for contemporary missions such as earth observation or optical communication in space. One of these applications is light detection and ranging (LIDAR), which holds great scientific potential for future missions. The Nd:YAG solid-state laser of such a LIDAR system is optically pumped using 808 nm emitting pump sources based on semiconductor laser diodes in quasi-continuous-wave (qcw) operation. Reliable and efficient laser diodes with increased output powers are therefore an important requirement for a spaceborne LIDAR system. In the past, many tests were performed regarding the performance and lifetime of such laser diodes. There have also been studies for spaceborne applications, but a test with long operation times at high powers and statistical relevance is pending. Other applications, such as science packages (e.g. Raman spectroscopy) on planetary rovers, also require reliable high-power light sources. Typically, fiber-coupled laser diode modules are used for such applications. Besides high reliability and lifetime, designs compatible with the harsh environmental conditions must be taken into account. Mechanical loads such as shock or strong vibration are expected during take-off or landing procedures. Many temperature cycles with high change rates and large temperature differences must be taken into account due to sun-shadow effects in planetary orbits. Cosmic radiation has a strong impact on optical components and must also be taken into account. Finally, hermetic sealing must be considered, since vacuum can have disadvantageous effects on optoelectronic components.

  9. Utilizing leadership to achieve high reliability in the delivery of perinatal care

    Directory of Open Access Journals (Sweden)

    Parrotta C

    2012-11-01

    Full Text Available Carmen Parrotta,1 William Riley,1 Les Meredith2 1School of Public Health, University of Minnesota, Minneapolis, MN, USA; 2Premier Insurance Management Services Inc, Charlotte, NC, USA Abstract: Highly reliable care requires standardization of clinical practices and is a prerequisite for patient safety. However, standardization in complex hospital settings is extremely difficult to attain, and health care leaders are challenged to create care delivery processes that ensure patient safety. Moreover, once high reliability is achieved in a hospital unit, it must be maintained to avoid process deterioration. This case study examines an intervention to implement care bundles (a collection of evidence-based practices) in four hospitals to achieve standardized care in perinatal units. The results show different patterns in the rate and magnitude of change within the hospitals to achieve high reliability. The study is part of a larger nationwide study of 16 hospitals to improve perinatal safety. Based on the findings, we discuss the role of leadership in implementing and sustaining high reliability to ensure freedom from unintended injury. Keywords: care bundles, evidence-based practice, standardized care, process improvement

  10. To the problem of reliability of high-voltage accelerators for industrial purposes

    International Nuclear Information System (INIS)

    Al'bertinskij, B.I.; Svin'in, M.P.; Tsepakin, S.G.

    1979-01-01

    Statistical data characterizing the reliability of ELECTRON and AVRORA-2 type accelerators are presented. The mean time to failure of the main accelerator units was used as the reliability index. The analysis of accelerator failures allowed a number of conclusions to be drawn. The high failure rate is connected with inadequate training of the servicing personnel and a natural period of equipment adjustment. The mathematical analysis of the failure rate showed that the main responsibility for insufficiently high reliability rests with the selenium diodes employed in the high-voltage power supply. Substituting silicon diodes for the selenium ones increases the time between failures. It is shown that accumulation and processing of operational statistical data will permit more accurate prediction of the reliability of produced high-voltage accelerators, make it possible to plan optimal preventive inspections and repairs, and help select optimal safety factors and test procedures.
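Under the usual constant-failure-rate (exponential) model behind such statistics, the failure rates of units in series add, and the system MTBF is the reciprocal of the total rate, which is why replacing one high-rate component (the selenium diodes) lifts the whole system's time between failures. A sketch with assumed, purely illustrative rates:

```python
def mtbf(failure_rates_per_hour):
    """MTBF of a series system with constant failure rates: the unit
    rates add, and the system MTBF is the reciprocal of the total."""
    return 1.0 / sum(failure_rates_per_hour)

# Hypothetical rates (per hour): rectifier stack + all other units.
selenium_stack = [2e-4, 5e-5]   # assumed values, for illustration only
silicon_stack  = [2e-5, 5e-5]   # same system with a 10x better rectifier
print(round(mtbf(selenium_stack)), round(mtbf(silicon_stack)))  # 4000 14286
```

With these assumed numbers the rectifier swap alone more than triples the system MTBF, because the dominant term in the rate sum shrinks.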

  11. Testing on a Large Scale Running the ATLAS Data Acquisition and High Level Trigger Software on 700 PC Nodes

    CERN Document Server

    Burckhart-Chromek, Doris; Adragna, P; Alexandrov, L; Amorim, A; Armstrong, S; Badescu, E; Baines, J T M; Barros, N; Beck, H P; Bee, C; Blair, R; Bogaerts, J A C; Bold, T; Bosman, M; Caprini, M; Caramarcu, C; Ciobotaru, M; Comune, G; Corso-Radu, A; Cranfield, R; Crone, G; Dawson, J; Della Pietra, M; Di Mattia, A; Dobinson, Robert W; Dobson, M; Dos Anjos, A; Dotti, A; Drake, G; Ellis, Nick; Ermoline, Y; Ertorer, E; Falciano, S; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Garitaonandia, H; Gaudio, G; George, S; Gesualdi-Mello, A; Gorini, B; Green, B; Haas, S; Haberichter, W N; Hadavand, H; Haeberli, C; Haller, J; Hansen, J; Hauser, R; Hillier, S J; Höcker, A; Hughes-Jones, R E; Joos, M; Kazarov, A; Kieft, G; Klous, S; Kohno, T; Kolos, S; Korcyl, K; Kordas, K; Kotov, V; Kugel, A; Landon, M; Lankford, A; Leahu, L; Leahu, M; Lehmann-Miotto, G; Le Vine, M J; Liu, W; Maeno, T; Männer, R; Mapelli, L; Martin, B; Masik, J; McLaren, R; Meessen, C; Meirosu, C; Mineev, M; Misiejuk, A; Morettini, P; Mornacchi, G; Müller, M; Garcia-Murillo, R; Nagasaka, Y; Negri, A; Padilla, C; Pasqualucci, E; Pauly, T; Perera, V; Petersen, J; Pope, B; Albuquerque-Portes, M; Pretzl, K; Prigent, D; Roda, C; Ryabov, Yu; Salvatore, D; Schiavi, C; Schlereth, J L; Scholtes, I; Sole-Segura, E; Seixas, M; Sloper, J; Soloviev, I; Spiwoks, R; Stamen, R; Stancu, S; Strong, S; Sushkov, S; Szymocha, T; Tapprogge, S; Teixeira-Dias, P; Torres, R; Touchard, F; Tremblet, L; Ünel, G; Van Wasen, J; Vandelli, W; Vaz-Gil-Lopes, L; Vermeulen, J C; von der Schmitt, H; Wengler, T; Werner, P; Wheeler, S; Wickens, F; Wiedenmann, W; Wiesmann, M; Wu, X; Yasu, Y; Yu, M; Zema, F; Zobernig, H; Computing In High Energy and Nuclear Physics

    2006-01-01

    The ATLAS Data Acquisition (DAQ) and High Level Trigger (HLT) software system will initially comprise 2000 PC nodes which take part in the control, event readout, second level trigger and event filter operations. This large number of PCs will only be purchased shortly before data taking in 2007. The large CERN IT LXBATCH facility provided the opportunity to run online functionality tests in July 2005 over a period of 5 weeks on a stepwise increasing farm size from 100 up to 700 dual PC nodes. The interplay between the control and monitoring software and the event readout, event building and trigger software has been exercised for the first time as an integrated system on this large scale. Running algorithms in the online environment for the trigger selection and in the event filter processing tasks on a larger scale was also new. A mechanism has been developed to package the offline software together with the DAQ/HLT software and to distribute it efficiently via peer-to-peer software to this large PC cluster. T...

  12. Testing on a Large Scale running the ATLAS Data Acquisition and High Level Trigger Software on 700 PC Nodes

    CERN Document Server

    Burckhart-Chromek, Doris; Adragna, P; Albuquerque-Portes, M; Alexandrov, L; Amorim, A; Armstrong, S; Badescu, E; Baines, J T M; Barros, N; Beck, H P; Bee, C; Blair, R; Bogaerts, J A C; Bold, T; Bosman, M; Caprini, M; Caramarcu, C; Ciobotaru, M; Comune, G; Corso-Radu, A; Cranfield, R; Crone, G; Dawson, J; Della Pietra, M; Di Mattia, A; Dobinson, Robert W; Dobson, M; Dos Anjos, A; Dotti, A; Drake, G; Ellis, Nick; Ermoline, Y; Ertorer, E; Falciano, S; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Garcia-Murillo, R; Garitaonandia, H; Gaudio, G; George, S; Gesualdi-Mello, A; Gorini, B; Green, B; Haas, S; Haberichter, W N; Hadavand, H; Haeberli, C; Haller, J; Hansen, J; Hauser, R; Hillier, S J; Hughes-Jones, R E; Höcker, A; Joos, M; Kazarov, A; Kieft, G; Klous, S; Kohno, T; Kolos, S; Korcyl, K; Kordas, K; Kotov, V; Kugel, A; Landon, M; Lankford, A; Le Vine, M J; Leahu, L; Leahu, M; Lehmann-Miotto, G; Liu, W; Maeno, T; Mapelli, L; Martin, B; Masik, J; McLaren, R; Meessen, C; Meirosu, C; Mineev, M; Misiejuk, A; Morettini, P; Mornacchi, G; Männer, R; Müller, M; Nagasaka, Y; Negri, A; Padilla, C; Pasqualucci, E; Pauly, T; Perera, V; Petersen, J; Pope, B; Pretzl, K; Prigent, D; Roda, C; Ryabov, Yu; Salvatore, D; Schiavi, C; Schlereth, J L; Scholtes, I; Seixas, M; Sloper, J; Sole-Segura, E; Soloviev, I; Spiwoks, R; Stamen, R; Stancu, S; Strong, S; Sushkov, S; Szymocha, T; Tapprogge, S; Teixeira-Dias, P; Torres, R; Touchard, F; Tremblet, L; Van Wasen, J; Vandelli, W; Vaz-Gil-Lopes, L; Vermeulen, J C; Wengler, T; Werner, P; Wheeler, S; Wickens, F; Wiedenmann, W; Wiesmann, M; Wu, X; Yasu, Y; Yu, M; Zema, F; Zobernig, H; von der Schmitt, H; Ünel, G; Computing In High Energy and Nuclear Physics

    2006-01-01

    The ATLAS Data Acquisition (DAQ) and High Level Trigger (HLT) software system will initially comprise 2000 PC nodes which take part in the control, event readout, second level trigger and event filter operations. This large number of PCs will only be purchased shortly before data taking in 2007. The large CERN IT LXBATCH facility provided the opportunity to run online functionality tests in July 2005 over a period of 5 weeks on a stepwise increasing farm size from 100 up to 700 dual PC nodes. The interplay between the control and monitoring software and the event readout, event building and trigger software has been exercised for the first time as an integrated system on this large scale. Running algorithms in the online environment for the trigger selection and in the event filter processing tasks on a larger scale was also new. A mechanism has been developed to package the offline software together with the DAQ/HLT software and to distribute it efficiently via peer-to-peer software to this large PC cluster. T...

  13. Burst mode trigger of STEREO in situ measurements

    Science.gov (United States)

    Jian, L. K.; Russell, C. T.; Luhmann, J. G.; Curtis, D.; Schroeder, P.

    2013-06-01

    Since the launch of the STEREO spacecraft, the in situ instrument suites have continued to modify their burst mode trigger in order to optimize the collection of high-cadence magnetic field, solar wind, and suprathermal electron data. This report reviews the criteria used for the burst mode trigger and their evolution with time. From 2007 to 2011, the twin STEREO spacecraft observed 236 interplanetary shocks, and 54% of them were captured by the burst mode trigger. The capture rate increased remarkably with time, from 30% in 2007 to 69% in 2011. We evaluate the performance of multiple trigger criteria and investigate why some of the shocks were missed by the trigger. Lessons learned from STEREO are useful for future missions, because the telemetry bandwidth needed to capture the waveforms of high frequency but infrequent events would be unaffordable without an effective burst mode trigger.
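A capture-rate figure like the 54% quoted above can be computed by matching each catalogued shock time against the recorded burst-trigger times. A minimal sketch; the ±window matching rule and all numbers are assumptions for illustration, not the STEREO definition:

```python
def capture_rate(shock_times, trigger_times, window=60.0):
    """Fraction of known shock times with at least one trigger within
    +/- window seconds (a simple, assumed capture-rate metric)."""
    captured = sum(
        any(abs(t - s) <= window for t in trigger_times) for s in shock_times
    )
    return captured / len(shock_times)

shocks = [100.0, 500.0, 900.0, 1300.0]   # hypothetical shock catalogue (s)
triggers = [110.0, 905.0]                # hypothetical burst triggers (s)
print(capture_rate(shocks, triggers))    # 2 of 4 shocks matched -> 0.5
```

Evaluating the same metric per criterion (field jump, density jump, etc.) is what lets one rank the trigger criteria and diagnose which shocks were missed.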

  14. Reliability of high power electron accelerators for radiation processing

    Energy Technology Data Exchange (ETDEWEB)

    Zimek, Z. [Department of Radiation Chemistry and Technology, Institute of Nuclear Chemistry and Technology, Warsaw (Poland)

    2011-07-01

    Accelerators applied for radiation processing are installed in industrial facilities, where the accelerator availability coefficient should be at the level of 95% to fulfill industry standards. Usually the exploitation of an electron accelerator reveals a number of short and a few long-lasting failures. Some technical shortages can be overcome by practical implementation of the experience gained in accelerator technology development by different accelerator manufacturers. The reliability/availability of high-power accelerators for application in the flue gas treatment process must be dramatically improved to meet industrial standards. Support of accelerator technology dedicated to environmental protection should be provided by governmental and international institutions to overcome the accelerator reliability/availability problem and the high risk and low direct profit in this particular application. (author)
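The 95% availability coefficient mentioned above is the steady-state uptime fraction MTBF/(MTBF + MTTR): to meet the target, the mean repair time must stay below roughly 1/19 of the mean time between failures. A quick sketch (the hour figures are illustrative, not from the paper):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a repairable unit:
    uptime fraction = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# e.g. failures every 380 h on average, repaired in 20 h on average:
print(availability(380.0, 20.0))  # prints 0.95
```

The formula makes the two improvement levers explicit: fewer failures (larger MTBF) or faster repairs (smaller MTTR) both raise the coefficient toward the industrial target.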

  15. Reliability of high power electron accelerators for radiation processing

    International Nuclear Information System (INIS)

    Zimek, Z.

    2011-01-01

    Accelerators applied for radiation processing are installed in industrial facilities, where the accelerator availability coefficient should be at the level of 95% to fulfill industry standards. Usually the exploitation of an electron accelerator reveals a number of short and a few long-lasting failures. Some technical shortages can be overcome by practical implementation of the experience gained in accelerator technology development by different accelerator manufacturers. The reliability/availability of high-power accelerators for application in the flue gas treatment process must be dramatically improved to meet industrial standards. Support of accelerator technology dedicated to environmental protection should be provided by governmental and international institutions to overcome the accelerator reliability/availability problem and the high risk and low direct profit in this particular application. (author)

  16. The Run-2 ATLAS Trigger System

    CERN Document Server

    Ruiz-Martinez, Aranzazu; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 (L1) trigger and a software-based high-level trigger (HLT) that reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV, resulting in roughly five times higher trigger rates. We will briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving the efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger systems, the introduction of a new L1 topological trigger module, and the merging of the previously two-level HLT system into a single event filter farm. Using a few examples, we will show the ...

  17. Failure mechanism dependence and reliability evaluation of non-repairable system

    International Nuclear Information System (INIS)

    Chen, Ying; Yang, Liu; Ye, Cui; Kang, Rui

    2015-01-01

    Reliability study of electronic systems with the physics-of-failure method has been promoted by increased knowledge of electronic failure mechanisms. System failure initiates from independent failure mechanisms, which affect or are affected by other failure mechanisms and finally result in system failure. Failure mechanisms in a non-repairable system exhibit many kinds of correlation: one failure mechanism, having developed to a certain degree, will trigger, accelerate or inhibit one or many other failure mechanisms, and several failure mechanisms may have the same effect on a failure site, component or system. The destructive effects accumulate and result in early failure. This paper presents a reliability evaluation method considering the correlations among failure mechanisms, which include triggering, acceleration, inhibition, accumulation, and competition. Based on the fundamental rules of physics of failure, decoupling methods for these correlations are discussed. In a case study, the reliability of an electronic system is evaluated considering failure mechanism dependence. - Highlights: • Five types of failure mechanism correlations are described. • Decoupling methods for these correlations are discussed. • A reliability evaluation method considering mechanism dependence is proposed. • Results differ considerably from those under the failure independence assumption
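The trigger/acceleration coupling can be illustrated with a small Monte Carlo. This is a toy model with assumed exponential rates, not the paper's evaluation method: degradation mechanism A does not fail the system itself, but once it develops it doubles the rate of failure mechanism B.

```python
import random

def mean_life(n_trials=20000, seed=1):
    """Toy Monte Carlo of a trigger/acceleration coupling: mechanism A
    develops at t_a and, from then on, doubles the rate of failure
    mechanism B, halving B's remaining life."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_trials):
        t_a = random.expovariate(1 / 500.0)    # A develops at t_a (hours)
        t_b = random.expovariate(1 / 2000.0)   # B's life without acceleration
        if t_b > t_a:                          # B outlives A's onset:
            t_b = t_a + (t_b - t_a) / 2.0      # remaining life is halved
        total += t_b
    return total / n_trials

print(round(mean_life()))
```

With these assumed rates the analytic mean life is 1200 h, versus 2000 h if the mechanisms were treated as independent, which is exactly the kind of gap the highlights point to between dependent and independence-assumption results.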

  18. Development of a level-1 trigger and timing system for the Double Chooz neutrino experiment

    International Nuclear Information System (INIS)

    Reinhold, Bernd

    2009-01-01

    The measurement of the mixing angle θ13 is the goal of several running and planned experiments. The experiments are either accelerator-based (super)beam experiments (e.g. MINOS, T2K, NOvA) or reactor anti-neutrino disappearance experiments (e.g. Daya Bay, RENO or Double Chooz). In order to measure or constrain θ13 with the Double Chooz experiment, the overall systematic errors have to be controlled at the one-percent or sub-percent level. The limitation of the systematic errors is achieved through various means and techniques. For example, the experiment consists of two identical detectors at different baselines, which allows a differential anti-neutrino flux measurement in which essentially only relative normalisation errors remain. The requirements on the systematic errors also put strong constraints on the quality of all components and materials used for both detectors, most prominently on the stability and radiopurity of the scintillator, the photomultiplier tubes, the vessels containing the detector liquids, and the shielding against ambient radioactivity. The readout electronics, trigger and data acquisition system have to operate reliably as an integrated and highly efficient whole over several years. The trigger is provided by the Level-1 Trigger and Timing System, which is the subject of this thesis. It has to provide a highly efficient trigger (at the 0.1% level) for neutrino-induced events as well as for several types of background events. Its decision is realized in hardware and based on energy depositions in the muon veto and the target region. The Level-1 Trigger and Timing System furthermore provides a common System Clock and an absolute timestamp for each event. It consists of two types of VME modules, several Trigger Boards and a Trigger Master Board, which have been custom-designed and developed in the electronics workshop of our institute for this experiment and purpose, starting in 2005. In this thesis all

  19. Development of a level-1 trigger and timing system for the Double Chooz neutrino experiment

    Energy Technology Data Exchange (ETDEWEB)

    Reinhold, Bernd

    2009-02-25

    The measurement of the mixing angle θ13 is the goal of several running and planned experiments. The experiments are either accelerator-based (super)beam experiments (e.g. MINOS, T2K, NOvA) or reactor anti-neutrino disappearance experiments (e.g. Daya Bay, RENO or Double Chooz). In order to measure or constrain θ13 with the Double Chooz experiment, the overall systematic errors have to be controlled at the one-percent or sub-percent level. The limitation of the systematic errors is achieved through various means and techniques. For example, the experiment consists of two identical detectors at different baselines, which allows a differential anti-neutrino flux measurement in which essentially only relative normalisation errors remain. The requirements on the systematic errors also put strong constraints on the quality of all components and materials used for both detectors, most prominently on the stability and radiopurity of the scintillator, the photomultiplier tubes, the vessels containing the detector liquids, and the shielding against ambient radioactivity. The readout electronics, trigger and data acquisition system have to operate reliably as an integrated and highly efficient whole over several years. The trigger is provided by the Level-1 Trigger and Timing System, which is the subject of this thesis. It has to provide a highly efficient trigger (at the 0.1% level) for neutrino-induced events as well as for several types of background events. Its decision is realized in hardware and based on energy depositions in the muon veto and the target region. The Level-1 Trigger and Timing System furthermore provides a common System Clock and an absolute timestamp for each event. It consists of two types of VME modules, several Trigger Boards and a Trigger Master Board, which have been custom-designed and developed in the electronics workshop of our institute for this experiment and purpose, starting in 2005. In

  20. High-reliability 4π-scan leakage X-ray dosimeter

    Energy Technology Data Exchange (ETDEWEB)

    Kaneko, T; Iida, H; Yoshida, T; Sugimoto, H [Tokyo Shibaura Electric Co. Ltd., Kawasaki, Kanagawa (Japan). Tamagawa Works

    1978-04-01

    A worldwide movement is growing for the protection of living bodies against leakage radiation. In Japan, detailed regulations have been established for the enforcement of the law in regard to this problem. The items of measurement provided in the regulations are extremely diversified, greatly affecting the reliability and the economic efficiency of the equipment. A new 4π-scan X-ray dosimeter with high reliability has now been developed and has been shown to improve the quality of measurement as well as productivity.

  1. Modeling high-Power Accelerators Reliability-SNS LINAC (SNS-ORNL); MAX LINAC (MYRRHA)

    International Nuclear Information System (INIS)

    Pitigoi, A. E.; Fernandez Ramos, P.

    2013-01-01

    Improving reliability has recently become a very important objective in the field of particle accelerators. The particle accelerators in operation are constantly undergoing modifications, and improvements are implemented using new technologies, more reliable components or redundant schemes (to obtain more reliability, strength, more power, etc.). A reliability model of the SNS (Spallation Neutron Source) LINAC has been developed and an analysis of the accelerator systems' reliability has been performed within the MAX project, using the Risk Spectrum reliability analysis software. The analysis results have been evaluated by comparison with the SNS operational data. Results and conclusions are presented in this paper, oriented towards identifying design weaknesses and providing recommendations for improving the reliability of the MYRRHA linear accelerator. The SNS reliability model developed for the MAX preliminary design phase indicates possible avenues for further investigation that could be needed to improve the reliability of high-power accelerators, in view of the future reliability targets of ADS accelerators.
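The redundant schemes mentioned above trade component count for system reliability in the standard way: series blocks multiply reliabilities, while parallel (redundant) blocks multiply unreliabilities. A generic sketch (the 0.9 figure is illustrative, not a number from the SNS model):

```python
from math import prod

def series_reliability(rs):
    """Series block: every unit must work, so reliabilities multiply."""
    return prod(rs)

def parallel_reliability(rs):
    """Redundant block: the system fails only if all units fail."""
    return 1.0 - prod(1.0 - r for r in rs)

# Duplicating a hypothetical 0.90-reliable unit raises the block to 0.99,
# while putting two such units in series drops it to 0.81:
print(round(parallel_reliability([0.9, 0.9]), 4))  # 0.99
print(round(series_reliability([0.9, 0.9]), 4))    # 0.81
```

Reliability modeling tools such as the one named in the abstract evaluate large trees of exactly these two compositions, which is how redundancy candidates are ranked.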

  2. Breakover mechanism of GaAs photoconductive switch triggering spark gap for high power applications

    Science.gov (United States)

    Tian, Liqiang; Shi, Wei; Feng, Qingqing

    2011-11-01

    A spark gap (SG) triggered by a semi-insulating GaAs photoconductive semiconductor switch (PCSS) is presented. Currents as high as 5.6 kA have been generated using the combined switch, which is excited by a laser pulse with energy of 1.8 mJ and under a bias of 4 kV. Based on the transferred-electron effect and gas streamer theory, the breakover characteristics of the combined switch are analyzed. The photoexcited carrier density in the PCSS is calculated. The calculation and analysis indicate that the PCSS breakover is caused by nucleation of the photoactivated avalanching charge domain. It is shown that the high output current is generated by the discharge of a high-energy gas streamer induced by the strong local electric field distortion or by overvoltage of the SG resulting from quenching of the avalanching domain, and periodic oscillation of the current is caused by interaction between the gas streamer and the charge domain. The cycle of the current oscillation is determined by the rise time of the triggering electric pulse generated by the PCSS, the pulse transmission time between the PCSS and the SG, and the streamer transit time in the SG.

  3. Development status of triggered vacuum switches at All-Russian Electrotechnical Institute and prospects of its applications

    International Nuclear Information System (INIS)

    Alfverov, D.F.; Vozdvienskij, V.A.; Sidorov, V.A.

    1996-01-01

    The sealed-off triggered vacuum switches (TVS) find application in an ever broader class of high-voltage, high-power, high-repetition-rate energy storage systems. They can operate over a broad range of voltages (1-100 kV) and currents (1 A - 200 kA). A further increase of the limiting switching current up to 500 kA and of the transferred charge beyond 100 As seems feasible. TVS are popular for their compactness and reliability. The main parameters and possible applications of a number of TVS types developed at the All-Russian Electrotechnical Institute in Moscow are described in the paper. (J.U.). 1 tab., 13 refs

  4. Development status of triggered vacuum switches at All-Russian Electrotechnical Institute and prospects of its applications

    Energy Technology Data Exchange (ETDEWEB)

    Alfverov, D F; Vozdvienskij, V A; Sidorov, V A [All-Russian Electrotechnical Institute, Moscow (Russian Federation)

    1997-12-31

    The sealed-off triggered vacuum switches (TVS) find application in an ever broader class of high-voltage, high-power, high-repetition-rate energy storage systems. They can operate over a broad range of voltages (1-100 kV) and currents (1 A - 200 kA). A further increase of the limiting switching current up to 500 kA and of the transferred charge beyond 100 As seems feasible. TVS are popular for their compactness and reliability. The main parameters and possible applications of a number of TVS types developed at the All-Russian Electrotechnical Institute in Moscow are described in the paper. (J.U.). 1 tab., 13 refs.

  5. Validity and Reliability of the Academic Resilience Scale in Turkish High School

    Science.gov (United States)

    Kapikiran, Sahin

    2012-01-01

    The present study aims to determine the validity and reliability of the academic resilience scale in Turkish high schools. The participants comprise 378 high school students in total (192 female and 186 male). A set of analyses was conducted in order to determine the validity and reliability of the scale. Firstly, both exploratory…

  6. Software trigger for the TOPAZ detector at TRISTAN

    International Nuclear Information System (INIS)

    Tsukamoto, T.; Yamauchi, M.; Enomoto, R.

    1990-01-01

    A new software trigger system was developed and installed in the trigger system of the TOPAZ detector at the TRISTAN e+e- collider to take data efficiently in the scheduled high-luminosity experiment. This software trigger requires two or more charged tracks originating at the interaction point, identified by examining the timing of signals from the time projection chamber. To execute the vertex finding very quickly, four microprocessors are used in parallel. With this new trigger the rate of the track trigger was reduced to 30-40% with very small inefficiency. The additional dead time introduced by this trigger is negligible. (orig.)
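The two-track vertex requirement reduces to a simple cut on the number of tracks whose extrapolated origin is consistent with the interaction point. A minimal sketch; the function name, z window and units are assumptions for illustration, not TOPAZ parameters:

```python
def software_trigger(track_z0_cm, z_window=5.0, min_tracks=2):
    """Accept the event if at least min_tracks charged tracks extrapolate
    to within z_window (cm) of the interaction point along the beam."""
    n_vertex_tracks = sum(1 for z0 in track_z0_cm if abs(z0) <= z_window)
    return n_vertex_tracks >= min_tracks

print(software_trigger([1.2, -3.0, 40.0]))  # True: two tracks near the IP
print(software_trigger([40.0, -2.0]))       # False: only one track near the IP
```

In the real system the per-track z origin comes from TPC signal timing, and the counting is parallelized across the four microprocessors; the sketch shows only the final decision logic.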

  7. Trigger Algorithms and Electronics for the ATLAS Muon NSW Upgrade

    CERN Document Server

    Guan, Liang; The ATLAS collaboration

    2015-01-01

    The ATLAS New Small Wheel (NSW), comprising MicroMegas (MM) and small-strip Thin Gap Chambers (sTGC), will upgrade the ATLAS muon system for a high-background environment. In particular, the NSW trigger will reduce the rate of fake triggers coming from background tracks in the endcap. We will present an overview of the FPGA-based trigger processor for the NSW and the trigger algorithms for the sTGC and MicroMegas detector subsystems. In addition, we will present the development of the NSW trigger electronics, in particular the sTGC Trigger Data Serializer (TDS) ASIC, the sTGC Pad Trigger board, the sTGC data packet router and the L1 Data Driver Card. Finally, we will detail the challenges of meeting the low-latency requirements of the trigger system and coping with the high background rates of the HL-LHC.

  8. Pulling the trigger on LHC electronics

    CERN Document Server

    CERN. Geneva

    2001-01-01

    The conditions at CERN's Large Hadron Collider pose severe challenges for the designers and builders of front-end, trigger and data acquisition electronics. A recent workshop reviewed the encouraging progress so far and discussed what remains to be done. The LHC experiments have addressed level-one trigger systems with a variety of high-speed hardware. The CMS Calorimeter Level One Regional Trigger uses 160 MHz logic boards plugged into the front and back of a custom backplane, which provides point-to-point links between the cards. Much of the processing in this system is performed by five types of 160 MHz digital application-specific integrated circuits designed using Vitesse submicron high-integration gallium arsenide gate array technology. The LHC experiments make extensive use of field programmable gate arrays (FPGAs). These offer programmable reconfigurable logic, which has the flexibility that trigger designers need to be able to alter algorithms so that they can follow the physics and detector perform...

  9. Survey of industry methods for producing highly reliable software

    International Nuclear Information System (INIS)

    Lawrence, J.D.; Persons, W.L.

    1994-11-01

    The Nuclear Reactor Regulation Office of the US Nuclear Regulatory Commission is charged with assessing the safety of new instrument and control designs for nuclear power plants which may use computer-based reactor protection systems. Lawrence Livermore National Laboratory has evaluated the latest techniques in software reliability for measurement, estimation, error detection, and prediction that can be used during the software life cycle as a means of risk assessment for reactor protection systems. One aspect of this task has been a survey of the software industry to collect information to help identify the design factors used to improve the reliability and safety of software. The intent was to discover what practices really work in industry and what design factors are used by industry to achieve highly reliable software. The results of the survey are documented in this report. Three companies participated in the survey: Computer Sciences Corporation, International Business Machines (Federal Systems Company), and TRW. Discussions were also held with NASA Software Engineering Lab/University of Maryland/CSC, and the AIAA Software Reliability Project

  10. Test-Retest Reliability of an Experienced Global Trigger Tool Review Team

    DEFF Research Database (Denmark)

    Bjørn, Brian; Anhøj, Jacob; Østergaard, Mette

    2018-01-01

    …and review 2, and between period 1 and period 2. The increase was solely in category E, minor temporary harm. CONCLUSIONS: The very experienced GTT team could not reproduce harm rates found in earlier reviews. We conclude that the GTT in its present form is not a reliable measure of harm rate over time.

  11. The second level trigger system of FAST

    CERN Document Server

    Martínez,G; Berdugo, J; Casaus, J; Casella, V; De Laere, D; Deiters, K; Dick, P; Kirkby, J; Malgeri, L; Mañá, C; Marín, J; Pohl, M; Petitjean, C; Sánchez, E; Willmott, C

    2009-01-01

    The Fibre Active Scintillator Target (FAST) experiment is a novel imaging particle detector currently operating in a high-intensity π+ beam at the Paul Scherrer Institute (PSI), Villigen, Switzerland. The detector is designed to perform a high precision measurement of the μ+ lifetime, in order to determine the Fermi constant, Gf, to 1 ppm precision. A dedicated second level (LV2) hardware trigger system has been developed for the experiment. It performs an online analysis of the π/μ decay chain by identifying the stopping position of each beam particle and detecting the subsequent appearance of the muon. The LV2 trigger then records the muon stop pixel and selectively triggers the Time-to-Digital Converters (TDCs) in the vicinity. A detailed description of the trigger system is presented in this paper.

  12. Engineering high reliability, low-jitter Marx generators

    International Nuclear Information System (INIS)

    Schneider, L.X.; Lockwood, G.J.

    1985-01-01

    Multimodule pulsed power accelerators typically require high module reliability and nanosecond regime simultaneity between modules. Energy storage using bipolar Marx generators can meet these requirements. Experience gained from computer simulations and the development of the DEMON II Marx generator has led to a fundamental understanding of the operation of these multistage devices. As a result of this research, significant improvements in erection time jitter and reliability have been realized in multistage, bipolar Marx generators. Erection time jitter has been measured as low as 2.5 nanoseconds for the 3.2MV, 16-stage PBFA I Marx and 3.5 nanoseconds for the 6.0MV, 30-stage PBFA II (DEMON II) Marx, while maintaining exceptionally low prefire rates. Performance data are presented from the DEMON II Marx research program, as well as discussions on the use of computer simulations in designing low-jitter Marx generators

  13. Instrument reliability for high-level nuclear-waste-repository applications

    International Nuclear Information System (INIS)

    Rogue, F.; Binnall, E.P.; Armantrout, G.A.

    1983-01-01

    Reliable instrumentation will be needed to evaluate the characteristics of proposed high-level nuclear-waste-repository sites and to monitor the performance of selected sites during the operational period and into repository closure. A study has been done to assess the reliability of instruments used in Department of Energy (DOE) waste-repository-related experiments and in other similar geological applications. The study included experiences with geotechnical, hydrological, geochemical, environmental, and radiological instrumentation and associated data acquisition equipment. Though this paper includes some findings on the reliability of instruments in each of these categories, the emphasis is on experiences with geotechnical instrumentation in hostile repository-type environments. We review the failure modes, rates, and mechanisms, along with manufacturers' modifications and design changes to enhance and improve instrument performance, and include recommendations on areas where further improvements are needed.

  14. Implementation of a level 1 trigger system using high speed serial (VXS) techniques for the 12GeV high luminosity experimental programs at Thomas Jefferson National Accelerator Facility

    International Nuclear Information System (INIS)

    Cuevas, C.; Raydo, B.; Dong, H.; Gupta, A.; Barbosa, F.J.; Wilson, J.; Taylor, W.M.; Jastrzembski, E.; Abbott, D.

    2009-01-01

    We will demonstrate a hardware and firmware solution for a complete, fully pipelined, multi-crate trigger system that takes advantage of the elegant high-speed VXS serial extensions for VME. This trigger system includes three sections, starting with the front-end crate trigger processor (CTP), a global Sub-System Processor (SSP) and a Trigger Supervisor that manages the timing, synchronization and front-end event readout. Within a front-end crate, trigger information is gathered from each 16-channel, 12-bit Flash ADC module at 4 ns intervals via the VXS backplane, to a Crate Trigger Processor (CTP). Each Crate Trigger Processor receives these 500 MB/s VXS links from the 16 FADC-250 modules, aligns the skewed data inherent to the Aurora protocol, and performs real-time crate-level trigger algorithms. The algorithm results are encoded using a Reed-Solomon technique, and this Level 1 trigger data is transmitted to the SSP over a multi-fiber link. The multi-fiber link achieves an aggregate trigger data transfer rate to the global trigger of 8 Gb/s. The SSP receives and decodes the Reed-Solomon error-correcting transmission from each crate, aligns the data, and performs the global-level trigger algorithms. The entire trigger system is synchronous and operates at 250 MHz, with the Trigger Supervisor managing not only the front-end event readout but also the distribution of the critical timing clocks, synchronization signals, and global trigger signals to each front-end readout crate. These signals are distributed to the front-end crates on a separate fiber link, and each crate is synchronized using a unique encoding scheme to guarantee that each front-end crate is synchronous with a fixed latency, independent of the distance between crates. The overall trigger signal latency is <3 µs, and the proposed 12 GeV experiments at Jefferson Lab require up to a 200 kHz Level 1 trigger rate.
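    The Reed-Solomon coding on the fiber links lets the receiving SSP correct transmission errors rather than merely detect them. As a hedged illustration of that encode/corrupt/correct/decode idea, the sketch below uses a much simpler Hamming(7,4) code, not the Reed-Solomon scheme actually deployed at Jefferson Lab:

```python
# Illustrative only: Hamming(7,4) single-error-correcting code.
# The JLab trigger links use Reed-Solomon coding; this simpler code
# just demonstrates how a receiver can repair a corrupted trigger word.

def encode(nibble):
    """Encode 4 data bits [d1,d2,d3,d4] into 7 bits with 3 parity bits."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(word):
    """Correct up to one flipped bit, then return the 4 data bits."""
    w = list(word)
    # Syndrome bits locate the error position (1-based); 0 means no error.
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4
    if pos:
        w[pos - 1] ^= 1  # flip the corrupted bit back
    return [w[2], w[4], w[5], w[6]]

data = [1, 0, 1, 1]
sent = encode(data)
corrupted = list(sent)
corrupted[3] ^= 1          # simulate one bit flip on the link
assert decode(corrupted) == data
```

    A real Reed-Solomon code works on multi-bit symbols and corrects burst errors across a whole frame, which suits serial links far better; the single-bit-correcting code above only conveys the principle.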

  15. The role of high cycle fatigue (HCF) onset in Francis runner reliability

    International Nuclear Information System (INIS)

    Gagnon, M; Tahan, S A; Bocher, P; Thibault, D

    2012-01-01

    High Cycle Fatigue (HCF) plays an important role in Francis runner reliability. This paper presents a model in which reliability is defined as the probability of not exceeding a threshold above which HCF contributes to crack propagation. In the context of combined Low Cycle Fatigue (LCF) and HCF loading, the Kitagawa diagram is used as the limit-state threshold for reliability. The reliability problem is solved using First-Order Reliability Methods (FORM). A case study is presented using in situ measured strains and operational data. All the parameters of the reliability problem are based either on observed data or on typical design specifications. From the results obtained, we observed that the uncertainty around the defect size and the HCF stress range plays an important role in reliability. At the same time, we observed that the expected values for the LCF stress range and the number of LCF cycles have a significant influence on life assessment, but the uncertainty around these values could be neglected in the reliability assessment.
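    The limit-state idea in this abstract can be sketched with a crude Monte Carlo estimate instead of FORM: draw the uncertain HCF stress range and defect size, evaluate a stress intensity range ΔK = Y·Δσ·√(πa), and count how often it stays below an assumed propagation threshold. All parameter values below are illustrative assumptions, not figures from the study:

```python
import math
import random

# Crude Monte Carlo sketch of the limit-state idea (the paper uses FORM).
# Every numeric value below is an illustrative assumption.
random.seed(1)

def delta_k(stress_range_mpa, defect_m, y=1.12):
    """Stress intensity range for a surface defect, MPa*sqrt(m)."""
    return y * stress_range_mpa * math.sqrt(math.pi * defect_m)

DK_TH = 2.0          # assumed HCF propagation threshold, MPa*sqrt(m)
N = 100_000
failures = 0
for _ in range(N):
    sigma = random.gauss(40.0, 8.0)                   # HCF stress range, MPa
    a = random.lognormvariate(math.log(0.3e-3), 0.4)  # defect size, m
    if delta_k(sigma, a) > DK_TH:
        failures += 1

reliability = 1.0 - failures / N
print(f"estimated reliability = {reliability:.3f}")
```

    FORM replaces this brute-force sampling with a search for the most probable failure point in standard normal space, which is why it is preferred when each limit-state evaluation is expensive.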

  16. Electronics and triggering challenges for the CMS High Granularity Calorimeter

    Science.gov (United States)

    Lobanov, A.

    2018-02-01

    The High Granularity Calorimeter (HGCAL), presently being designed by the CMS collaboration to replace the CMS endcap calorimeters for the High Luminosity phase of LHC, will feature six million channels distributed over 52 longitudinal layers. The requirements for the front-end electronics are extremely challenging, including high dynamic range (0.2 fC-10 pC), low noise (~2000 e- to be able to calibrate on single minimum ionising particles throughout the detector lifetime) and low power consumption (~20 mW/channel), as well as the need to select and transmit trigger information with a high granularity. Exploiting the intrinsic precision-timing capabilities of silicon sensors also requires careful design of the front-end electronics as well as the whole system, particularly clock distribution. The harsh radiation environment and requirement to keep the whole detector as dense as possible will require novel solutions to the on-detector electronics layout. Processing the data from the HGCAL imposes equally large challenges on the off-detector electronics, both for the hardware and incorporated algorithms. We present an overview of the complete electronics architecture, as well as the performance of prototype components and algorithms.

  17. The second level trigger system of FAST

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, G. [CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: gustavo.martinez@ciemat.es; Barcyzk, A. [CERN, CH-1211 Geneva 23 (Switzerland); Berdugo, J.; Casaus, J. [CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain); Casella, C.; De Laere, S. [Universite de Geneve, 30 quai Ernest-Anserment, CH-1211 Geneva 4 (Switzerland); Deiters, K.; Dick, P. [Paul Scherrer Institut, 5232 Villigen PSI (Switzerland); Kirkby, J.; Malgeri, L. [CERN, CH-1211 Geneva 23 (Switzerland); Mana, C.; Marin, J. [CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain); Pohl, M. [Universite de Geneve, 30 quai Ernest-Anserment, CH-1211 Geneva 4 (Switzerland); Petitjean, C. [Paul Scherrer Institut, 5232 Villigen PSI (Switzerland); Sanchez, E.; Willmott, C. [CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)

    2009-10-11

    The Fibre Active Scintillator Target (FAST) experiment is a novel imaging particle detector currently operating in a high-intensity π+ beam at the Paul Scherrer Institute (PSI), Villigen, Switzerland. The detector is designed to perform a high precision measurement of the μ+ lifetime, in order to determine the Fermi constant, Gf, to 1 ppm precision. A dedicated second level (LV2) hardware trigger system has been developed for the experiment. It performs an online analysis of the π/μ decay chain by identifying the stopping position of each beam particle and detecting the subsequent appearance of the muon. The LV2 trigger then records the muon stop pixel and selectively triggers the Time-to-Digital Converters (TDCs) in the vicinity. A detailed description of the trigger system is presented in this paper.

  18. Mark-II Data Acquisition and Trigger system

    International Nuclear Information System (INIS)

    Breidenbach, M.

    1984-06-01

    The Mark-II Data Acquisition and Trigger system requirements and general solution are described. The solution takes advantage of the synchronous crossing times and low event rates of an electron positron collider to permit a very highly multiplexed analog scheme to be effective. The system depends on a two level trigger to operate with acceptable dead time. The trigger, multiplexing, data reduction, calibration, and CAMAC systems are described

  19. Efficiency criteria for high reliability measured system structures

    International Nuclear Information System (INIS)

    Sal'nikov, N.L.

    2012-01-01

    Structural redundancy procedures are usually used to develop high-reliability measurement systems. To estimate the efficiency of such structures, criteria for comparing different systems have been developed. Using these criteria, a more accurate system can be developed by inspecting the stochastic characteristics of the redundant system's data units.

  20. Designing reliability into high-effectiveness industrial gas turbine regenerators

    International Nuclear Information System (INIS)

    Valentino, S.J.

    1979-01-01

    The paper addresses the measures necessary to achieve a reliable regenerator design that can withstand higher temperatures (1000-1200°F) and many start and stop cycles, conditions encountered in high-efficiency operation in pipeline applications. The discussion is limited to three major areas: (1) structural analysis of the heat exchanger core, the part of the regenerator that must withstand the higher temperatures and cyclic duty; (2) materials data and material selection; and (3) a comprehensive test program to demonstrate the reliability of the regenerator. This program includes life-cycle tests, pressure containment in fin panels, a core-to-core joint structural test, a bellows pressure containment test, a sliding pad test, a core gas-side passage flow distribution test, and production tests. Today's regenerators must have high cyclic life capability, stainless steel construction, and a long fault-free service life of 120,000 hr.

  1. Hierarchical trigger of the ALICE calorimeters

    CERN Document Server

    Muller, Hans; Novitzky, Norbert; Kral, Jiri; Rak, Jan; Schambach, Joachim; Wang, Ya-Ping; Wang, Dong; Zhou, Daicui

    2010-01-01

    The trigger of the ALICE electromagnetic calorimeters is implemented in two hierarchically connected layers of electronics. In the lower layer, level-0 algorithms search for shower energy above threshold in locally confined Trigger Region Units (TRUs). The top layer is implemented as a single, global trigger unit that receives the trigger data from all TRUs as input to the level-1 algorithm. This architecture was first developed for the PHOS high-pT photon trigger before it was also adopted by EMCal for the jet trigger. TRU units digitize up to 112 analogue input signals from the Front End Electronics (FEE) and concentrate their digital stream in a single FPGA. A charge and time summing algorithm is combined with a peak finder that suppresses spurious noise and is precise to single LHC bunches. With a peak-to-peak noise level of 150 MeV, the linear dynamic range above threshold spans from MIP energies at 215 up to 50 GeV. Local level-0 decisions take less than 600 ns after LHC collisions, upon which all TRUs transfer ...
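    The charge-summing-plus-peak-finder scheme can be sketched as follows: sum ADC samples in a sliding window, then accept only above-threshold local maxima, which rejects flat noise and tags the bunch crossing of the peak. The window length and threshold are made-up values, not the TRU configuration:

```python
# Illustrative sketch of the TRU idea: sliding-window charge sum plus a
# peak finder. Window length and threshold are invented for the example.

def sliding_sums(samples, window=4):
    """Windowed charge sums over a stream of ADC samples."""
    return [sum(samples[i:i + window]) for i in range(len(samples) - window + 1)]

def find_peaks(sums, threshold):
    """Indices where the windowed sum is an above-threshold local maximum."""
    peaks = []
    for i in range(1, len(sums) - 1):
        if sums[i] > threshold and sums[i - 1] < sums[i] >= sums[i + 1]:
            peaks.append(i)
    return peaks

adc = [1, 2, 1, 8, 20, 35, 22, 9, 2, 1, 1, 2]   # one shower-like pulse
sums = sliding_sums(adc)
print(find_peaks(sums, threshold=30))  # → [4]
```

    Requiring a strict local maximum rather than a bare threshold crossing is what keeps one physical pulse from firing the trigger on several consecutive bunch crossings.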

  2. Level-1 Calorimeter Trigger starts firing

    CERN Multimedia

    Stephen Hillier

    2007-01-01

    L1Calo is one of the major components of ATLAS First Level trigger, along with the Muon Trigger and Central Trigger Processor. It forms all of the first-level calorimeter-based triggers, including electron, jet, tau and missing ET. The final system consists of over 250 custom designed 9U VME boards, most containing a dense array of FPGAs or ASICs. It is subdivided into a PreProcessor, which digitises the incoming trigger signals from the Liquid Argon and Tile calorimeters, and two separate processor systems, which perform the physics algorithms. All of these are highly flexible, allowing the possibility to adapt to beam conditions and luminosity. All parts of the system are read out through Read-Out Drivers, which provide monitoring data and Region of Interest (RoI) information for the Level-2 trigger. Production of the modules is now essentially complete, and enough modules exist to populate the full scale system in USA15. Installation is proceeding rapidly - approximately 90% of the final modules are insta...

  3. Triggering, front-end electronics, and data acquisition for high-rate beauty experiments

    International Nuclear Information System (INIS)

    Johnson, M.; Lankford, A.J.

    1988-04-01

    The working group explored the feasibility of building a trigger and an electronics data acquisition system for both collider and fixed target experiments. There appears to be no fundamental technical limitation arising from either the rate or the amount of data for a collider experiment. The fixed target experiments will likely require a much higher rate because of the smaller cross section. Rates up to one event per RF bucket (50 MHz) appear to be feasible. Higher rates depend on the details of the particular experiment and trigger. Several ideas were presented on multiplicity jump and impact parameter triggers for fixed target experiments. 14 refs., 3 figs

  4. The Run-2 ATLAS Trigger System

    International Nuclear Information System (INIS)

    Martínez, A Ruiz

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at different centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 trigger and a software-based high level trigger (HLT) that reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in up to five times higher rates of processes of interest. A brief review will be given of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, which make it possible to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event processing farm. A few examples will be shown, such as the impressive performance improvements in the HLT trigger algorithms used to identify leptons, hadrons and global event quantities like missing transverse energy. Finally, the status of the commissioning of the trigger system and its performance during the 2015 run will be presented. (paper)

  5. Trigger and decision processors

    International Nuclear Information System (INIS)

    Franke, G.

    1980-11-01

    In recent years there have been many attempts in high energy physics to make trigger and decision processes faster and more sophisticated. This became necessary due to the steady increase in the number of sensitive detector elements in wire chambers and calorimeters, and it was made possible by the rapid developments in integrated circuit technology. In this paper the present situation will be reviewed. The discussion will be mainly focussed upon event filtering by pure software methods and, on the more hardware-related side, microprogrammable processors as well as random access memory triggers. (orig.)

  6. MRI of ventilated neonates and infants: respiratory pressure as trigger signal

    International Nuclear Information System (INIS)

    Lotz, J.; Reiffen, H.P.

    2004-01-01

    Introduction: Motivated by the difficulties often encountered in setting up respiratory triggering for MR imaging of mechanically ventilated pediatric patients, a simpler and more reliable approach was sought. Method: With the help of a male-to-male Luer-Lock adapter in combination with a 3-way adapter, the tube of the respiratory compensation bellows was fixed to the capnography output channel of the airway filter. Ten patients (age 4 months to 6 years) were examined with spin echo imaging and either respiratory compensation (T1-weighted imaging) or respiratory triggering (T2-weighted imaging). Results: A clear trigger signal was achieved in all cases. No negative influence on the quality or safety of the mechanical ventilation of the patients was observed. Summary: The proposed adapter is safe, efficient and fast to install in patients undergoing MR imaging under general anaesthesia. (orig.)

  7. ATLAS FTK: Fast Track Trigger

    CERN Document Server

    Volpi, Guido; The ATLAS collaboration

    2015-01-01

    An overview of the ATLAS Fast TracKer processor is presented, reporting the design of the system, its expected performance, and the integration status. The next LHC runs, with a significant increase in instantaneous luminosity, will pose a major challenge to the trigger and data acquisition systems of all the experiments. An intensive use of tracking information at the trigger level will be important to keep high efficiency for interesting events, despite the increase in multiple p-p collisions per bunch crossing (pile-up). In order to increase the use of tracks within the High Level Trigger (HLT), the ATLAS experiment planned the installation of a hardware processor dedicated to tracking: the Fast TracKer (FTK) processor. The FTK is designed to perform full-scan track reconstruction at every Level-1 accept. To achieve this goal, the FTK uses a fully parallel architecture, with algorithms designed to exploit the computing power of custom VLSI chips, the Associative Memory, as well as modern FPGAs. The FT...

  8. Triggering on electrons and photons with CMS

    Directory of Open Access Journals (Sweden)

    Zabi Alexandre

    2012-06-01

    Throughout the year 2011, the Large Hadron Collider (LHC) has operated with an instantaneous luminosity that has risen continually to around 4 × 10^33 cm^-2 s^-1. With this prodigious high-energy proton collision rate, efficient triggering on electrons and photons has become a major challenge for the LHC experiments. The Compact Muon Solenoid (CMS) experiment implements a sophisticated two-level online selection system that achieves a rejection factor of nearly 10^6. The first level (L1) is based on coarse information coming from the calorimeters and the muon detectors, while the High-Level Trigger (HLT) combines fine-grain information from all sub-detectors. In this intense hadronic environment, the L1 electron/photon trigger provides a powerful tool to select interesting events. It is based upon information from the Electromagnetic Calorimeter (ECAL), a high-resolution detector comprising 75848 lead tungstate (PbWO4) crystals in a “barrel” and two “endcaps”. The performance as well as the optimization of the electron/photon trigger are presented.

  9. Study of a Level-3 Tau Trigger with the Pixel Detector

    CERN Document Server

    Kotlinski, Danek; Nikitenko, Alexander

    2001-01-01

    We present a Monte Carlo study of the performance of a Level-3 Tau trigger based on Pixel Detector data. The trigger is designed to select Higgs bosons decaying into two tau leptons with tau jet(s) in the final state. The proposed trigger is particularly useful as it operates at an early stage of the CMS High Level Trigger system. The performance of the trigger is studied for the most difficult case, the high-luminosity LHC scenario.

  10. A self seeded first level track trigger for ATLAS

    International Nuclear Information System (INIS)

    Schöning, A

    2012-01-01

    For the planned high-luminosity upgrade of the Large Hadron Collider, aiming to increase the instantaneous luminosity to 5 × 10^34 cm^-2 s^-1, the implementation of a first level track trigger has been proposed. This trigger could be installed in the year ∼2021 along with the complete renewal of the ATLAS inner detector. The fast readout of the hit information from the Inner Detector is considered the main challenge of such a track trigger. Different concepts for the implementation of a first level trigger are currently being studied within the ATLAS collaboration. The so-called 'Self Seeded' track trigger concept exploits fast front-end filtering algorithms based on cluster size reconstruction and fast vector tracking to select hits associated with high momentum tracks. Simulation studies have been performed and results on efficiencies, purities and trigger rates are presented for different layouts.
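    The cluster-size filtering idea rests on geometry: strongly bent low-momentum tracks cross a silicon layer at shallow angles and leave wide clusters, while stiff high-pT tracks cross near-normally and leave narrow ones. A hedged sketch of such a front-end width cut, with a hypothetical hit representation and cut value (not the ATLAS parameters):

```python
# Illustrative front-end filter: keep only clusters narrow enough to be
# compatible with a high-momentum (stiff) track. The width cut and the
# dictionary hit format are invented for this example.

MAX_WIDTH = 3  # strips per cluster; assumed cut, not an ATLAS value

def select_high_pt(clusters):
    """Keep clusters whose width is compatible with a stiff track."""
    return [c for c in clusters if c["width"] <= MAX_WIDTH]

hits = [
    {"strip": 120, "width": 2},   # narrow: stiff-track candidate
    {"strip": 305, "width": 7},   # wide: low-pT track, filtered out
    {"strip": 512, "width": 3},   # narrow: stiff-track candidate
]
print(select_high_pt(hits))
```

    The point of applying the cut in the front-end chips is bandwidth: only the surviving hits need to be shipped off-detector within the level-1 latency budget.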

  11. RPC based 5D tracking concept for high multiplicity tracking trigger

    CERN Document Server

    Aielli, G; Cardarelli, R; Di Ciaccio, A; Distante, L; Liberti, B; Paolozzi, L; Pastori, E; Santonico, R

    2018-01-01

    The recently approved High Luminosity LHC project (HL-LHC) and the future collider proposals present a challenging experimental scenario, dominated by high pileup, radiation background and a bunch crossing time possibly shorter than 5 ns. This holds as well for muon systems, where RPCs can play a fundamental role in the design of future experiments. RPCs, thanks to their high space-time granularity, allow a sparse representation of the particle hits in a very large parametric space containing, in addition to 3D spatial localization, also the pulse time and width associated with the avalanche charge. This 5D representation of the hits can be exploited to improve the performance of complex detectors such as muon systems and increase the discovery potential of a future experiment, by allowing better track pileup rejection and sharper momentum resolution, an effective measurement of the particle velocity, the tagging and triggering of non-ultrarelativistic particles, and the detection of local multiple track ...

  12. Systems reliability in high risk situations

    International Nuclear Information System (INIS)

    Hunns, D.M.

    1974-12-01

    A summary is given of five papers and the discussion of a seminar promoted by the newly-formed National Centre of Systems Reliability. The topics covered include hazard analysis, reliability assessment, and risk assessment in both nuclear and non-nuclear industries. (U.K.)

  13. GPU-based real-time triggering in the NA62 experiment

    CERN Document Server

    Ammendola, R.; Cretaro, P.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P.S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-01-01

    Over the last few years the GPGPU (General-Purpose computing on Graphics Processing Units) paradigm has represented a remarkable development in the world of computing. Computing for High-Energy Physics is no exception: several works have demonstrated the effectiveness of integrating GPU-based systems in the high level triggers of different experiments. On the other hand, the use of GPUs in low level trigger systems, characterized by stringent real-time constraints such as a tight time budget and high throughput, poses several challenges. In this paper we focus on the low level trigger in the CERN NA62 experiment, investigating the use of real-time computing on GPUs in this synchronous system. Our approach aimed at harvesting the GPU computing power to build, in real time, refined physics-related trigger primitives for the RICH detector, as knowledge of the Cherenkov ring parameters allows stringent conditions for data selection to be built at trigger level. Latencies of all components of the trigger chain have...
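    One simple way to obtain Cherenkov ring parameters from RICH hit positions is a linear (Kasa-style) least-squares circle fit, which reduces to a 3×3 linear system and is cheap enough to run in a trigger. The sketch below is illustrative only and is not the algorithm actually run on the NA62 GPUs:

```python
import math

# Hedged sketch of ring reconstruction: fit x^2 + y^2 = A*x + B*y + C
# to hit positions (the Kasa circle fit), then read off centre and radius.

def fit_circle(points):
    """Return (cx, cy, radius) of the least-squares circle through points."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    z = [x * x + y * y for x, y in points]
    sz = sum(z)
    szx = sum(zi * x for zi, (x, _) in zip(z, points))
    szy = sum(zi * y for zi, (_, y) in zip(z, points))
    # Normal equations for unknowns (A, B, C), augmented with the RHS.
    m = [[sxx, sxy, sx, szx],
         [sxy, syy, sy, szy],
         [sx,  sy,  n,  sz]]
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    a_, b_, c_ = (m[i][3] / m[i][i] for i in range(3))
    cx, cy = a_ / 2.0, b_ / 2.0
    return cx, cy, math.sqrt(c_ + cx * cx + cy * cy)

# Hits on a ring of radius 11 centred at (3, -2).
pts = [(3 + 11 * math.cos(t), -2 + 11 * math.sin(t))
       for t in [0.1, 0.9, 1.7, 2.8, 4.0, 5.5]]
cx, cy, r = fit_circle(pts)
print(round(cx, 3), round(cy, 3), round(r, 3))  # ≈ 3.0 -2.0 11.0
```

    Because the fit is linear, its cost is dominated by the accumulation of sums over hits, which parallelizes naturally across GPU threads.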

  14. CMS Triggers for the LHC Startup

    CERN Document Server

    Nhan Nguyen, Chi

    2009-01-01

    The LHC will collide proton beams at a bunch-crossing rate of 40 MHz. At the design luminosity of $10^{34}$ cm$^{-2}$s$^{-1}$ each crossing results in an average of about 20 inelastic pp events. The CMS trigger system is designed to reduce the input rate to about 100 Hz. This task is carried out in two steps, namely the Level-1 (L1) and the High-Level Trigger (HLT). The L1 trigger is built of customized fast electronics and is designed to reduce the rate to 100 kHz. The HLT is implemented in a filter farm running on hundreds of CPUs and is designed to reduce the rate by another factor of ~1000. It combines the traditional L2 and L3 trigger components in a novel way and allows the coherent tuning of the HLT algorithms to accommodate multiple physics channels. We will discuss the strategies for optimizing triggers covering the experiment's early physics program.

  15. The Trigger for Early Running

    CERN Document Server

    The ATLAS Collaboration

    2009-01-01

    The ATLAS trigger and data acquisition system is based on three levels of event selection designed to capture the physics of interest with high efficiency from an initial bunch crossing rate of 40 MHz. The selections in the three trigger levels must provide sufficient rejection to reduce the rate to 200 Hz, compatible with offline computing power and storage capacity. The LHC is expected to begin its operation with a peak luminosity of 10^31 with a relatively small number of bunches, but quickly ramp up to higher luminosities by increasing the number of bunches, and thus the overall interaction rate. Decisions must be taken every 25 ns during normal LHC operations at the design luminosity of 10^34, where the average bunch crossing will contain more than 20 interactions. Hence, trigger selections must be deployed that can adapt to the changing beam conditions while preserving the interesting physics and satisfying varying detector requirements. In this paper, we provide a menu of trigger selections that can be...

  16. Rate Predictions and Trigger/DAQ Resource Monitoring in ATLAS

    CERN Document Server

    Schaefer, D M; The ATLAS collaboration

    2012-01-01

    Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever increasing rate. The ATLAS experiment successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) consists of a custom-designed hardware trigger which seeds two higher software-based trigger levels. Over 300 triggers compose a trigger menu which selects physics signatures such as electrons, muons, particle jets, etc. Each trigger consumes computing resources of the ATLAS trigger system and offline storage. The LHC instantaneous luminosity conditions, desired physics goals of the collaboration, and the limits of the trigger infrastructure determine the composition of the ATLAS trigger menu. We describe a trigger monitoring framework for computing the costs of individual trigger algorithms such as data request rates and CPU consumption. This framework has been used...

  17. FPGA-based trigger system for the LUX dark matter experiment

    Science.gov (United States)

    Akerib, D. S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Bradley, A.; Bramante, R.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Chapman, J. J.; Chiller, A. A.; Chiller, C.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; de Viveiros, L.; Dobi, A.; Dobson, J. E. Y.; Druszkiewicz, E.; Edwards, B. N.; Faham, C. H.; Fiorucci, S.; Gaitskell, R. J.; Gehman, V. M.; Ghag, C.; Gibson, K. R.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Ihm, M.; Jacobsen, R. G.; Ji, W.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lee, C.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Malling, D. C.; Manalaysay, A. G.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O`Sullivan, K.; Oliver-Mallory, K. C.; Ott, R. A.; Palladino, K. J.; Pangilinan, M.; Pease, E. K.; Phelps, P.; Reichhart, L.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Skulski, W.; Solovov, V. N.; Sorensen, P.; Stephenson, S.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Yin, J.; Young, S. K.; Zhang, C.

    2016-05-01

    LUX is a two-phase (liquid/gas) xenon time projection chamber designed to detect nuclear recoils resulting from interactions with dark matter particles. Signals from the detector are processed with an FPGA-based digital trigger system that analyzes the incoming data in real time, with a latency of just a few microseconds. The system enables first-pass selection of events of interest based on their pulse-shape characteristics and 3D localization of the interactions. It has been shown to be >99% efficient in triggering on S2 signals induced by only a few extracted liquid electrons. It has been operating continuously and reliably since its full underground deployment in early 2013. This document is an overview of the system's capabilities, its inner workings, and its performance.
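
    The pulse-shape selection described above can be sketched in a minimal, hypothetical form: in a two-phase xenon TPC, prompt S1 scintillation pulses are narrow while S2 electroluminescence pulses are much wider, so a width cut already separates the two classes. The thresholds, sampling period, and width cut below are invented for illustration and are not LUX parameters:

```python
# Hypothetical pulse-shape classifier: tag a digitized pulse as S1-like
# (narrow) or S2-like (wide) by its time above threshold. All numeric
# parameters are illustrative assumptions, not LUX trigger settings.
def classify_pulse(samples, baseline=0.0, threshold=5.0,
                   sample_ns=10, s2_min_width_ns=300):
    """Return 'S2' if the pulse stays above threshold long enough,
    'S1' if it crosses threshold but is narrow, else 'none'."""
    above = [s - baseline > threshold for s in samples]
    width_ns = sum(above) * sample_ns
    if width_ns == 0:
        return "none"
    return "S2" if width_ns >= s2_min_width_ns else "S1"

narrow = [0, 8, 20, 9, 0]     # ~30 ns above threshold
wide = [0] + [12] * 40 + [0]  # ~400 ns above threshold
print(classify_pulse(narrow))  # -> S1
print(classify_pulse(wide))    # -> S2
```

    A real FPGA trigger would evaluate such conditions in fixed-latency pipelined logic rather than in software, but the decision logic is of this shape.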

  18. FPGA-based trigger system for the LUX dark matter experiment

    Energy Technology Data Exchange (ETDEWEB)

    Akerib, D. S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Bradley, A.; Bramante, R.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Chapman, J. J.; Chiller, A. A.; Chiller, C.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; de Viveiros, L.; Dobi, A.; Dobson, J. E. Y.; Druszkiewicz, E.; Edwards, B. N.; Faham, C. H.; Fiorucci, S.; Gaitskell, R. J.; Gehman, V. M.; Ghag, C.; Gibson, K. R.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Ihm, M.; Jacobsen, R. G.; Ji, W.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lee, C.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Malling, D. C.; Manalaysay, A. G.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D. -M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O׳Sullivan, K.; Oliver-Mallory, K. C.; Ott, R. A.; Palladino, K. J.; Pangilinan, M.; Pease, E. K.; Phelps, P.; Reichhart, L.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Skulski, W.; Solovov, V. N.; Sorensen, P.; Stephenson, S.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Yin, J.; Young, S. K.; Zhang, C.

    2016-05-01

    LUX is a two-phase (liquid/gas) xenon time projection chamber designed to detect nuclear recoils resulting from interactions with dark matter particles. Signals from the detector are processed with an FPGA-based digital trigger system that analyzes the incoming data in real time, with a latency of just a few microseconds. The system enables first-pass selection of events of interest based on their pulse-shape characteristics and 3D localization of the interactions. It has been shown to be >99% efficient in triggering on S2 signals induced by only a few extracted liquid electrons. It has been operating continuously and reliably since its full underground deployment in early 2013. This document is an overview of the system's capabilities, its inner workings, and its performance.

  19. Implementation of a Direct Link between the LHC Beam Interlock System and the LHC Beam Dumping System Re-Triggering Lines

    CERN Document Server

    Gabourin, S; Denz, R; Magnin, N; Uythoven, J; Wollmann, D; Zerlauth, M; Vatansever, V; Bartholdt, M; Bertsche, B; Zeiler, P

    2014-01-01

    To avoid damage of accelerator equipment due to impacting beam, the controlled removal of the LHC beams from the collider rings towards the dump blocks must be guaranteed at all times. When a beam dump is demanded, the Beam Interlock System communicates this request to the Trigger Synchronisation and Distribution System of the LHC Beam Dumping System. Both systems were built according to high reliability standards. To further reduce the risk of incapability to dump the beams in case of correlated failures in the Trigger Synchronisation and Distribution System, a new direct link from the Beam Interlock System to the re-triggering lines of the LHC Beam Dumping System will be implemented for the start-up with beam in 2015. The link represents a diverse redundancy to the current implementation, which should neither significantly increase the risk for so-called asynchronous beam dumps nor compromise machine availability. This paper describes the implementation choices of this link. Furthermore the results of a rel...

  20. Simulation of the ATLAS New Small Wheel trigger

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00399900; The ATLAS collaboration

    2018-01-01

    The instantaneous luminosity of the LHC will increase up to a factor of seven with respect to the original design value to explore physics at a higher energy scale. The inner station of the ATLAS muon end-cap system (Small Wheel) will be replaced by the New Small Wheel (NSW) to benefit from the high luminosity. The NSW will provide precise track-segment information to the Level-1 trigger system in order to suppress the trigger rate from fake muon tracks. This article summarizes the NSW trigger decision system and track-segment finding algorithm implemented in the trigger processor, and discusses results of performance studies on the trigger system. The results demonstrate that the NSW trigger system is capable of operating with good performance, satisfying the requirements.

  1. The LHCb trigger

    International Nuclear Information System (INIS)

    Korolko, I.

    1998-01-01

    This paper describes progress in the development of the LHCb trigger system since the letter of intent. The trigger philosophy has significantly changed, resulting in an increase of trigger efficiency for signal B events. It is proposed to implement a level-1 vertex topology trigger in specialised hardware. (orig.)

  2. Towards a Level-1 Tracking Trigger for the ATLAS Experiment

    CERN Document Server

    De Santo, A; The ATLAS collaboration

    2016-01-01

    In preparation for the high-luminosity phase of the Large Hadron Collider, ATLAS is planning a trigger upgrade that will enable the experiment to use tracking information already at the first trigger level. This will provide enhanced background rejection power at trigger level while preserving much needed flexibility for the trigger system. The status and current plans for the new ATLAS Level-1 tracking trigger are presented.

  3. TRIGGER

    CERN Multimedia

    W. Smith

    At the December meeting, the CMS trigger group reported on progress in production, tests in the Electronics Integration Center (EIC) in Prevessin 904, progress on trigger installation in the underground counting room at point 5, USC55, and results from the Magnet Test and Cosmic Challenge (MTCC) phase II. The trigger group is engaged in the final stages of production testing, systems integration, and software and firmware development. Most systems are delivering final tested electronics to CERN. The installation in USC55 is underway and moving towards integration testing. A program of orderly connection and checkout with subsystems and central systems has been developed. This program includes a series of vertical subsystem slice tests providing validation of a portion of each subsystem from front-end electronics through the trigger and DAQ to data captured and stored. This is combined with operations and testing without beam that will continue until startup. The plans for start-up, pilot and early running tri...

  4. Upgrade of the CMS Global Muon Trigger

    CERN Document Server

    Jeitler, Manfred; Rabady, Dinyar; Sakulin, Hannes; Stahl, Achim

    2015-01-01

    The increase in center-of-mass energy and luminosity for Run-II of the Large Hadron Collider poses new challenges for the trigger systems of the experiments. To keep triggering with a similar performance as in Run-I, the CMS muon trigger is currently being upgraded. The new algorithms will provide higher resolution, especially for the muon transverse momentum and will make use of isolation criteria that combine calorimeter with muon information already in the level-1 trigger. The demands of the new algorithms can only be met by upgrading the level-1 trigger system to new powerful FPGAs with high bandwidth I/O. The processing boards will be based on the new μTCA standard. We report on the planned algorithms for the upgraded Global Muon Trigger (μGMT) which sorts and removes duplicates from boundaries of the muon trigger sub-systems. Furthermore, it determines how isolated the muon candidates are based on calorimetric energy deposits. The μGMT will be implemented using a processing board that features a larg...

  5. Upgrade of the CMS Global Muon Trigger

    CERN Document Server

    Lingemann, Joschka; Sakulin, Hannes; Jeitler, Manfred; Stahl, Achim

    2015-01-01

    The increase in center-of-mass energy and luminosity for Run 2 of the Large Hadron Collider poses new challenges for the trigger systems of the experiments. To keep triggering with a similar performance as in Run 1, the CMS muon trigger is currently being upgraded. The new algorithms will provide higher resolution, especially for the muon transverse momentum and will make use of isolation criteria that combine calorimeter with muon information already in the level-1 trigger. The demands of the new algorithms can only be met by upgrading the level-1 trigger system to new powerful FPGAs with high bandwidth I/O. The processing boards will be based on the new microTCA standard. We report on the planned algorithms for the upgraded Global Muon Trigger (GMT) which combines information from the muon trigger sub-systems and assigns the isolation variable. The upgraded GMT will be implemented using a Master Processor 7 card, built by Imperial College, that features a large Xilinx Virtex 7 FPGA. Up to 72 optical links at...

  6. Graphics Processors in HEP Low-Level Trigger Systems

    International Nuclear Information System (INIS)

    Ammendola, Roberto; Biagioni, Andrea; Chiozzi, Stefano; Ramusino, Angelo Cotta; Cretaro, Paolo; Lorenzo, Stefano Di; Fantechi, Riccardo; Fiorini, Massimiliano; Frezza, Ottorino; Lamanna, Gianluca; Cicero, Francesca Lo; Lonardo, Alessandro; Martinelli, Michele; Neri, Ilaria; Paolucci, Pier Stanislao; Pastorelli, Elena; Piandani, Roberto; Pontisso, Luca; Rossetti, Davide; Simula, Francesco; Sozzi, Marco; Vicini, Piero

    2016-01-01

    Usage of Graphics Processing Units (GPUs) in the so called general-purpose computing is emerging as an effective approach in several fields of science, although so far applications have been employing GPUs typically for offline computations. Taking into account the steady performance increase of GPU architectures in terms of computing power and I/O capacity, the real-time applications of these devices can thrive in high-energy physics data acquisition and trigger systems. We will examine the use of online parallel computing on GPUs for the synchronous low-level trigger, focusing on tests performed on the trigger system of the CERN NA62 experiment. To successfully integrate GPUs in such an online environment, latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Furthermore, it is assessed how specific trigger algorithms can be parallelized and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen Large Hadron Collider (LHC) luminosity upgrade where highly selective algorithms will be essential to maintain sustainable trigger rates with very high pileup

  7. Data analysis at Level-1 Trigger level

    CERN Document Server

    Wittmann, Johannes; Aradi, Gregor; Bergauer, Herbert; Jeitler, Manfred; Wulz, Claudia; Apanasevich, Leonard; Winer, Brian; Puigh, Darren Michael

    2017-01-01

    With ever increasing luminosity at the LHC, optimum online data selection is getting more and more important. While in the case of some experiments (LHCb and ALICE) this task is being completely transferred to computer farms, the others - ATLAS and CMS - will not be able to do this in the medium-term future for technological, detector-related reasons. Therefore, these experiments pursue the complementary approach of migrating more and more of the offline and High-Level Trigger intelligence into the trigger electronics. This paper illustrates how the Level-1 Trigger of the CMS experiment and in particular its concluding stage, the Global Trigger, take up this challenge.

  8. Graphics Processing Units for HEP trigger systems

    International Nuclear Information System (INIS)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.

    2016-01-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for the synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  9. Graphics Processing Units for HEP trigger systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R. [INFN Sezione di Roma “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Roma (Italy); Bauce, M. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Biagioni, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Chiozzi, S.; Cotta Ramusino, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Fantechi, R. [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); CERN, Geneve (Switzerland); Fiorini, M. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Giagu, S. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Gianoli, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Lamanna, G., E-mail: gianluca.lamanna@cern.ch [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN Laboratori Nazionali di Frascati, Via Enrico Fermi 40, 00044 Frascati (Roma) (Italy); Lonardo, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Messina, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); and others

    2016-07-11

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for the synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  10. GPUs for real-time processing in HEP trigger systems

    CERN Document Server

    Ammendola, R; Deri, L; Fiorini, M; Frezza, O; Lamanna, G; Lo Cicero, F; Lonardo, A; Messina, A; Sozzi, M; Pantaleo, F; Paolucci, Ps; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P

    2014-01-01

    We describe a pilot project (GAP - GPU Application Project) for the use of GPUs (Graphics Processing Units) for online triggering applications in High Energy Physics experiments. Two major trends can be identified in the development of trigger and DAQ systems for particle physics experiments: the massive use of general-purpose commodity systems such as commercial multicore PC farms for data acquisition, and the reduction of trigger levels implemented in hardware, towards a fully software data selection system ("trigger-less"). The innovative approach presented here aims at exploiting the parallel computing power of commercial GPUs to perform fast computations in software not only in high-level trigger stages but also in early trigger stages. General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the incre...

  11. A high-resolution TDC-based board for a fully digital trigger and data acquisition system in the NA62 experiment at CERN

    CERN Document Server

    Pedreschi, Elena; Angelucci, Bruno; Avanzini, Carlo; Galeotti, Stefano; Lamanna, Gianluca; Magazzù, Guido; Pinzino, Jacopo; Piandani, Roberto; Sozzi, Marco; Spinella, Franco; Venditti, Stefano

    2015-01-01

    A Time to Digital Converter (TDC) based system, to be used for most sub-detectors in the high-flux rare-decay experiment NA62 at CERN SPS, was built as part of the NA62 fully digital Trigger and Data AcQuisition system (TDAQ), in which the TDC Board (TDCB) and a general-purpose motherboard (TEL62) will play a fundamental role. While TDCBs, housing four High Performance Time to Digital Converters (HPTDC), measure hit times from sub-detectors, the motherboard processes and stores them in a buffer, produces trigger primitives from different detectors and extracts only data related to the lowest trigger level decision, once this is taken on the basis of the trigger primitives themselves. The features of the TDCB board developed by the Pisa NA62 group are extensively discussed and performance data is presented in order to show its compliance with the experiment requirements.
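
    A TDC of the kind described above typically assembles a hit time from a coarse clock counter plus a fine interpolation bin. A minimal sketch of that decoding, assuming a 25 ns clock and 256 fine bins per clock period (roughly the ~100 ps bin width of an HPTDC high-resolution mode; both numbers are illustrative assumptions, not a statement of the TDCB's actual configuration):

```python
# Sketch: combine a coarse clock count and a fine interpolator bin into
# a hit time. CLOCK_NS and FINE_BINS are illustrative assumptions.
CLOCK_NS = 25.0   # assumed reference clock period
FINE_BINS = 256   # assumed fine-time bins per clock period

def hit_time_ns(coarse_count, fine_bin):
    """Hit time = coarse counts * clock period + fine bin * bin width."""
    return coarse_count * CLOCK_NS + fine_bin * (CLOCK_NS / FINE_BINS)

print(hit_time_ns(4, 0))    # -> 100.0
print(hit_time_ns(4, 128))  # -> 112.5
```

    In the real system the motherboard works on such timestamps directly in integer form; the floating-point conversion here is only for readability.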

  12. Electronic trigger for the ASP experiment

    International Nuclear Information System (INIS)

    Wilson, R.J.

    1985-11-01

    The Anomalous Single Photon (ASP) electronic trigger is described. The experiment is based on an electromagnetic calorimeter composed of arrays of lead glass blocks, read out with photomultiplier tubes, surrounding the interaction point at the PEP storage ring. The primary requirement of the trigger system is to be sensitive to low-energy (approximately 0.5 GeV and above) photons whilst discriminating against high backgrounds at PEP. Analogue summing of the PMT signals and a sequence of programmable digital look-up tables produces a "dead-timeless" trigger for the beam collision rate of 408 kHz. 6 refs., 6 figs

  13. The ATLAS Trigger System: Ready for Run-2

    CERN Document Server

    Maeda, Junpei; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009 and 2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high-level trigger that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the data-taking period of Run-2 the LHC will operate at a centre-of-mass energy of about 13 TeV resulting in roughly five times higher trigger rates. In these proceedings, we briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the Level-1 calorimeter and muon trigger system, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level higher-level trigger system into a single even...

  14. Secondary Restoration Control of Islanded Microgrids With Decentralized Event-triggered Strategy

    DEFF Research Database (Denmark)

    Guerrero, Josep M.; Chen, Meng; Xiao, Xiangning

    2018-01-01

    Distributed cooperative control methods attract more and more attention in microgrid secondary control because they are more reliable and flexible. However, the traditional methods rely on periodic communication, which is neither economic nor efficient due to its large communication burden. By introducing event-triggered mechanisms in the feedback control laws, the proposed control strategies require communication between distributed secondary controllers only at particular instants, while providing frequency and voltage restoration and accurate active power sharing. The stability and inter-event interval are also analyzed in this paper. An islanded microgrid test system is built in PSCAD/EMTDC to validate the proposed control strategies. It shows that the proposed secondary control strategies based on the event-triggered approach can greatly reduce the inter-agent communication.
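
    The core event-triggered idea, common to this class of controllers, is that an agent broadcasts its state to neighbors only when the deviation from the last broadcast value exceeds a threshold, rather than at every sampling period. A minimal sketch of that triggering condition (the constant threshold and the sample trajectory are illustrative assumptions; real designs often use state-dependent or decaying thresholds):

```python
# Sketch of a static event-triggering condition: broadcast only when the
# state has drifted from the last transmitted value by more than a fixed
# threshold. Threshold and trajectory are illustrative assumptions.
def event_triggered_broadcasts(states, threshold=0.5):
    """Return the sample indices at which a broadcast is triggered."""
    triggered = [0]          # the initial state is always sent
    last_sent = states[0]
    for k, x in enumerate(states[1:], start=1):
        if abs(x - last_sent) > threshold:  # triggering condition
            triggered.append(k)
            last_sent = x
    return triggered

freq_deviation = [0.0, 0.2, 0.6, 0.7, 1.2, 1.3, 1.9]
print(event_triggered_broadcasts(freq_deviation))  # -> [0, 2, 4, 6]
```

    Here 7 sampling instants lead to only 4 transmissions, which is exactly the communication saving the abstract reports.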

  15. SSC physics signatures and trigger requirements

    International Nuclear Information System (INIS)

    1985-01-01

    Strategies are considered for triggering on new physics processes on the environment of the SSC, where interaction rates will be very high and most new physics processes quite rare. The quantities available for use in the trigger at various levels are related to the signatures of possible new physics. Two examples were investigated in some detail using the ISAJET Monte Carlo program: Higgs decays to W pairs and a missing energy trigger applied to gluino pair production. In both of the examples studied in detail, it was found that workable strategies for reducing the trigger rate were obtainable which also produced acceptable efficiency for the processes of interest. In future work, it will be necessary to carry out such a program for the full spectrum of suggested new physics

  16. Impact of High-Reliability Education on Adverse Event Reporting by Registered Nurses.

    Science.gov (United States)

    McFarland, Diane M; Doucette, Jeffrey N

    Adverse event reporting is one strategy to identify risks and improve patient safety, but, historically, adverse events have been underreported by registered nurses (RNs) because of fear of retribution and blame. An education program on high reliability was provided to examine whether it would impact RNs' willingness to report adverse events. Although the findings were not statistically significant, they demonstrated a positive impact on adverse event reporting and support the need to create a culture of high reliability.

  17. Sum-Trigger-II status and prospective physics

    Energy Technology Data Exchange (ETDEWEB)

    Dazzi, Francesco; Mirzoyan, Razmik; Schweizer, Thomas; Teshima, Masahiro [Max Planck Institut fuer Physik, Munich (Germany); Herranz, Diego; Lopez, Marcos [Universidad Complutense, Madrid (Spain); Mariotti, Mose [Universita degli Studi di Padova (Italy); Nakajima, Daisuke [The University of Tokyo (Japan); Rodriguez Garcia, Jezabel [Max Planck Institut fuer Physik, Munich (Germany); Instituto Astrofisico de Canarias, Tenerife (Spain)

    2015-07-01

    MAGIC is a stereoscopic system of 2 Imaging Air Cherenkov Telescopes (IACTs) for very high energy gamma-ray astronomy, located at La Palma (Spain). Lowering the energy threshold of IACTs is crucial for the observation of Pulsars, high redshift AGNs and GRBs. A novel trigger strategy, based on the analogue sum of a patch of pixels, can lead to a lower threshold compared to conventional digital triggers. In the last years, a major upgrade of the MAGIC telescopes took place in order to optimize the performance, mainly in the low energy domain. The PMT camera and the reflective surface of MAGIC-I, as well as both readout systems, have been deeply renovated. The last important milestone is the implementation of a new stereoscopic analogue trigger, dubbed Sum-Trigger-II. The installation was successfully completed in 2014 and the first data set has already been taken. Currently the fine-tuning of the main parameters as well as the comparison with Monte Carlo studies is ongoing. In this talk the status of Sum-Trigger-II and the prospective physics cases at very low energy are presented.

  18. Gearbox Reliability Collaborative High-Speed Shaft Calibration

    Energy Technology Data Exchange (ETDEWEB)

    Keller, J.; McNiff, B.

    2014-09-01

    Instrumentation has been added to the high-speed shaft, pinion, and tapered roller bearing pair of the Gearbox Reliability Collaborative gearbox to measure loads and temperatures. The new shaft bending moment and torque instrumentation was calibrated and the purpose of this document is to describe this calibration process and results, such that the raw shaft bending and torque signals can be converted to the proper engineering units and coordinate system reference for comparison to design loads and simulation model predictions.

  19. The LPS trigger system

    International Nuclear Information System (INIS)

    Benotto, F.; Costa, M.; Staiano, A.; Zampieri, A.; Bollito, M.; Isoardi, P.; Pernigotti, E.; Sacchi, R.; Trapani, P.P.; Larsen, H.; Massam, T.; Nemoz, C.

    1996-03-01

    The Leading Proton Spectrometer (LPS) has been equipped with microstrip silicon detectors specially designed to trigger on events with high values of x_L = |p'|/|p| ≥ 0.95, where p' and p are respectively the momenta of the outgoing and incoming protons. The LPS First Level Trigger can provide a clear tag for very high momentum protons in a kinematical region never explored before. In the following we discuss the physics motivation for tagging very forward protons and present a detailed description of the detector design, the front-end electronics, the readout electronics, the Monte Carlo simulation and some preliminary results from the 1995 data taking. (orig.)
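
    The tagging variable can be sketched directly from its definition: x_L is the ratio of the outgoing to the incoming proton momentum magnitudes, and the trigger keeps events with x_L ≥ 0.95. The momentum vectors below are invented values for illustration:

```python
import math

# Sketch of the x_L cut described above. The momenta are illustrative
# three-vectors in GeV, not data from the experiment.
def x_l(p_out, p_in):
    """x_L = |p_out| / |p_in| for momentum three-vectors."""
    mag = lambda p: math.sqrt(sum(c * c for c in p))
    return mag(p_out) / mag(p_in)

p_in = (0.0, 0.0, 820.0)     # assumed incoming proton momentum, GeV
p_out = (0.2, -0.1, 800.0)   # assumed scattered proton momentum, GeV
print(x_l(p_out, p_in) >= 0.95)  # -> True
```

    An x_L this close to 1 means the proton lost only a small fraction of its momentum, which is what distinguishes a leading proton from the inclusive hadronic final state.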

  20. Learning Organizations in High Reliability Industries

    International Nuclear Information System (INIS)

    Schwalbe, D.; Wächter, C.

    2016-01-01

    Full text: Humans make mistakes. Sometimes we learn from them. In a high reliability organization we have to learn before an error leads to an incident (or even accident). Therefore the “human factor” is most important, as most of the time the human is the last line of defense. The “human factor” is more than communication or leadership skills. In the end, it is the personal attitude. This attitude has to be safety minded. And this attitude has to be self-reflected continuously. Moreover, feedback from others is urgently needed to improve one’s personal skills daily and learn from our own experience as well as from others. (author)

  1. Finite-Horizon $H_\\infty $ Consensus for Multiagent Systems With Redundant Channels via An Observer-Type Event-Triggered Scheme.

    Science.gov (United States)

    Xu, Wenying; Wang, Zidong; Ho, Daniel W C

    2018-05-01

    This paper is concerned with the finite-horizon consensus problem for a class of discrete time-varying multiagent systems with external disturbances and missing measurements. To improve the communication reliability, redundant channels are introduced and the corresponding protocol is constructed for the information transmission over redundant channels. An event-triggered scheme is adopted to determine whether the information of agents should be transmitted to their neighbors. Subsequently, an observer-type event-triggered control protocol is proposed based on the latest received neighbors' information. The purpose of the addressed problem is to design a time-varying controller based on the observed information to achieve the consensus performance in a finite horizon. By utilizing a constrained recursive Riccati difference equation approach, some sufficient conditions are obtained to guarantee the consensus performance, and the controller parameters are also designed. Finally, a numerical example is provided to demonstrate the desired reliability of redundant channels and the effectiveness of the event-triggered control protocol.

  2. Physics performances with the new ATLAS Level-1 Topological trigger in the LHC High-Luminosity Era

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00414333; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger system aims at reducing the 40 MHz proton collision event rate to a manageable event storage rate of 1 kHz, preserving events with valuable physics content. The Level-1 trigger is the first rate-reducing step in the ATLAS trigger system, with an output rate of 100 kHz and a decision latency of less than 2.5 microseconds. It is composed of the calorimeter trigger, muon trigger and central trigger processor. During the last upgrade, a new electronics element was introduced to Level-1: L1Topo, the Topological Processor System. It will make it possible to use detailed real-time information from the Level-1 calorimeter and muon triggers, processed in individual state-of-the-art FPGA processors, to determine angles between jets and/or leptons and to calculate kinematic variables based on lists of selected/sorted objects. Over a hundred VHDL algorithms produce trigger outputs to be incorporated into the central trigger processor. Such information will be essential to improve background rejection and ...
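
    The kind of topological quantities described above (angles between trigger objects and derived kinematic variables) can be sketched for two objects given as (E_T, eta, phi). This is a generic illustration of such calculations, not the L1Topo firmware, which computes them in fixed-point FPGA logic; the input values are invented:

```python
import math

# Sketch of two common topological trigger quantities: azimuthal
# separation and the invariant mass of two massless trigger objects
# given (ET, eta, phi). Inputs are illustrative assumptions.
def delta_phi(phi1, phi2):
    """Azimuthal separation folded into [0, pi]."""
    d = abs(phi1 - phi2) % (2 * math.pi)
    return 2 * math.pi - d if d > math.pi else d

def inv_mass(et1, eta1, phi1, et2, eta2, phi2):
    """m^2 = 2*ET1*ET2*(cosh(d_eta) - cos(d_phi)) for massless objects."""
    return math.sqrt(2 * et1 * et2 *
                     (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

print(round(delta_phi(0.1, 6.2), 3))  # separation wraps around 2*pi
print(round(inv_mass(50, 0.0, 0.0, 50, 0.0, math.pi), 1))  # -> 100.0
```

    Back-to-back objects (d_phi = pi, d_eta = 0) give the largest mass for fixed transverse energies, which is why such cuts are effective for resonance-style selections.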

  3. LHCb: LHCb High Level Trigger design issues for post Long Stop 1 running

    CERN Multimedia

    Albrecht, J; Raven, G; Sokoloff, M D; Williams, M

    2013-01-01

    The LHCb High Level Trigger uses two stages of software running on an Event Filter Farm (EFF) to select events for offline reconstruction and analysis. The first stage (Hlt1) processes approximately 1 MHz of events accepted by a hardware trigger. In 2012, the second stage (Hlt2) wrote 5 kHz to permanent storage for later processing. Following the LHC's Long Stop 1 (anticipated for 2015), the machine energy will increase from 8 TeV in the center-of-mass to 13 TeV and the cross sections for beauty and charm are expected to grow proportionately. We plan to increase the Hlt2 output to 12 kHz, some for immediate offline processing, some for later offline processing, and some ready for immediate analysis. By increasing the absolute computing power of the EFF, and buffering data for processing between machine fills, we should be able to significantly increase the efficiency for signal while improving signal-to-background ratios. In this poster we will present several strategies under consideration and some of th...

  4. The ATLAS Trigger algorithms upgrade and performance in Run 2

    CERN Document Server

    Bernius, Catrin; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a centre-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the centre-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. At the Level-1 trigger the undertaken impr...

  5. Scintillation trigger system of the liquid argon neutrino detector

    International Nuclear Information System (INIS)

    Belikov, S.V.; Gurzhiev, S.N.; Gutnikov, Yu.E.; Denisov, A.G.; Kochetkov, V.I.; Matveev, M.Yu.; Mel'nikov, E.A.; Usachev, A.P.

    1994-01-01

    This paper presents the organization of the Scintillation Trigger System (STS) for the Liquid Argon Neutrino Detector of the Tagged Neutrino Facility. The STS is aimed at efficient registration of the desired type of neutrino interaction and at production of a fast trigger signal with high time resolution. The fast analysis system for the analog signals from the trigger scintillation planes, used to reject trigger signals from background processes, is described. Characteristics of the real scintillation trigger planes, obtained with the presented data acquisition system, are shown. 10 refs., 12 figs., 3 tabs

  6. BAT Triggering Performance

    Science.gov (United States)

    McLean, Kassandra M.; Fenimore, E. E.; Palmer, D. M.; BAT Team

    2006-09-01

    The Burst Alert Telescope (BAT) onboard Swift has detected and located about 160 gamma-ray bursts (GRBs) in its first twenty months of operation. BAT employs two triggering systems to find GRBs: image triggering, which looks for a new point source in the field of view, and rate triggering, which looks for a significant increase in the observed counts. The image triggering system looks at 1 minute, 5 minute, and full-pointing accumulations of counts in the detector plane in the energy range of 15-50 keV, with about 50 evaluations per pointing (about 40 minutes). The rate triggering system looks through 13 different time scales (from 4 ms to 32 s), 4 overlapping energy bins (covering 15-350 keV), 9 regions of the detector plane (from the full plane to individual quarters), and two background sampling models to search for GRBs, evaluating close to 1000 trigger criteria at a rate of about 27000 evaluations per second. Both triggering systems are working very well with the settings chosen before launch and kept since BAT was turned on. However, we now have more than a year and a half of data, as well as lessons learned from operating these triggering systems, with which to evaluate and tweak them for optimal performance.
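    The rate-triggering idea, scanning a binned count series over several time scales and flagging a statistically significant excess over the background, can be sketched as follows. This is a toy illustration, not the flight code: the global background estimate, the scale list and the significance threshold are all placeholder assumptions.

```python
import math

def rate_trigger(counts, scales=(1, 4, 16), threshold=6.5):
    """Scan a binned count series over several window widths (in bins).

    Returns (fired, scale, start_index) for the first window whose Poisson
    significance over a crude global background exceeds the threshold.
    """
    n = len(counts)
    bkg = sum(counts) / n  # crude per-bin background estimate
    for w in scales:
        for i in range(n - w + 1):
            window = sum(counts[i:i + w])
            expected = bkg * w
            # Gaussian approximation to the Poisson significance
            sig = (window - expected) / math.sqrt(max(expected, 1.0))
            if sig >= threshold:
                return True, w, i
    return False, None, None
```

A flat series never fires, while a single bright bin stands out immediately at the shortest time scale.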

  7. Triggering on New Physics with the CMS Detector

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Tulika [Boston Univ., MA (United States)

    2016-07-29

    The BU CMS group led by PI Tulika Bose has made several significant contributions to the CMS trigger and to the analysis of the data collected by the CMS experiment. Group members have played a leading role in the optimization of trigger algorithms, the development of trigger menus, and the online operation of the CMS High-Level Trigger. The group’s data analysis projects have concentrated on a broad spectrum of topics that take full advantage of their strengths in jets and calorimetry, trigger, lepton identification as well as their considerable experience in hadron collider physics. Their publications cover several searches for new heavy gauge bosons, vector-like quarks as well as diboson resonances.

  8. Educational Management Organizations as High Reliability Organizations: A Study of Victory's Philadelphia High School Reform Work

    Science.gov (United States)

    Thomas, David E.

    2013-01-01

    This executive position paper proposes recommendations for designing reform models between public and private sectors dedicated to improving school reform work in low performing urban high schools. It reviews scholarly research about for-profit educational management organizations, high reliability organizations, American high school reform, and…

  9. A System for Monitoring and Tracking the LHC Beam Spot within the ATLAS High Level Trigger

    CERN Document Server

    Bartoldus, R; The ATLAS collaboration; Cogan, J; Salnikov, A; Strauss, E; Winklmeier, F

    2012-01-01

    The parameters of the beam spot produced by the LHC in the ATLAS interaction region are computed online using the ATLAS High Level Trigger (HLT) system. The high rate of triggered events is exploited to make precise measurements of the position, size and orientation of the luminous region in near real-time, as these parameters change significantly even during a single data-taking run. We present the challenges, solutions and results for the online determination, monitoring and beam spot feedback system in ATLAS. A specially designed algorithm, which uses tracks registered in the silicon detectors to reconstruct event vertices, is executed on the HLT processor farm of several thousand CPU cores. Monitoring histograms from all the cores are sampled and aggregated across the farm every 60 seconds. The reconstructed beam values are corrected for detector resolution effects, measured in situ from the separation of vertices whose tracks have been split into two collections. Furthermore, measurements for individual ...
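    The resolution correction described above can be illustrated with a short sketch: the single-vertex resolution is estimated from the spread of split-vertex separations (each half-vertex carries the full resolution, so their difference is wider by a factor of sqrt(2)), and is then subtracted in quadrature from the raw luminous-region width. Function names and inputs are illustrative assumptions, not the ATLAS implementation:

```python
import math

def vertex_resolution_from_splits(separations):
    """Per-vertex resolution from split-vertex separations: the difference of
    two independent half-vertices has width sigma * sqrt(2)."""
    n = len(separations)
    rms = math.sqrt(sum(d * d for d in separations) / n)
    return rms / math.sqrt(2.0)

def corrected_beam_width(measured_width, resolution):
    """Unfold the vertexing resolution from the raw luminous-region width
    by subtraction in quadrature."""
    return math.sqrt(max(measured_width ** 2 - resolution ** 2, 0.0))
```

For example, a raw width of 25 um with a 15 um resolution yields a true beam width of 20 um.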

  10. TRIGGER

    CERN Multimedia

    W. Smith

    Level-1 Trigger Hardware The CERN group is working on the TTC system. Seven out of nine sub-detector TTC VME crates with all fibers cabled are installed in USC55. 17 Local Trigger Controller (LTC) boards have been received from production and are in the process of being tested. The RF2TTC module replacing the TTCmi machine interface has been delivered and will replace the TTCci module used to mimic the LHC clock. 11 out of 12 crates housing the barrel ECAL off-detector electronics have been installed in USC55 after commissioning at the Electronics Integration Centre in building 904. The cabling to the Regional Calorimeter Trigger (RCT) is terminated. The Lisbon group has completed the Synchronization and Link mezzanine board (SLB) production. The Palaiseau group has fully tested and installed 33 out of 40 Trigger Concentrator Cards (TCC). The seven remaining boards are being remade. The barrel TCC boards have been tested at the H4 test beam, and good agreement with emulator predictions was found. The cons...

  11. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Neubauer, Mark S

    2011-01-01

    In hadron collider experiments, triggering the detector to store interesting events for offline analysis is a challenge due to the high rates and multiplicities of particles produced. Maintaining high trigger efficiency for the physics we are most interested in while at the same time suppressing high rate physics from inclusive QCD processes is a difficult but important problem. It is essential that the trigger system be flexible and robust, with sufficient redundancy and operating margin. Providing high quality track reconstruction over the full ATLAS detector by the start of processing at LVL2 is an important element to achieve these needs. As the instantaneous luminosity increases, the computational load on the LVL2 system will significantly increase due to the need for more sophisticated algorithms to suppress backgrounds. The Fast Tracker (FTK) is a proposed upgrade to the ATLAS trigger system. It is designed to enable early rejection of background events and thus leave more LVL2 execution time by moving...

  12. The ATLAS Trigger System : Ready for Run-2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00211007; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009 and 2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware-based Level-1 (L1) trigger and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the course of the ongoing Run-2 data-taking campaign at 13 TeV centre-of-mass energy the trigger rates will be approximately 5 times higher compared to Run-1. In these proceedings we briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger system, the introduction of a new L1 topological trigger subsystem and the merging of the previously two-level HLT system into a single ev...

  13. ATLAS LAr Calorimeter Trigger Electronics Phase-1 Upgrade

    CERN Document Server

    Aad, Georges; The ATLAS collaboration

    2017-01-01

    The upgrade of the Large Hadron Collider (LHC) scheduled for the 2019-2020 shutdown period, referred to as the Phase-I upgrade, will increase the instantaneous luminosity to about three times the design value. Since the current ATLAS trigger system does not allow a sufficient increase of the trigger rate, an improvement of the trigger system is required. The Liquid Argon (LAr) Calorimeter read-out will therefore be modified to use digital trigger signals with a higher spatial granularity in order to improve the identification efficiencies of electrons, photons, taus, jets and missing energy, at high background rejection rates, at the Level-1 trigger. The new trigger signals will be arranged in 34000 so-called Super Cells, which achieve 5-10 times finer granularity than the trigger towers currently used and allow an improved background rejection. The trigger read-out will process the signals of the Super Cells at every LHC bunch-crossing with 12-bit precision at a frequency of 40 MHz. The data will...
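    The quoted figures imply a substantial raw trigger data rate. As a back-of-the-envelope check (ignoring encoding and serialization overheads, which the real read-out adds):

```python
def lar_trigger_bandwidth(n_cells=34_000, bits=12, f_bx_hz=40e6):
    """Raw digitized Super Cell data rate: cells x bits x bunch-crossing rate,
    in bits per second."""
    return n_cells * bits * f_bx_hz

# 34000 cells x 12 bits x 40 MHz = 16.32 Tb/s before overheads
rate_tbps = lar_trigger_bandwidth() / 1e12
```

This order-of-magnitude figure is why the Super Cell data are carried on dedicated high-speed optical links rather than the legacy trigger-tower path.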

  14. The CMS Barrel Muon trigger upgrade

    International Nuclear Information System (INIS)

    Triossi, A.; Sphicas, P.; Bellato, M.; Montecassiano, F.; Ventura, S.; Ruiz, J.M. Cela; Bedoya, C. Fernandez; Tobar, A. Navarro; Fernandez, I. Redondo; Ferrero, D. Redondo; Sastre, J.; Ero, J.; Wulz, C.; Flouris, G.; Foudas, C.; Loukas, N.; Mallios, S.; Paradas, E.; Guiducci, L.; Masetti, G.

    2017-01-01

    The increase in luminosity expected at the LHC during Phase 1 will impose tighter constraints on rate reduction in order to maintain high efficiency in the CMS Level-1 trigger system. The TwinMux system is the early layer of the muon barrel region that concentrates the information from different subdetectors: Drift Tubes, Resistive Plate Chambers and the Outer Hadron Calorimeter. It arranges the slow optical trigger links from the detector chambers into faster links (10 Gbps) that are sent in multiple copies to the track finders. Results from collision runs, which confirm the satisfactory operation of the trigger system up to the output of the barrel track finder, will be shown.

  15. The LHCb trigger in Run II

    CERN Document Server

    Michielin, Emanuele

    2016-01-01

    The LHCb trigger system has been upgraded to allow alignment, calibration and physics analysis to be performed in real time. An increased CPU capacity and improvements in the software have allowed lifetime unbiased selections of beauty and charm decays in the high level trigger. Thanks to offline quality event reconstruction already available online, physics analyses can be performed directly on this information and for the majority of charm physics selections a reduced event format can be written out. Beauty hadron decays are more efficiently triggered by re-optimised inclusive selections, and the HLT2 output event rate is increased by a factor of three.

  16. Upgrades of the ATLAS trigger system

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00221618; The ATLAS collaboration

    2018-01-01

    In coming years the LHC is expected to undergo upgrades to increase both the energy of proton-proton collisions and the instantaneous luminosity. In order to cope with these more challenging LHC conditions, upgrades of the ATLAS trigger system will be required. This talk will focus on some of the key aspects of these upgrades. Firstly, the upgrade period between 2019-2021 will see an increase in instantaneous luminosity to $3\times10^{34}~\mathrm{cm^{-2}s^{-1}}$. Upgrades to the Level-1 trigger system during this time will include improvements for both the muon and calorimeter triggers. These include the upgrade of the first-level Endcap Muon trigger, the calorimeter trigger electronics and the addition of new calorimeter feature extractor hardware, such as the Global Feature Extractor (gFEX). An overview will be given on the design and development status of the aforementioned systems, along with the latest testing and validation results. By 2026, the High Luminosity LHC will be able to deliver 14 TeV collisions ...

  17. Online software trigger at PANDA/FAIR

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Donghee; Kliemt, Ralf; Nerling, Frank [Helmholtz-Institut Mainz (Germany); Denig, Achim [Institut fuer Kernphysik, Universitaet Mainz (Germany); Goetzen, Klaus; Peters, Klaus [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH (Germany); Collaboration: PANDA-Collaboration

    2014-07-01

    The PANDA experiment at FAIR will employ a novel trigger-less read-out system. Since a conventional hardware trigger concept is not suitable for PANDA, a high-level online event filter will be applied to perform fast event selection based on physics properties of the reconstructed events. A trigger-less data stream implies an event selection with track reconstruction and pattern recognition performed online, and thus data analysed under real-time conditions at event rates of up to 40 MHz. The projected data rate reduction of about three orders of magnitude requires an effective background rejection, while retaining interesting signal events. Real-time event selection in the environment of hadronic reactions is rather challenging and relies on sophisticated algorithms for the software trigger. The implementation and the performance of physics trigger algorithms, presently studied with realistic Monte Carlo simulations, are discussed. The impact of parameters such as momentum or mass resolution, PID probability, vertex reconstruction and a multivariate analysis using the TMVA package for event filtering is presented.

  18. High level issues in reliability quantification of safety-critical software

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2012-01-01

    For the purpose of developing a consensus method for the reliability assessment of safety-critical digital instrumentation and control systems in nuclear power plants, several high level issues in reliability assessment of the safety-critical software based on Bayesian belief network modeling and statistical testing are discussed. Related to the Bayesian belief network modeling, the relation between the assessment approach and the sources of evidence, the relation between qualitative evidence and quantitative evidence, how to consider qualitative evidence, and the cause-consequence relation are discussed. Related to the statistical testing, the need of the consideration of context-specific software failure probabilities and the inability to perform a huge number of tests in the real world are discussed. The discussions in this paper are expected to provide a common basis for future discussions on the reliability assessment of safety-critical software. (author)

  19. Transferring Aviation Practices into Clinical Medicine for the Promotion of High Reliability.

    Science.gov (United States)

    Powell-Dunford, Nicole; McPherson, Mark K; Pina, Joseph S; Gaydos, Steven J

    2017-05-01

    Aviation is a classic example of a high reliability organization (HRO): an organization in which catastrophic events would be expected to occur in the absence of control measures. As health care systems transition toward high reliability, aviation practices are increasingly transferred for clinical implementation. A PubMed search using the terms aviation, crew resource management, and patient safety was undertaken. Manuscripts authored by physician pilots and accident investigation regulations were analyzed. Subject matter experts involved in adoption of aviation practices into the medical field were interviewed. The PubMed search yielded 621 results with 22 relevant for inclusion. Improved clinical outcomes were noted in five research trials in which aviation practices were adopted, particularly with regard to checklist usage and crew resource management training. Effectiveness of interventions was influenced by intensity of application, leadership involvement, and provision of staff training. The usefulness of incorporating mishap investigation techniques has not been established. Whereas aviation accident investigation is highly standardized, the investigation of medical error is characterized by variation. The adoption of aviation practices into clinical medicine facilitates an evolution toward high reliability. Evidence for the efficacy of the checklist and crew resource management training is robust. Transference of aviation accident investigation practices is preliminary. A standardized, independent investigation process could facilitate the development of a safety culture commensurate with that achieved in the aviation industry. Powell-Dunford N, McPherson MK, Pina JS, Gaydos SJ. Transferring aviation practices into clinical medicine for the promotion of high reliability. Aerosp Med Hum Perform. 2017; 88(5):487-491.

  20. Reliability of force-velocity relationships during deadlift high pull.

    Science.gov (United States)

    Lu, Wei; Boyas, Sébastien; Jubeau, Marc; Rahmani, Abderrahmane

    2017-11-13

    This study aimed to evaluate the within- and between-session reliability of force, velocity and power performances and to assess the force-velocity relationship during the deadlift high pull (DHP). Nine participants performed two identical sessions of DHP with loads ranging from 30 to 70% of body mass. The force was measured by a force plate under the participants' feet. The velocity of the 'body + lifted mass' system was calculated by integrating the acceleration, and the power was calculated as the product of force and velocity. The force-velocity relationships were obtained from linear regression of both mean and peak values of force and velocity. The within- and between-session reliability was evaluated by using coefficients of variation (CV) and intraclass correlation coefficients (ICC). Results showed that the DHP force-velocity relationships were significantly linear (R² > 0.90). Reliability was high (ICC > 0.94), and mean and peak velocities showed good agreement (low CV). The DHP force-velocity relationships were thus reliable and can therefore be utilised as a tool to characterise individuals' muscular profiles.
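    Linear force-velocity profiling of this kind reduces to an ordinary least-squares fit F = F0 - a*v, from which the velocity intercept v0 = F0/a and the apex of the parabolic power-velocity curve, Pmax = F0*v0/4, follow. A minimal sketch, assuming the standard linear model (the function names are ours and the study's exact processing is not reproduced here):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def fv_profile(velocities, forces):
    """Fit F = F0 - a*v and return (F0, v0, Pmax)."""
    slope, f0 = linear_fit(velocities, forces)
    a = -slope                 # force lost per unit of velocity
    v0 = f0 / a                # extrapolated velocity at zero force
    return f0, v0, f0 * v0 / 4.0
```

With synthetic data generated from F = 1000 - 250*v, the fit recovers F0 = 1000 N, v0 = 4 m/s and Pmax = 1000 W exactly.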

  1. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Neubauer, M; The ATLAS collaboration

    2011-01-01

    In hadron collider experiments, triggering the detector to store interesting events for offline analysis is a challenge due to the high rates and multiplicities of particles produced. The LHC will soon operate at a center-of-mass energy of 14 TeV and at high instantaneous luminosities of the order of $10^{34}$ to $10^{35}$ cm$^{-2}$ s$^{-1}$. A multi-level trigger strategy is used in ATLAS, with the first level (LVL1) implemented in hardware and the second and third levels (LVL2 and EF) implemented in a large computer farm. Maintaining high trigger efficiency for the physics we are most interested in while at the same time suppressing high rate physics from inclusive QCD processes is a difficult but important problem. It is essential that the trigger system be flexible and robust, with sufficient redundancy and operating margin. Providing high quality track reconstruction over the full ATLAS detector by the start of processing at LVL2 is an important element to achieve these needs. As the instantaneous lumino...

  2. Direct unavailability computation of a maintained highly reliable system

    Czech Academy of Sciences Publication Activity Database

    Briš, R.; Byczanski, Petr

    2010-01-01

    Roč. 224, č. 3 (2010), s. 159-170 ISSN 1748-0078 Grant - others:GA Mšk(CZ) MSM6198910007 Institutional research plan: CEZ:AV0Z30860518 Keywords: high reliability * availability * directed acyclic graph Subject RIV: BA - General Mathematics http://journals.pepublishing.com/content/rtp3178l17923m46/

  3. Development of the new trigger processor board for the ATLAS Level-1 endcap muon trigger for Run-3

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00525035; The ATLAS collaboration

    2017-01-01

    The instantaneous luminosity of the LHC will be increased by up to a factor of three with respect to the original design value in Run-3 (starting 2021). To suppress the Level-1 trigger rate, the ATLAS Level-1 end-cap muon trigger in LHC Run-3 will identify muons by combining data from the Thin-Gap Chamber detector (TGC) and the New Small Wheel (NSW), a new detector that will be able to operate at the high background hit rates of Run-3. In order to handle data from both the TGC and the NSW, a new trigger processor board has been developed. The board has a modern FPGA to make use of multi-gigabit transceiver technology. The readout system for trigger data has also been designed with TCP/IP instead of a dedicated ASIC. This letter presents the electronics and firmware of the ATLAS Level-1 end-cap muon trigger processor board for LHC Run-3.

  4. Sensor-triggered sampling to determine instantaneous airborne vapor exposure concentrations.

    Science.gov (United States)

    Smith, Philip A; Simmons, Michael K; Toone, Phillip

    2018-06-01

    It is difficult to measure transient airborne exposure peaks by means of integrated sampling for organic chemical vapors, even with very short-duration sampling. Selection of an appropriate time to measure an exposure peak through integrated sampling is problematic, and short-duration time-weighted average (TWA) values obtained with integrated sampling are not likely to accurately determine actual peak concentrations attained when concentrations fluctuate rapidly. Laboratory analysis for integrated exposure samples is preferred from a certainty standpoint over results derived in the field from a sensor, as a sensor user typically must overcome specificity issues and a number of potential interfering factors to obtain similarly reliable data. However, sensors are currently needed to measure intra-exposure period concentration variations (i.e., exposure peaks). In this article, the digitized signal from a photoionization detector (PID) sensor triggered collection of whole-air samples when toluene or trichloroethylene vapors attained pre-determined levels in a laboratory atmosphere generation system. Analysis by gas chromatography-mass spectrometry of whole-air samples (with both 37 and 80% relative humidity) collected using the triggering mechanism with rapidly increasing vapor concentrations showed good agreement with the triggering set point values. Whole-air samples (80% relative humidity) in canisters demonstrated acceptable 17-day storage recoveries, and acceptable precision and bias were obtained. The ability to determine exceedance of a ceiling or peak exposure standard by laboratory analysis of an instantaneously collected sample, and to simultaneously provide a calibration point to verify the correct operation of a sensor was demonstrated. This latter detail may increase the confidence in reliability of sensor data obtained across an entire exposure period.
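    The triggering mechanism described above, a sensor signal crossing a pre-determined set point and initiating a whole-air grab sample, can be sketched as follows. This is a conceptual illustration, not the published apparatus logic; the debounce requirement (several consecutive readings at or above the set point, to reject transient noise) is our assumption:

```python
def sample_trigger(readings_ppm, setpoint_ppm, debounce=3):
    """Return the index at which a canister grab sample would be triggered.

    The sensor must read at or above the set point for `debounce` consecutive
    samples before the sample valve fires; returns None if never triggered.
    """
    run = 0
    for i, ppm in enumerate(readings_ppm):
        run = run + 1 if ppm >= setpoint_ppm else 0
        if run >= debounce:
            return i
    return None
```

Triggering on a sustained excursion rather than a single reading trades a small delay for robustness against spikes in the PID signal.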

  5. The DELPHI Trigger System at LEP2 Energies

    CERN Document Server

    Augustinus, A; Charpentier, P; De Wulf, J P; Fontanelli, F; Formenti, F; Gaspar, C; Gavillet, P; Goorens, R; Laugier, J P; Musico, P; Paganoni, M; Sannino, M; Valenti, G

    2003-01-01

    In this paper we describe the modifications carried out on the DELPHI trigger complex since the beginning of the high energy runs of LEP. The descriptions of the trigger configurations and performances for the 2000 data taking period are also presented.

  6. Implementing eco friendly highly reliable upload feature using multi 3G service

    Science.gov (United States)

    Tanutama, Lukas; Wijaya, Rico

    2017-12-01

    The current trend is to prefer eco friendly Internet access; in this research, eco friendly is understood as minimum power consumption. The selected devices have low operational power consumption and consume essentially no power while hibernating in the idle state. To obtain reliability, a router with an internal load-balancing feature extends previous research on multi 3G services for broadband lines. Previous studies emphasized accessing and downloading information files from Web Servers residing in the Public Cloud. The demand is not only for speed but for high reliability of access as well. High reliability means mitigating both the direct and indirect costs of repeated attempts to upload and download large files, for which nomadic and mobile computer users need a viable solution. A solution for downloading information had previously been proposed, tested and found promising. That result is now extended to providing a reliable access line, by means of redundancy and automatic reconfiguration, for uploading and downloading large information files to a Web Server in the Cloud. The technique takes advantage of the internal load-balancing feature to provision a redundant line acting as a backup. A router that can balance load across several WAN lines is chosen, and the WAN lines are constructed from multiple 3G lines. The router supports accessing the Internet over more than one 3G line, which increases the reliability and availability of Internet access, as the second line immediately takes over if the first line is disturbed.
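    The load-balancing-with-backup behaviour described above can be sketched as a round-robin assignment of sessions over whichever WAN lines are currently up; when one 3G line drops, traffic automatically shifts to the remaining lines. This is a conceptual sketch of the router's behaviour, not its actual firmware, and the line names are hypothetical:

```python
from itertools import cycle

def balance(sessions, lines):
    """Assign sessions round-robin across the WAN lines that are up.

    `lines` is an ordered list of (name, is_up) pairs; a line that is down
    is simply skipped, so its traffic falls onto the surviving lines.
    """
    up = [name for name, ok in lines if ok]
    if not up:
        return {}
    rr = cycle(up)
    return {session: next(rr) for session in sessions}
```

With both lines up the sessions alternate; with the first line down, everything rides the backup.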

  7. Study of data on the associated momentum on the trigger side in high p hadron production

    International Nuclear Information System (INIS)

    Alonso, J.L.; Antolin, J.; Azeoiti, V.; Bravo, J.R.; Cruz, A.; Zaragoza Univ.

    1980-01-01

    The British-French-Scandinavian collaboration has recently studied the non-trigger charged mean momentum ⟨p_x⟩ in different rapidity regions of the trigger hemisphere in the collision of two hadrons at the CERN Intersecting Storage Rings (ISR). In particular, for the rapidity regions y < 0.5 and y < 1 they give the values of the slope, α, of ⟨p_x⟩ with the trigger momentum p_T^t. Several authors have analysed those values of α in the framework of hard scattering models, which predict values of ⟨z_c⟩, the fraction of the longitudinal momentum of the outgoing hard-scattered system taken by the trigger, independent of p_T^t. From this analysis they give estimates of ⟨z_c⟩ that are very difficult to reconcile with those calculated in the Feynman, Field and Fox hard scattering model or in the QCD treatment of high-p_T hadron production. The authors of the present paper have looked for, and found, other data whose model-independent analysis is more feasible than that of the data mentioned above. More specifically, we analyse in the framework of the hard scattering models, but otherwise model-independently, data on ⟨p_x⟩ in two other rapidity regions (y < 3, 2 < y < 3) and find that consistency of the average slopes, α, in these two regions is only achieved with mean values of ⟨z_c⟩ significantly increasing with p_T^t and close in value to those obtained by Feynman et al. (orig.)

  8. Reliability of a Computerized Neurocognitive Test in Baseline Concussion Testing of High School Athletes.

    Science.gov (United States)

    MacDonald, James; Duerson, Drew

    2015-07-01

    Baseline assessments using computerized neurocognitive tests are frequently used in the management of sport-related concussions. Such testing is often done on an annual basis in a community setting. Reliability is a fundamental test characteristic that should be established for such tests. Our study examined the test-retest reliability of a computerized neurocognitive test in high school athletes over 1 year. Repeated measures design. Two American high schools. High school athletes (N = 117) participating in American football or soccer during the 2011-2012 and 2012-2013 academic years. All study participants completed 2 baseline computerized neurocognitive tests taken 1 year apart at their respective schools. The test measures performance on 4 cognitive tasks: identification speed (Attention), detection speed (Processing Speed), one card learning accuracy (Learning), and one back speed (Working Memory). Reliability was assessed by measuring the intraclass correlation coefficient (ICC) between the repeated measures of the 4 cognitive tasks. Pearson and Spearman correlation coefficients were calculated as a secondary outcome measure. The measure for identification speed performed best (ICC = 0.672; 95% confidence interval, 0.559-0.760) and the measure for one card learning accuracy performed worst (ICC = 0.401; 95% confidence interval, 0.237-0.542). All tests had marginal or low reliability. In a population of high school athletes, computerized neurocognitive testing performed in a community setting demonstrated low to marginal test-retest reliability on baseline assessments 1 year apart. Further investigation should focus on (1) improving the reliability of individual tasks tested, (2) controlling for external factors that might affect test performance, and (3) identifying the ideal time interval to repeat baseline testing in high school athletes. 
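    Test-retest reliability of the kind reported above is computed from a repeated-measures ANOVA decomposition of the score table. The abstract does not state which ICC form was used, so as a sketch we show one common choice, the two-way consistency single-measure ICC(3,1), applied to hypothetical per-athlete [year 1, year 2] score pairs:

```python
def icc3_1(data):
    """Two-way mixed, consistency, single-measure ICC(3,1).

    `data` is a list of per-subject rows, each holding the scores from the
    k repeated sessions (here k = 2 baselines one year apart).
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between sessions
    sst = sum((x - grand) ** 2 for row in data for x in row)
    sse = sst - ssr - ssc                                # residual
    msr = ssr / (n - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)
```

Because ICC(3,1) measures consistency, a constant practice effect between sessions (every athlete improving by the same amount) still yields an ICC of 1, whereas random scatter between sessions drives it down.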

  9. NOMAD Trigger Studies

    International Nuclear Information System (INIS)

    Varvell, K.

    1995-01-01

    The author reports on the status of an offline study of the NOMAD triggers, which has several motivations. Of primary importance is to demonstrate, using offline information recorded by the individual subdetectors comprising NOMAD, that the online trigger system is functioning as expected. Such an investigation serves to complement the extensive monitoring which is already carried out online. More specific to the needs of the offline software and analysis, the reconstruction of tracks and vertices in the detector requires some knowledge of the time at which the trigger has occurred, in order to locate relevant hits in the drift chambers and muon chambers in particular. The fact that the different triggers allowed by the MIOTRINO board take varying times to form complicates this task. An offline trigger algorithm may serve as a tool to shed light on situations where the online trigger status bits have not been recorded correctly, as happens in a small number of cases, or as an aid to studies with the aim of further refinement of the online triggers themselves

  10. Precision tracking at high background rates with the ATLAS muon spectrometer

    CERN Document Server

    Hertenberger, Ralf; The ATLAS collaboration

    2012-01-01

    Since the start of data taking, the ATLAS muon spectrometer has performed according to specification. At the end of this decade, after the luminosity upgrade of the LHC by a factor of ten, the proportionally increasing background rates will require the replacement of the detectors in the most forward part of the muon spectrometer to ensure high-quality muon triggering and tracking at background hit rates of up to 15 kHz/cm². Square-metre-sized micromegas detectors together with improved thin-gap trigger detectors are proposed as the replacement. Micromegas detectors are intrinsically high-rate capable. A single-hit spatial resolution below 40 µm has been shown for 250 µm anode strip pitch and perpendicular incidence of high-energy muons or pions. The ongoing development of large micromegas structures and their investigation under non-perpendicular incidence or in high-background environments requires precise and reliable monitoring of muon tracks. A muon telescope consisting of six small micromegas works reliably and is presently ...

  11. The ATLAS Trigger system upgrade and performance in Run 2

    CERN Document Server

    Shaw, Savanna Marie; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the LHC Run-2 in 2015/16 at a centre-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the centre-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. In order to prepare for the anticipated further luminosity increase of the LHC in 2017/18, improving the trigger performance remain...

  12. The ATLAS Trigger Algorithms Upgrade and Performance in Run-2

    CERN Document Server

    Bernius, Catrin; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a center-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the center-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. At the Level-1 trigger the undertaken improvements resulted in more pile-up robust selection efficiencies and event ra...

  13. The ATLAS Trigger system upgrade and performance in Run 2

    CERN Document Server

    Shaw, Savanna Marie; The ATLAS collaboration

    2018-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a centre-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the centre-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. At the Level-1 trigger the undertaken improvements resulted in more pile-up robust selection efficiencies and event ra...

  14. Reliability Analysis of the CERN Radiation Monitoring Electronic System CROME

    CERN Document Server

    AUTHOR|(CDS)2126870

    For the new in-house developed CERN Radiation Monitoring Electronic System (CROME), a reliability analysis is necessary to ensure compliance with the statutory requirements regarding the Safety Integrity Level. The Safety Integrity Level required by the IEC 60532 standard is SIL 2 (for the Safety Integrated Functions Measurement, Alarm Triggering and Interlock Triggering). The first step of the reliability analysis was a system and functional analysis, which served as the basis for modelling the CROME system in the software "Isograph". In the "Prediction" module of Isograph the failure rates of all components were calculated. Failure rates for passive components were calculated per Military Handbook 217, and failure rates for active components were obtained from lifetime tests by the manufacturers. The FMEA was carried out together with the board designers and implemented in the "FMECA" module of Isograph. The FMEA served as the basis for the fault tree analysis and the detection of weak points...
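The failure-rate prediction step can be caricatured in a few lines: for a series system (any component failure fails the function), component failure rates simply add. The component names and FIT values below are made-up placeholders, and the SIL band quoted is the IEC 61508 high-demand PFH band, not a statement about CROME's actual assessment (which would count only the dangerous-failure fraction):

```python
# Series-system reliability sketch: MIL-HDBK-217-style per-component failure
# rates add up for a system that fails when any component fails.
FIT = 1e-9  # 1 FIT = one failure per 10^9 device-hours

component_rates_per_hour = {   # illustrative values only
    "adc": 120 * FIT,
    "fpga": 250 * FIT,
    "power_supply": 200 * FIT,
    "relay_driver": 80 * FIT,
}

lambda_total = sum(component_rates_per_hour.values())  # failures / hour
mtbf_hours = 1.0 / lambda_total

# IEC 61508 band for SIL 2 in high-demand/continuous mode: average frequency
# of dangerous failure per hour in [1e-7, 1e-6). A real assessment would use
# only the dangerous (and undetected) fraction of lambda_total.
meets_sil2 = 1e-7 <= lambda_total < 1e-6
```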

  15. Can mine tremors be predicted? Observational studies of earthquake nucleation, triggering and rupture in South African mines

    CSIR Research Space (South Africa)

    Durrheim, RJ

    2012-05-01

    Full Text Available Earthquakes, and the tsunamis and landslides they trigger, pose a serious risk to people living close to plate boundaries, and a lesser but still significant risk to inhabitants of stable continental regions where destructive earthquakes are rare... of experiments that seek to identify reliable precursors of damaging seismic events.

  16. Triggering the Chemical Instability of an Ionic Liquid under High Pressure.

    Science.gov (United States)

    Faria, Luiz F O; Nobrega, Marcelo M; Temperini, Marcia L A; Bini, Roberto; Ribeiro, Mauro C C

    2016-09-01

    Ionic liquids are an interesting class of materials due to their distinguished properties, allowing their use in an impressive range of applications, from catalysis to hypergolic fuels. However, the reactivity triggered by the application of high pressure can give rise to a new class of materials, which is not achieved under normal conditions. Here, we report on the high-pressure chemical instability of the ionic liquid 1-allyl-3-methylimidazolium dicyanamide, [allylC1im][N(CN)2], probed by both Raman and IR techniques and supported by quantum chemical calculations. Our results show a reaction occurring above 8 GPa, involving the terminal double bond of the allyl group, giving rise to an oligomeric product. The results presented herein contribute to our understanding of the stability of ionic liquids, which is of paramount interest for engineering applications. Moreover, gaining insight into this peculiar kind of reactivity could lead to the development of new or alternative synthetic routes to achieve, for example, poly(ionic liquids).

  17. Combining triggers in HEP data analysis

    International Nuclear Information System (INIS)

    Lendermann, Victor; Herbst, Michael; Krueger, Katja; Schultz-Coulon, Hans-Christian; Stamen, Rainer; Haller, Johannes

    2009-01-01

    Modern high-energy physics experiments collect data using dedicated complex multi-level trigger systems which perform an online selection of potentially interesting events. In general, this selection suffers from inefficiencies. A further loss of statistics occurs when the rate of accepted events is artificially scaled down in order to meet bandwidth constraints. An offline analysis of the recorded data must correct for the resulting losses in order to determine the original statistics of the analysed data sample. This is particularly challenging when data samples recorded by several triggers are combined. In this paper we present methods for the calculation of the offline corrections and study their statistical performance. Implications on building and operating trigger systems are discussed. (orig.)
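One standard way to correct for prescaled triggers when combining samples (the abstract does not spell out its exact method, so this is a generic sketch assuming statistically independent prescale decisions) is to weight each event by the inverse of the probability that at least one of the triggers it satisfied actually recorded it:

```python
def or_trigger_weight(prescales_fired):
    """Offline weight for an event that satisfied the conditions of the given
    triggers, each of which records the event with probability 1/p, where p is
    its prescale factor.

    Assumes independent prescale decisions: the weight is the inverse of the
    probability that at least one trigger recorded the event.
    """
    p_not_recorded = 1.0
    for p in prescales_fired:
        p_not_recorded *= (1.0 - 1.0 / p)
    p_recorded = 1.0 - p_not_recorded
    return 1.0 / p_recorded
```

For a single trigger with prescale 4 the weight is 4; for two independent triggers with prescale 2 each, the event is kept with probability 3/4, so the weight is 4/3 rather than 2, which is exactly the double-counting issue the paper addresses.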

  18. Combining triggers in HEP data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lendermann, Victor; Herbst, Michael; Krueger, Katja; Schultz-Coulon, Hans-Christian; Stamen, Rainer [Heidelberg Univ. (Germany). Kirchhoff-Institut fuer Physik; Haller, Johannes [Hamburg Univ. (Germany). Institut fuer Experimentalphysik

    2009-01-15

    Modern high-energy physics experiments collect data using dedicated complex multi-level trigger systems which perform an online selection of potentially interesting events. In general, this selection suffers from inefficiencies. A further loss of statistics occurs when the rate of accepted events is artificially scaled down in order to meet bandwidth constraints. An offline analysis of the recorded data must correct for the resulting losses in order to determine the original statistics of the analysed data sample. This is particularly challenging when data samples recorded by several triggers are combined. In this paper we present methods for the calculation of the offline corrections and study their statistical performance. Implications on building and operating trigger systems are discussed. (orig.)

  19. Fast processor for dilepton triggers

    International Nuclear Information System (INIS)

    Katsanevas, S.; Kostarakis, P.; Baltrusaitis, R.

    1983-01-01

    We describe a fast trigger processor, developed for and used in Fermilab experiment E-537, for selecting high-mass dimuon events produced by negative pions and anti-protons. The processor finds candidate tracks by matching hit information received from drift chambers and scintillation counters, and determines their momenta. Invariant masses are calculated for all possible pairs of tracks and an event is accepted if any invariant mass is greater than some preselectable minimum mass. The whole process, accomplished within 5 to 10 microseconds, achieves up to a ten-fold reduction in trigger rate
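The pair-mass selection the processor performs in hardware is easy to state in software terms. A minimal sketch (four-vectors and function names are illustrative, not the E-537 implementation):

```python
import math
from itertools import combinations

def invariant_mass(t1, t2):
    """Invariant mass of two tracks given as (E, px, py, pz) four-vectors."""
    e = t1[0] + t2[0]
    px = t1[1] + t2[1]
    py = t1[2] + t2[2]
    pz = t1[3] + t2[3]
    m2 = e * e - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))

def dimuon_trigger(tracks, min_mass):
    """Accept the event if any track pair exceeds the preselected mass."""
    return any(invariant_mass(a, b) >= min_mass
               for a, b in combinations(tracks, 2))
```

Two back-to-back 5 GeV massless muons give a 10 GeV pair mass, so such an event passes an 8 GeV threshold but fails a 12 GeV one; the hardware processor performs this all-pairs loop in 5-10 µs.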

  20. The ATLAS Trigger System: Ready for Run II

    CERN Document Server

    Czodrowski, Patrick; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger system has been used successfully for data collection in the 2009-2013 Run 1 operation cycle of the CERN Large Hadron Collider (LHC) at center-of-mass energies of up to 8 TeV. With the restart of the LHC for the new Run 2 data-taking period at 13 TeV, the trigger rates are expected to rise by approximately a factor of 5. The trigger system consists of a hardware-based first level (L1) and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of ~1 kHz. This presentation will give an overview of the upgrades to the ATLAS trigger system that have been implemented during the LHC shutdown period in order to deal with the increased trigger rates while efficiently selecting the physics processes of interest. These upgrades include changes to the L1 calorimeter trigger, the introduction of a new L1 topological trigger module, improvements in the L1 muon system, and the merging of the previously two-level HLT ...

  1. The ATLAS Trigger System: Ready for Run-2

    CERN Document Server

    Nakahama, Yu; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first run of the LHC between 2009-2013 at a centre-of-mass energy between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 (L1) and a software based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the next data-taking period starting in early 2015 (Run-2) the LHC will operate at a centre-of-mass energy of about 13 TeV resulting in roughly five times higher trigger rates. We will review the upgrades to the ATLAS Trigger system that have been implemented during the shutdown and that will allow us to cope with these increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter trigger, the introduction of a new L1 topological trigger module, improvements in the L1 muon system and the merging of the prev...

  2. The ATLAS Trigger System: Ready for Run-2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00211007; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger has been successfully collecting collision data during the first run of the LHC between 2009-2013 at a centre-of-mass energy between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 (L1) and a software based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the next data-taking period starting in 2015 (Run-2) the LHC will operate at a centre-of-mass energy of about 13 TeV resulting in roughly five times higher trigger rates. We will briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger system, the introduction of a new L1 topological trigger module and the merging of the previously two-level HLT system into a single event filter fa...

  3. The D0 run II trigger system

    International Nuclear Information System (INIS)

    Schwienhorst, Reinhard; Michigan State U.

    2004-01-01

    The D0 detector at the Fermilab Tevatron was upgraded for Run II. This upgrade included improvements to the trigger system in order to be able to handle the increased Tevatron luminosity and higher bunch crossing rates compared to Run I. The D0 Run II trigger is a highly flexible system to select events to be written to tape from an initial interaction rate of about 2.5 MHz. This is done in a three-tier pipelined, buffered system. The first tier (level 1) processes fast detector pick-off signals in a hardware/firmware based system to reduce the event rate to about 1.5 kHz. The second tier (level 2) uses information from level 1 and forms simple physics objects to reduce the rate to about 850 Hz. The third tier (level 3) uses full detector readout and event reconstruction on a filter farm to reduce the rate to 20-30 Hz. The D0 trigger menu contains a wide variety of triggers. While the emphasis is on triggering on generic lepton and jet final states, there are also trigger terms for specific final state signatures. In this document we describe the D0 trigger system as it was implemented and is currently operating in Run II
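The per-tier rejection factors implied by the rates above are worth making explicit (taking 25 Hz as an illustrative mid-point of the quoted 20-30 Hz level-3 band):

```python
# Rate reduction through the three D0 trigger tiers (rates from the text;
# 25 Hz is the mid-point of the quoted 20-30 Hz level-3 output band).
rates_hz = {"input": 2.5e6, "L1": 1.5e3, "L2": 850.0, "L3": 25.0}

stages = list(rates_hz)
for upstream, downstream in zip(stages, stages[1:]):
    rejection = rates_hz[upstream] / rates_hz[downstream]
    print(f"{upstream} -> {downstream}: rejection factor {rejection:,.0f}")

overall = rates_hz["input"] / rates_hz["L3"]  # overall rejection ~ 100,000
```

Most of the rejection (a factor of more than 1,000) happens in the hardware level 1, which is why it must run on fast pick-off signals rather than full readout.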

  4. Instrumentation of a Level-1 Track Trigger in the ATLAS detector for the High Luminosity LHC

    CERN Document Server

    Boisvert, V; The ATLAS collaboration

    2012-01-01

    The Large Hadron Collider will be upgraded in order to reach an instantaneous luminosity of $L=5 \times 10^{34}$ cm$^{-2}$ s$^{-1}$. A challenge for the detectors will be to cope with the excessive rate of events coming into the trigger system. In order to maintain the capability of triggering on single lepton objects with momentum thresholds of $p_T > 25$ GeV, the ATLAS detector is planning to use tracking information at the Level-1 (hardware) stage of the trigger system. Two options are currently being studied: a L0/L1 trigger design using a double buffer front-end architecture and a single hardware trigger level which uses trigger layers in the new tracker system. Both options are presented as well as results from simulation studies.

  5. Performance of the ATLAS muon trigger in run 2

    CERN Document Server

    Morgenstern, Marcus; The ATLAS collaboration

    2017-01-01

    Triggering on muons is a crucial ingredient to fulfill the physics program of the ATLAS experiment. The ATLAS trigger system deploys a two-stage strategy, a hardware-based Level-1 trigger and a software-based high-level trigger, to select events of interest at a suitable recording rate. Both stages underwent upgrades to cope with the challenges of Run-II data-taking at centre-of-mass energies of 13 TeV and instantaneous luminosities up to $2 \times 10^{34}$ cm$^{-2}$s$^{-1}$. The design of the ATLAS muon triggers and their performance in proton-proton collisions at 13 TeV are presented.

  6. Simulation of the ATLAS New Small Wheel Trigger System

    CERN Document Server

    Saito, Tomoyuki; The ATLAS collaboration

    2017-01-01

    The instantaneous luminosity of the Large Hadron Collider (LHC) at CERN will be increased by up to a factor of five with respect to the original design value to explore higher energy scales. In order to benefit from the expected high-luminosity performance, the first station of the ATLAS muon end-cap Small Wheel system will be replaced by a New Small Wheel (NSW) detector. The NSW will provide precise track segment information to the muon Level-1 trigger to reduce fake triggers. This contribution summarizes the design of the NSW trigger decision system, the track-reconstruction algorithm implemented in the trigger processor, and results of performance studies on the trigger system.

  7. Surgical Treatment of Trigger Finger: Open Release

    Directory of Open Access Journals (Sweden)

    Firat Ozan

    2016-01-01

    Full Text Available In this study, open A1 pulley release results were evaluated in patients with a diagnosis of trigger finger. In 45 patients (29 females, 16 males; mean age 50.7 ± 11.9 years, range 24-79), 45 trigger fingers were released via an open surgical technique. 25 of the 45 cases involved the right hand: 16 were at the thumb, 2 at the index, 6 at the middle and 1 at the ring finger. Similarly, at the left hand, 15 of 20 cases were at the thumb, 1 at the index finger, 2 at the middle finger and 2 at the ring finger. Average follow-up time was 10.2 ± 2.7 months (range, 6-15). Comorbidities were: diabetes mellitus in 6 cases (13.3%), hypertension in 11 cases (24.4%), hyperthyroidism in 2 cases (4.4%), dyslipidemia in 2 cases (4.4%), and 2 cases had previously undergone carpal tunnel syndrome surgery. The mean time between the onset of symptoms and surgery was 6.9 ± 4.8 months (range, 2-24). Patient satisfaction was very good in 34 cases (75.4%) and good in 11 (24.6%). The distance between the pulp of the operated finger and the palm was normal in every case postoperatively. We did not encounter any postoperative complications. We conclude that A1 pulley release via an open incision is an effective and reliable method in trigger finger surgery.

  8. Development of High Level Trigger Software for Belle II at SuperKEKB

    International Nuclear Information System (INIS)

    Lee, S; Itoh, R; Katayama, N; Mineo, S

    2011-01-01

    The Belle collaboration has been trying for 10 years to reveal the mystery of the current matter-dominated universe. However, much higher statistics is required to search for New Physics through quantum loops in decays of B mesons. In order to increase the experimental sensitivity, the next-generation B-factory, SuperKEKB, is planned. The design luminosity of SuperKEKB is 8 x 10$^{35}$ cm$^{-2}$ s$^{-1}$, a factor of 40 above KEKB's peak luminosity. At this high luminosity, the level 1 trigger of the Belle II experiment will stream events of 300 kB size at a 30 kHz rate. To reduce the data flow to a manageable level, a high-level trigger (HLT) is needed, which will be implemented using the full offline reconstruction on a large-scale PC farm. There, physics-level event selection is performed, reducing the event rate by a factor of ~10 to a few kHz. To execute the reconstruction the HLT uses the offline event processing framework basf2, which has parallel processing capabilities used for multi-core processing and PC clusters. The event data handling in the HLT is fully object-oriented, utilizing ROOT I/O with a new method of object passing over the UNIX socket connection. Also under consideration is the use of the HLT output to reduce the pixel detector event size by only saving hits associated with a track, resulting in an additional data reduction of a factor of ~100 for the pixel detector. In this contribution, the design and implementation of the Belle II HLT are presented together with a report of preliminary testing results.

  9. Scyllac equipment reliability analysis

    International Nuclear Information System (INIS)

    Gutscher, W.D.; Johnson, K.J.

    1975-01-01

    Most of the failures in Scyllac can be related to crowbar trigger cable faults. A new cable has been designed, procured, and is currently undergoing evaluation. When the new cable has been proven, it will be worked into the system as quickly as possible without causing too much additional down time. The cable-tip problem may not be easy or even desirable to solve. A tightly fastened permanent connection that maximizes contact area would be more reliable than the plug-in type of connection in use now, but it would make system changes and repairs much more difficult. The balance of the failures have such a low occurrence rate that they do not cause much down time and no major effort is underway to eliminate them. Even though Scyllac was built as an experimental system and has many thousands of components, its reliability is very good. Because of this the experiment has been able to progress at a reasonable pace

  10. Leadership in organizations with high security and reliability requirements

    International Nuclear Information System (INIS)

    Gonzalez, F.

    2013-01-01

    Developing leadership skills in organizations is key to ensuring the sustainability of excellent results in industries with high safety and reliability requirements. In order to have a leadership development model specific to this type of organization, Tecnatom initiated an internal project in 2011 to find and adapt a competency model to these requirements.

  11. Trigger finger

    Science.gov (United States)

    ... digit; Trigger finger release; Locked finger; Digital flexor tenosynovitis ... of the cut or hand; yellow or green drainage from the cut; hand pain or discomfort; fever. If your trigger finger returns, call your surgeon. You may need another surgery.

  12. The CMS Barrel Muon Trigger Upgrade

    CERN Document Server

    Triossi, Andrea

    2017-01-01

    ABSTRACT: The increase in luminosity expected at the LHC during Phase 1 will impose several constraints on rate reduction while maintaining high efficiency in the CMS Level-1 trigger system. The TwinMux system is the early layer of the muon barrel region that concentrates the information from the different subdetectors DT, RPC and HO. It merges and fans out the slow optical trigger links from the detector chambers into faster links (10 Gbps) that are sent to the track finders. Results from collision runs that confirm the satisfactory operation of the trigger system up to the output of the barrel track finder will be shown. SUMMARY: In view of the increase in luminosity during the Phase 1 upgrade of the LHC, the muon trigger chain of the Compact Muon Solenoid (CMS) experiment underwent considerable improvements. The muon detector was designed to preserve the complementarity and redundancy of three separate muon detection systems, Cathode Strip Chambers (CSC), Drift Tubes (DT) and Resistive Plate Chambers (RPC), until ...

  13. A Time-Multiplexed Track-Trigger architecture for CMS

    CERN Document Server

    Hall, Geoffrey; Pesaresi, Mark Franco; Rose, A

    2014-01-01

    The CMS Tracker under development for the High Luminosity LHC includes an outer tracker based on "PT-modules" which will provide track stubs based on coincident clusters in two closely spaced sensor layers, aiming to reject low transverse momentum track hits before data transmission to the Level-1 trigger. The tracker data will be used to reconstruct track segments in dedicated processors before onward transmission to other trigger processors which will combine tracker information with data originating from the calorimeter and muon detectors, to make the final L1 trigger decision. The architecture for processing the tracker data is still an open question. One attractive option is to explore a Time Multiplexed design similar to one which is currently being implemented in the CMS calorimeter trigger as part of the Phase I trigger upgrade. The Time Multiplexed Trigger concept is explained, the potential benefits of applying it for processing future tracker data are described and a possible design based on cur...
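The time-multiplexed idea can be caricatured in a few lines: instead of each processor owning a fixed detector region for every bunch crossing, successive crossings are dealt round-robin to processors, and each processor receives the whole detector's data for its crossing, so it can run a global algorithm. The node count and fragment naming below are illustrative, not the CMS design:

```python
# Round-robin demultiplexer sketch for a time-multiplexed trigger.
N_NODES = 9  # illustrative: one bunch crossing "in flight" per node

def node_for_crossing(bcid):
    """All data fragments for bunch crossing `bcid` go to the same node."""
    return bcid % N_NODES

def route(bcid, region_fragments):
    """Route every region's fragment for one crossing to a single node,
    so that node sees the full detector for that crossing."""
    return {node_for_crossing(bcid): region_fragments}
```

The benefit is that regional boundaries (and the boundary-stitching logic they require) disappear from the processing step, at the cost of higher input bandwidth per node.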

  14. VLSI-based video event triggering for image data compression

    Science.gov (United States)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
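The pre-trigger/post-trigger archiving idea can be sketched in software. The change detector here is a crude mean-absolute-difference threshold standing in for the fuzzy-logic hardware, and all names and parameters are illustrative:

```python
from collections import deque
import numpy as np

def run_event_trigger(frames, threshold, pre=4, post=4):
    """Archive only frames around a detected video event.

    A frame triggers when the mean absolute difference to the previous frame
    exceeds `threshold`; `pre` frames before and `post` frames after the
    triggering frame are kept, emulating a pre/post-trigger ring buffer.
    """
    ring = deque(maxlen=pre)           # pre-trigger circular buffer
    archived, prev, remaining_post = [], None, 0
    for frame in frames:
        if prev is not None and np.abs(frame - prev).mean() > threshold:
            archived.extend(ring)      # flush pre-trigger history
            ring.clear()
            remaining_post = post + 1  # keep this frame and `post` more
        if remaining_post > 0:
            archived.append(frame)
            remaining_post -= 1
        else:
            ring.append(frame)         # quiescent: just keep recent history
        prev = frame
    return archived
```

Only the frames bracketing the event survive; a long quiescent stream is discarded, which is the compression the abstract describes.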

  15. A Fast hardware tracker for the ATLAS Trigger

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system at the ATLAS experiment is designed to lower the event rate occurring from the nominal bunch crossing at 40 MHz to about 1 kHz for a designed LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. To achieve high background rejection while maintaining good efficiency for interesting physics signals, sophisticated algorithms are needed which require extensive use of tracking information. The Fast TracKer (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform track-finding at 100 kHz and based on a mixture of advanced technologies. Modern, powerful Field Programmable Gate Arrays (FPGA) form an important part of the system architecture, and the combinatorial problem of pattern recognition is solved by ~8000 standard-cell ASICs named Associative Memories. The availability of the tracking and subsequent vertex information within a short latency ensures robust selections and allows improved trigger performance for the most difficult sign...

  16. An improved trigger-generation scheme for Cerenkov imaging cameras [Paper No.: I5

    International Nuclear Information System (INIS)

    Bhat, C.L.; Tickoo, A.K.; Kaul, I.K.; Koul, R.

    1993-01-01

    An improved trigger-generation scheme for TeV gamma-ray imaging telescopes is proposed. Based on a memory-based majority coincidence circuit, this scheme involves deriving 3-pixel nearest-neighbor coincidences, as against the conventional approach of generating a prompt coincidence from any 2 pixels of the imaging camera. As such, the new method discriminates against shot-noise-generated triggers and, perhaps to some extent, against background cosmic-ray events as well, without compromising the telescope's response to events of γ-ray origin. The net effect is that a Whipple-like imaging system can be operated with a comparatively higher sensitivity than is possible at present. In addition, a suitably scaled-up value of the chance-trigger rate can be independently derived, thereby making it possible to use this parameter reliably for keeping a log of the 'health' of the experimental system. (author). 9 refs., 5 figs
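The selection logic can be illustrated in software, keeping in mind that the real circuit is a memory-based hardware lookup and the real camera is not a square grid; the adjacency below is a simplified stand-in:

```python
# Sketch of a 3-pixel nearest-neighbour coincidence on a square pixel grid.
def neighbours(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def triple_coincidence(fired):
    """True if three fired pixels form a connected nearest-neighbour cluster."""
    fired = set(fired)
    for p in fired:
        adj = neighbours(p) & fired
        # p plus two fired neighbours -> connected 3-pixel cluster
        if len(adj) >= 2:
            return True
        # or a chain p - q - r through a shared fired neighbour q
        for q in adj:
            if (neighbours(q) & fired) - {p}:
                return True
    return False
```

An isolated pair of noise pixels would satisfy an "any 2 pixels" coincidence but not this triple condition, which is the source of the improved rejection of shot-noise triggers.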

  17. The design and simulated performance of a fast Level 1 track trigger for the ATLAS High Luminosity Upgrade

    CERN Document Server

    Martensson, Mikael; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment at the High Luminosity LHC will face a fivefold increase in the number of interactions per bunch crossing relative to the ongoing Run 2. This will require a proportional improvement in rejection power at the earliest levels of the detector trigger system, while preserving good signal efficiency. One critical aspect of this improvement will be the implementation of precise track reconstruction, through which sharper trigger turn-on curves can be achieved, and b-tagging and tau-tagging techniques can in principle be implemented. The challenge of such a project comes in the development of a fast, custom electronic device integrated in the hardware based first trigger level of the experiment. This article will discuss the requirements, architecture and projected performance of the system in terms of tracking, timing and physics, based on detailed simulations. Studies are carried out using data from the strip subsystem only or both strip and pixel subsystems.

  18. The design and simulated performance of a fast Level 1 track trigger for the ATLAS High Luminosity Upgrade

    CERN Document Server

    Martensson, Mikael; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment at the high-luminosity LHC will face a five-fold increase in the number of interactions per collision relative to the ongoing Run 2. This will require a proportional improvement in rejection power at the earliest levels of the detector trigger system, while preserving good signal efficiency. One critical aspect of this improvement will be the implementation of precise track reconstruction, through which sharper trigger turn-on curves can be achieved, and b-tagging and tau-tagging techniques can in principle be implemented. The challenge of such a project comes in the development of a fast, custom electronic device integrated in the hardware-based first trigger level of the experiment, with repercussions propagating as far as the detector read-out philosophy. This talk will discuss the requirements, architecture and projected performance of the system in terms of tracking, timing and physics, based on detailed simulations. Studies are carried out comparing two detector geometries and using...

  19. Multi-Agent System based Event-Triggered Hybrid Controls for High-Security Hybrid Energy Generation Systems

    DEFF Research Database (Denmark)

    Dou, Chun-Xia; Yue, Dong; Guerrero, Josep M.

    2017-01-01

    This paper proposes multi-agent system based event-triggered hybrid controls for guaranteeing the energy supply of a hybrid energy generation system with high security. First, a multi-agent system is constituted by an upper-level central coordinated control agent combined with several lower-level unit agents. Each lower-level unit agent is responsible for dealing with internal switching control and distributed dynamic regulation for its unit system. The upper-level agent implements coordinated switching control to guarantee the power supply of the overall system with high security. The internal...
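The core of any event-triggered scheme is that control updates are sent only when a triggering condition fires, not at every sampling instant. A minimal threshold-rule sketch (illustrative only; the paper's actual switching law is not given in the abstract):

```python
def event_triggered_updates(samples, delta):
    """Send a control update only when the measured state has drifted by more
    than `delta` since the last transmitted value (simple event-trigger rule).

    Returns the (time, value) pairs actually transmitted.
    """
    sent = []
    last = None
    for t, x in enumerate(samples):
        if last is None or abs(x - last) > delta:
            sent.append((t, x))
            last = x
    return sent
```

For the sample stream [0.0, 0.05, 0.2, 0.21, 0.5] with delta = 0.1, only three of the five samples trigger a transmission, which is the communication saving that motivates event-triggered control in multi-agent settings.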

  20. Surviving the Lead Reliability Engineer Role in High Unit Value Projects

    Science.gov (United States)

    Perez, Reinaldo J.

    2011-01-01

    A project with a very high unit value within a company is defined as a project where a) it constitutes a one-of-a-kind (or two-of-a-kind) national-asset type of project, b) it carries a very large cost, and c) a mission failure would be a very public event that would hurt the company's image. The Lead Reliability Engineer in a high-visibility project is by default involved in all phases of the project, from conceptual design to manufacture and testing. This paper explores a series of lessons learned over a period of ten years of practical industrial experience by a Lead Reliability Engineer. We expand on the concepts outlined by these lessons learned via examples. The lessons learned are applicable to all industries.

  1. Design studies for the Double Chooz trigger

    International Nuclear Information System (INIS)

    Cucoanes, Andi Sebastian

    2009-01-01

    The main characteristic of the neutrino mixing effect is assumed to be the coupling between the flavor and the mass eigenstates. Three mixing angles (θ12, θ23, θ13) describe the magnitude of this effect. Still unknown, θ13 is considered very small, based on the measurement done by the CHOOZ experiment. A leading experiment will be Double Chooz, placed in the Ardennes region, on the same site as used by CHOOZ. The Double Chooz goal is the exploration of ≈80% of the currently allowed θ13 region, by searching for the disappearance of reactor antineutrinos. Double Chooz will use two similar detectors, located at different distances from the reactor cores: a near one at ≈150 m where no oscillations are expected and a far one at 1.05 km distance, close to the first minimum of the survival probability function. The measurement foresees a precise comparison of neutrino rates and spectra between both detectors. The detection mechanism is based on the inverse β-decay. The Double Chooz detectors have been designed to minimize the rate of random background. In a simplified view, two optically separated regions are considered. The target, filled with Gd-doped liquid scintillator, is the main antineutrino interaction volume. Surrounding the target, the inner veto region aims to tag the cosmogenic muon background which hits the detector. Both regions are viewed by photomultipliers. The Double Chooz trigger system has to be highly efficient for antineutrino events as well as for several types of background. The trigger analyzes discriminated signals from the central region and the inner veto photomultipliers. The trigger logic is fully programmable and can combine the input signals. The trigger conditions are based on the total energy released in the event and on the PMT group multiplicity. For redundancy, two independent trigger boards will be used for the central region, each of them receiving signals from half of the photomultipliers. A third trigger board

  2. Excimer lamp pumped by a triggered discharge

    Energy Technology Data Exchange (ETDEWEB)

    Baldacchini, G.; Bollanti, S.; Di Lazzaro, P.; Flora, F.; Giordano, G.; Letardi, T.; Renieri, A.; Schina, G. [ENEA, Centro Ricerche Frascati, Rome (Italy). Dip. Innovazione; Clementi, G.; Muzzi, F.; Zheng, C.E. [EL.EN. (Electronic Engineering), Florence (Italy)

    1996-11-01

    Radiation characteristics and discharge performances of an excimer lamp are described. The discharge of the HCl/Xe gas mixture at atmospheric pressure, occurring near the quartz tube wall, is initiated by a trigger wire. A maximum total UV energy of about 0.4 J in a (0.8-0.9) μs pulse, radiated from a 10 cm discharge length, is obtained with a total discharge input energy of 8 J. Excimer lamps are the preferred choice for medical and material processing irradiations, when the monochromaticity or coherence of UV light is not required, due to their low cost, reliability and easy maintenance.

  3. Reliability engineering for nuclear and other high technology systems

    International Nuclear Information System (INIS)

    Lakner, A.A.; Anderson, R.T.

    1985-01-01

    This book is written for the reliability instructor, program manager, system engineer, design engineer, reliability engineer, nuclear regulator, probability risk assessment (PRA) analyst, general manager and others who are involved in system hardware acquisition, design and operation and are concerned with plant safety and operational cost-effectiveness. It provides criteria, guidelines and comprehensive engineering data affecting reliability; it covers the key aspects of system reliability as it relates to conceptual planning, cost tradeoff decisions, specification, contractor selection, design, test and plant acceptance and operation. It treats reliability as an integrated methodology, explicitly describing life cycle management techniques as well as the basic elements of a total hardware development program, including: reliability parameters and design improvement attributes, reliability testing, reliability engineering and control. It describes how these elements can be defined during procurement, and implemented during design and development to yield reliable equipment. (author)

  4. Highly-reliable operation of 638-nm broad stripe laser diode with high wall-plug efficiency for display applications

    Science.gov (United States)

    Yagi, Tetsuya; Shimada, Naoyuki; Nishida, Takehiro; Mitsuyama, Hiroshi; Miyashita, Motoharu

    2013-03-01

    Laser-based displays, from pico projectors to cinema projectors, have gathered much attention because of their wide color gamut, low power consumption, and other advantages. Laser light sources for displays are operated mainly in CW mode, and heat management is one of the big issues, so highly efficient operation is required. The light sources for displays must also be highly reliable. A 638 nm broad-stripe laser diode (LD) was newly developed for high-efficiency and highly reliable operation. An AlGaInP/GaAs red LD suffers from low wall-plug efficiency (WPE) due to electron overflow from the active layer to the p-cladding layer. A large optical confinement factor (Γ) design with AlInP cladding layers is adopted to improve the WPE. This design has a disadvantage for reliable operation because the large Γ causes high optical density and brings catastrophic optical degradation (COD) at the front facet. To overcome this disadvantage, a window-mirror structure is also adopted in the LD. The LD shows a WPE of 35% at 25°C, the highest record in the world, and highly stable operation at 35°C and 550 mW up to 8,000 hours without any catastrophic optical degradation.

  5. Autonomous stimulus triggered self-healing in smart structural composites

    International Nuclear Information System (INIS)

    Norris, C J; White, J A P; McCombe, G; Chatterjee, P; Bond, I P; Trask, R S

    2012-01-01

    Inspired by the ability of biological systems to sense and autonomously heal damage, this research has successfully demonstrated the first autonomous, stimulus triggered, self-healing system in a structural composite material. Both the sensing and healing mechanisms are reliant on microvascular channels incorporated within a laminated composite material. For the triggering mechanism, a single air filled vessel was pressurized, sealed and monitored. Upon drop weight impact (10 J), delamination and microcrack connectivity between the pressurized vessel and those open to ambient led to a pressure loss which, with the use of a suitable sensor, triggered a pump to deliver a healing agent to the damage zone. Using this autonomous healing approach, near full recovery of post-impact compression strength was achieved (94% on average). A simplified alternative system with healing agent continuously flowing through the vessels, akin to blood flow, was found to offer 100% recovery of the material’s virgin strength. Optical microscopy and ultrasonic C-scanning provided further evidence of large-scale infusion of matrix damage with the healing agent. The successful implementation of this bioinspired technology could substantially enhance the integrity and reliability of aerospace structures, whilst offering benefits through improved performance/weight ratios and extended lifetimes. (paper)

  6. High reliability fuel in the US

    International Nuclear Information System (INIS)

    Neuhold, R.J.; Leggett, R.D.; Walters, L.C.; Matthews, R.B.

    1986-05-01

    The fuels development program of the United States is described for liquid metal reactors (LMR's). The experience base, status and future potential are discussed for the three systems - oxide, metal and carbide - that have proved to have high reliability. Information is presented showing burnup capability of the oxide fuel system in a large core, e.g., FFTF, to be 150 MWd/kgM with today's technology with the potential for a capability as high as 300 MWd/kgM. Data provided for the metal fuel system show 8 at. % being routinely achieved as the EBR-II driver fuel with good potential for extending this to 15 at. % since special test pins have already exceeded this burnup level. The data included for the carbide fuel system are from pin and assembly irradiations in EBR-II and FFTF, respectively. Burnup to 12 at. % appears readily achievable with burnups to 20 at. % being demonstrated in a few pins. Efforts continue on all three systems with the bulk of the activity on metal and oxide

  7. Reliability of an Automated High-Resolution Manometry Analysis Program across Expert Users, Novice Users, and Speech-Language Pathologists

    Science.gov (United States)

    Jones, Corinne A.; Hoffman, Matthew R.; Geng, Zhixian; Abdelhalim, Suzan M.; Jiang, Jack J.; McCulloch, Timothy M.

    2014-01-01

    Purpose: The purpose of this study was to investigate inter- and intrarater reliability among expert users, novice users, and speech-language pathologists with a semiautomated high-resolution manometry analysis program. We hypothesized that all users would have high intrarater reliability and high interrater reliability. Method: Three expert…

  8. Trigger and electronics issues for scintillating fiber tracking

    International Nuclear Information System (INIS)

    Baumbaugh, A.E.

    1994-01-01

    Scintillating fiber technology has made great advances and has demonstrated great promise for high-speed charged particle tracking and triggering. The small detector sizes and fast scintillation fluors available make them very promising for use in high-luminosity experiments at today's and tomorrow's colliding-beam and fixed-target facilities, where high rate capability is essential. This paper will discuss some of the system aspects which should be considered by anyone attempting to design a scintillating fiber tracking system and high-speed tracking trigger. As the reader will see, seemingly simple decisions can have far-reaching effects on overall system performance

  9. Topological Trigger Developments

    CERN Multimedia

    Likhomanenko, Tatiana

    2015-01-01

    The main b-physics trigger algorithm used by the LHCb experiment is the so-called topological trigger. The topological trigger selects vertices which are a) detached from the primary proton-proton collision and b) compatible with coming from the decay of a b-hadron. In the LHC Run 1, this trigger utilized a custom boosted decision tree algorithm, selected an almost 100% pure sample of b-hadrons with a typical efficiency of 60-70%, and its output was used in about 60% of LHCb papers. This talk presents studies carried out to optimize the topological trigger for LHC Run 2. In particular, we have carried out a detailed comparison of various machine learning classifier algorithms, e.g., AdaBoost, MatrixNet and uBoost. The topological trigger algorithm is designed to select all "interesting" decays of b-hadrons, but cannot be trained on every such decay. Studies have therefore been performed to determine how to optimize the performance of the classification algorithm on decays not used in the training. These inclu...
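    The classifier comparison described in this record can be illustrated with a toy study on synthetic signal/background data. AdaBoost is named in the abstract; scikit-learn's GradientBoostingClassifier stands in here for MatrixNet and uBoost, which are not publicly available. This is a sketch of the methodology, not the LHCb implementation:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for signal (b-hadron) vs background vertex features.
    X, y = make_classification(n_samples=4000, n_features=8, n_informative=5,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    results = {}
    for clf in (AdaBoostClassifier(n_estimators=100, random_state=0),
                GradientBoostingClassifier(n_estimators=100, random_state=0)):
        clf.fit(X_tr, y_tr)
        # Area under the ROC curve: signal efficiency vs background rejection.
        results[type(clf).__name__] = roc_auc_score(
            y_te, clf.predict_proba(X_te)[:, 1])
    print(results)
    ```

    In a real trigger study, the figure of merit would be signal efficiency at a fixed output rate, and, as the record notes, performance would also be evaluated on decay modes not used in training.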

  10. TileGap3 Correction in ATLAS Jet Triggers

    CERN Document Server

    Carmiggelt, Joris Jip

    2017-01-01

    Study done to correct for the excess of jets in the TileGap3 (TG3) region of the ATLAS detector. The online leading jet pT is scaled down proportionally to its energy fraction in TG3. This study shows that such a correction is undesirable for high-pT triggers, since it leads to a slow turn-on and thus high losses in trigger rates. For low-pT triggers there seem to be some advantageous effects, as counts are slightly reduced below the 95% efficiency point of the trigger. There is, however, a pay-off: an increase of missed counts above the 95% efficiency point due to a shifting of the turn-on curve. Suggestions for further research are made to compensate for this and optimise the correction.

  11. The Run-2 ATLAS Trigger System: Design, Performance and Plan

    CERN Document Server

    zur Nedden, Martin; The ATLAS collaboration

    2016-01-01

    In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. The ATLAS experiment at the Large Hadron Collider (LHC) uses a trigger system that consists of a hardware Level-1 (L1) stage and a software-based high-level trigger (HLT), reducing the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of about 1000 Hz. In LHC Run-2, starting in 2015, the LHC operates at a centre-of-mass energy of 13 TeV, providing a luminosity up to $1.2 \cdot 10^{34} {\rm cm^{-2}s^{-1}}$. The ATLAS trigger system has to cope with these challenges while maintaining, or even improving, the efficiency to select relevant physics processes. In this paper, the ATLAS trigger system for LHC Run-2 is reviewed. Then, the impressive performance improvements in the HLT algorithms used to identify leptons, hadrons and global event quantities such as missing transverse energy are shown. Electron, muon and photon triggers covering trans...
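    The rate reduction quoted in this record can be sketched as simple bookkeeping. The 40 MHz input and ~1 kHz output rates are from the abstract; the intermediate L1 accept rate of 100 kHz is an assumed, typical value, not stated in the record:

    ```python
    # Two-stage trigger rate bookkeeping. Input (40 MHz) and output (~1 kHz)
    # rates are from the abstract; the 100 kHz L1 accept rate is an assumption.
    input_rate_hz = 40e6     # LHC bunch-crossing rate
    l1_output_hz = 100e3     # hardware Level-1 accept rate (assumed)
    hlt_output_hz = 1e3      # average recording rate

    l1_rejection = input_rate_hz / l1_output_hz       # 400.0
    hlt_rejection = l1_output_hz / hlt_output_hz      # 100.0
    total_rejection = input_rate_hz / hlt_output_hz   # 40000.0
    print(l1_rejection, hlt_rejection, total_rejection)
    ```

    Whatever intermediate rate is chosen, the product of the per-stage rejection factors must equal the overall factor of 40 000.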

  12. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

    Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 14 TeV and instantaneous luminosities which could exceed 10^34 interactions per cm^2 and per second. Triggering in the ATLAS detector is realized using a three-level trigger approach, in which the first level (L1) is hardware based and the second (L2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for triggering to maintain a high efficiency for events of interest while suppressing effectively the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design, track reconstruction can be performed only in limited regions of interest at L2, and the CPU requirements may limit this even further at the highest instantane...

  13. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Kimura, N; The ATLAS collaboration

    2012-01-01

    Selecting interesting events with triggering is very challenging at the LHC due to the busy hadronic environment. Starting in 2014 the LHC will run with an energy of 13 or 14 TeV and instantaneous luminosities which could exceed 10^34 interactions per cm^2 and per second. Triggering in the ATLAS detector is realized using a three-level trigger approach, in which the first level (Level-1) is hardware based and the second (Level-2) and third (EF) stages are realized using large computing farms. It is a crucial and non-trivial task for triggering to maintain a high efficiency for events of interest while suppressing effectively the very high rates of inclusive QCD processes, which constitute mainly background. At the same time the trigger system has to be robust and provide sufficient operational margins to adapt to changes in the running environment. In the current design, track reconstruction can be performed only in limited regions of interest at L2, and the CPU requirements may limit this even further at the hig...

  14. Lessons from (triggered) tremor

    Science.gov (United States)

    Gomberg, Joan

    2010-01-01

    I test a “clock-advance” model that implies triggered tremor is ambient tremor that occurs at a sped-up rate as a result of loading from passing seismic waves. This proposed model predicts that triggering probability is proportional to the product of the ambient tremor rate and a function describing the efficacy of the triggering wave to initiate a tremor event. Using data mostly from Cascadia, I have compared qualitatively a suite of teleseismic waves that did and did not trigger tremor with ambient tremor rates. Many of the observations are consistent with the model if the efficacy of the triggering wave depends on wave amplitude. One triggered tremor observation clearly violates the clock-advance model. The model prediction that larger triggering waves result in larger triggered tremor signals also appears inconsistent with the measurements. I conclude that the tremor source process is a more complex system than that described by the clock-advance model predictions tested. Results of this and previous studies also demonstrate that (1) conditions suitable for tremor generation exist in many tectonic environments, but, within each, only occur at particular spots whose locations change with time; (2) any fluid flow must be restricted to less than a meter; (3) the degree to which delayed failure and secondary triggering occurs is likely insignificant; and (4) both shear and dilatational deformations may trigger tremor. Triggered and ambient tremor rates correlate more strongly with stress than stressing rate, suggesting tremor sources result from time-dependent weakening processes rather than simple Coulomb failure.
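    The clock-advance model quoted in this record (triggering probability proportional to the product of the ambient tremor rate and an efficacy function of the triggering wave) can be written down directly. The saturating-exponential form of the efficacy function below is a hypothetical choice for illustration; the abstract does not specify one:

    ```python
    import math

    def triggering_probability(ambient_rate, wave_amplitude, k=1.0, a0=1.0):
        """Clock-advance model: P = k * R_ambient * f(A), where f is an
        efficacy function of the triggering-wave amplitude A. The form
        f(A) = 1 - exp(-A/a0) used here is an illustrative assumption."""
        efficacy = 1.0 - math.exp(-wave_amplitude / a0)
        return k * ambient_rate * efficacy

    # Probability grows with both ambient tremor rate and wave amplitude.
    print(triggering_probability(0.5, 0.1) < triggering_probability(0.5, 2.0))  # True
    ```

    Under any monotonic choice of f, the model predicts that a given wave is more likely to trigger tremor where and when the ambient tremor rate is higher, which is the comparison the study performs against the Cascadia observations.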

  15. The upgrade of the LHCb trigger system

    CERN Document Server

    INSPIRE-00259834; Fitzpatrick, C.; Gligorov, V.; Raven, G.

    2014-10-20

    The LHCb experiment will operate at a luminosity of $2\\times10^{33}$ cm$^{-2}$s$^{-1}$ during LHC Run 3. At this rate the present readout and hardware Level-0 trigger become a limitation, especially for fully hadronic final states. In order to maintain a high signal efficiency the upgraded LHCb detector will deploy two novel concepts: a triggerless readout and a full software trigger.

  16. Using a neural network approach for muon reconstruction and triggering

    CERN Document Server

    Etzion, E; Abramowicz, H; Benhammou, Ya; Horn, D; Levinson, L; Livneh, R

    2004-01-01

    The extremely high rate of events that will be produced in the future Large Hadron Collider requires the triggering mechanism to take precise decisions in a few nanoseconds. We present a study which used an artificial neural network triggering algorithm and compared it to the performance of a dedicated electronic muon triggering system. A relatively simple architecture was used to solve a complicated inverse problem. A comparison with a realistic example of the ATLAS first-level trigger simulation was in favour of the neural network. A similar architecture trained after the simulation of the electronics first trigger stage showed a further background rejection.

  17. Method of triggering the vacuum arc in source with a resistor

    International Nuclear Information System (INIS)

    Zheng Le; Lan Zhaohui; Long Jidong; Peng Yufei; Li Jie; Yang Zhen; Dong Pan; Shi Jinshui

    2014-01-01

    Background: The metal vapor vacuum arc (MEVVA) ion source is a common source which provides a strong metal ion flow. To trigger this ion source, a high-voltage trigger pulse generator and a high-voltage isolation pulse transformer are needed, which makes the power supply system complex. Purpose: To simplify the power supply system, a trigger method with a resistor was introduced, and some characteristics of this method were studied. Methods: The ion flow provided by different main arc currents was measured, as well as the trigger current. The main arc current and the ion current were recorded with different trigger resistances. Results: Experimental results showed that, within a certain range of resistances, the larger the resistance value, the more difficult it was to successfully trigger the source. Meanwhile, the rising edge of the main arc became slower as the trigger time increased. However, the resistance value had hardly any impact on the intensity of the ion flow extracted in the end. The ion flow became stronger with increasing main arc current. Conclusion: The power supply system of the ion source is simplified by using the trigger method with a resistor. Only a suitable resistor is needed to complete the conversion process from trigger to arc initiation. (authors)

  18. The Central Trigger Processor (CTP)

    CERN Multimedia

    Franchini, Matteo

    2016-01-01

    The Central Trigger Processor (CTP) receives trigger information from the calorimeter and muon trigger processors, as well as from other trigger sources. It makes the Level-1 accept decision (L1A) based on a trigger menu.
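    In outline, a Level-1 trigger menu is a set of items, each a requirement on the trigger inputs, and the L1A is issued if any item fires. A minimal sketch follows; the item names and thresholds are invented for illustration and are not the actual ATLAS menu:

    ```python
    # Toy Level-1 trigger menu: each item is a predicate over trigger inputs;
    # the L1A decision is the OR of all items. Names/thresholds are invented.
    MENU = {
        "L1_MU20": lambda ev: ev.get("muon_pt", 0.0) > 20.0,    # muon pT (GeV)
        "L1_EM22": lambda ev: ev.get("em_et", 0.0) > 22.0,      # e/gamma ET
        "L1_XE50": lambda ev: ev.get("missing_et", 0.0) > 50.0, # missing ET
    }

    def level1_accept(event):
        """Return (L1A decision, list of fired items) for one event."""
        fired = [name for name, item in MENU.items() if item(event)]
        return bool(fired), fired

    print(level1_accept({"muon_pt": 25.0}))  # (True, ['L1_MU20'])
    ```

    A real CTP additionally applies prescales and dead-time per item, which this sketch omits.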

  19. High reliable and Real-time Data Communication Network Technology for Nuclear Power Plant

    International Nuclear Information System (INIS)

    Jeong, K. I.; Lee, J. K.; Choi, Y. R.; Lee, J. C.; Choi, Y. S.; Cho, J. W.; Hong, S. B.; Jung, J. E.; Koo, I. S.

    2008-03-01

    As advanced digital Instrumentation and Control (I and C) systems are introduced to replace the analog systems of NPPs (Nuclear Power Plants), the Data Communication Network (DCN) is becoming an important system for transmitting the data generated by I and C systems in an NPP. In order to apply DCNs to NPP I and C design, DCNs should conform to the applicable acceptance criteria and meet the reliability and safety goals of the system. As response time is impacted by the selected protocol, network topology, network performance, and the network configuration of the I and C system, DCNs should transmit data within the time constraints and response times required by I and C systems. To meet these requirements, the DCNs of NPP I and C should be highly reliable, real-time systems. With respect to such systems, several reports and techniques having influence upon the reliability and real-time requirements of DCNs are surveyed and analyzed

  20. Validation of ATLAS L1 Topological Triggers

    CERN Document Server

    Praderio, Marco

    2017-01-01

    The Topological trigger (L1Topo) is a new component of the ATLAS L1 (Level-1) trigger. Its purpose is that of reducing the otherwise too high rate of data collection from the LHC by rejecting those events considered “uninteresting” (meaning that they have already been studied). This event rate reduction is achieved by applying topological requirements to the physical objects present in each event. It is very important to make sure that this trigger does not reject any “interesting” event. Therefore we need to verify its correct functioning. The goal of this summer student project is to study the response of two L1Topo algorithms (concerning ∆R and invariant mass). To do so I will compare the trigger decisions produced by the L1Topo hardware with the ones produced by the “official” L1Topo simulation. This way I will be able to identify events that could be incorrectly rejected. Simultaneously I will produce an emulation of these triggers that will help me understand the cause of disagreements bet...

  1. Triggered tremor sweet spots in Alaska

    Science.gov (United States)

    Gomberg, Joan; Prejean, Stephanie

    2013-01-01

    To better understand what controls fault slip along plate boundaries, we have exploited the abundance of seismic and geodetic data available from the richly varied tectonic environments composing Alaska. A search for tremor triggered by 11 large earthquakes throughout all of seismically monitored Alaska reveals two tremor “sweet spots”—regions where large-amplitude seismic waves repeatedly triggered tremor between 2006 and 2012. The two sweet spots are located in very different tectonic environments—one just trenchward of and between the Aleutian islands of Unalaska and Akutan, and the other in central mainland Alaska. The Unalaska/Akutan spot corroborates previous evidence that the region is ripe for tremor, perhaps because it is located where plate-interface frictional properties transition between stick-slip and stably sliding in both the dip direction and laterally. The mainland sweet spot coincides with a region of complex and uncertain plate interactions, and where no slow slip events or major crustal faults have been noted previously. Analyses showed that larger triggering wave amplitudes, and perhaps lower frequencies, favored the triggering of tremor. However, neither the maximum amplitude in the time domain nor in a particular frequency band, nor the geometric relationship of the wavefield to the tremor source faults alone ensures a high probability of triggering. Triggered tremor at the two sweet spots also does not occur during slow slip events visually detectable in GPS data, although slow slip below the detection threshold may have facilitated tremor triggering.

  2. REMOTE SENSING APPLICATIONS WITH HIGH RELIABILITY IN CHANGJIANG WATER RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    L. Ma

    2018-04-01

    Remote sensing technology has been widely used in many fields, but most applications cannot obtain information with both high reliability and high accuracy at large scale, especially applications using automatic interpretation methods. We have designed an application-oriented technology system (PIR) composed of a series of accurate interpretation techniques, which can achieve over 85 % correctness in Water Resource Management from the viewpoints of photogrammetry and expert knowledge. The techniques comprise spatial positioning techniques from the view of photogrammetry, feature interpretation techniques from the view of expert knowledge, and rationality analysis techniques from the view of data mining. Each interpreted polygon is accurate enough to be applied to accuracy-sensitive projects, such as the Three Gorges Project and the South-to-North Water Diversion Project. In this paper, we present several remote sensing applications with high reliability in Changjiang Water Resource Management, including water pollution investigation, illegal construction inspection, and water conservation monitoring, etc.

  3. Remote Sensing Applications with High Reliability in Changjiang Water Resource Management

    Science.gov (United States)

    Ma, L.; Gao, S.; Yang, A.

    2018-04-01

    Remote sensing technology has been widely used in many fields, but most applications cannot obtain information with both high reliability and high accuracy at large scale, especially applications using automatic interpretation methods. We have designed an application-oriented technology system (PIR) composed of a series of accurate interpretation techniques, which can achieve over 85 % correctness in Water Resource Management from the viewpoints of photogrammetry and expert knowledge. The techniques comprise spatial positioning techniques from the view of photogrammetry, feature interpretation techniques from the view of expert knowledge, and rationality analysis techniques from the view of data mining. Each interpreted polygon is accurate enough to be applied to accuracy-sensitive projects, such as the Three Gorges Project and the South-to-North Water Diversion Project. In this paper, we present several remote sensing applications with high reliability in Changjiang Water Resource Management, including water pollution investigation, illegal construction inspection, and water conservation monitoring, etc.

  4. The Level 0 Trigger Processor for the NA62 experiment

    International Nuclear Information System (INIS)

    Chiozzi, S.; Gamberini, E.; Gianoli, A.; Mila, G.; Neri, I.; Petrucci, F.; Soldi, D.

    2016-01-01

    In the NA62 experiment at CERN, the intense flux of particles requires a high-performance trigger for the data acquisition system. A Level 0 Trigger Processor (L0TP) was realized, performing the event selection based on trigger primitives coming from sub-detectors and reducing the trigger rate from 10 MHz to 1 MHz. The L0TP is based on a commercial FPGA device and has been implemented in two different solutions. The performance of the two systems is highlighted and compared.

  5. The Level 0 Trigger Processor for the NA62 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Chiozzi, S. [INFN, Ferrara (Italy); Gamberini, E. [University of Ferrara and INFN, Ferrara (Italy); Gianoli, A. [INFN, Ferrara (Italy); Mila, G. [University of Turin and INFN, Turin (Italy); Neri, I., E-mail: neri@fe.infn.it [University of Ferrara and INFN, Ferrara (Italy); Petrucci, F. [University of Ferrara and INFN, Ferrara (Italy); Soldi, D. [University of Turin and INFN, Turin (Italy)

    2016-07-11

    In the NA62 experiment at CERN, the intense flux of particles requires a high-performance trigger for the data acquisition system. A Level 0 Trigger Processor (L0TP) was realized, performing the event selection based on trigger primitives coming from sub-detectors and reducing the trigger rate from 10 MHz to 1 MHz. The L0TP is based on a commercial FPGA device and has been implemented in two different solutions. The performance of the two systems is highlighted and compared.

  6. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Neubauer, M; The ATLAS collaboration

    2009-01-01

    As the LHC luminosity is ramped up to the design level of 10^{34} cm^{-2} s^{-1} and beyond, the high rates, multiplicities, and energies of particles seen by the detectors will pose a unique challenge. Only a tiny fraction of the produced collisions can be stored on tape and immense real-time data reduction is needed. An effective trigger system must maintain high trigger efficiencies for the physics we are most interested in, and at the same time suppress the enormous QCD backgrounds. This requires massive computing power to minimize the online execution time of complex algorithms. A multi-level trigger is an effective solution for an otherwise impossible problem. The Fast Tracker (FTK) is a proposed upgrade to the ATLAS trigger system that will operate at full Level-1 output rates and provide high quality tracks reconstructed over the entire detector by the start of processing in Level-2. FTK solves the combinatorial challenge inherent to tracking by exploiting the massive parallelism of Associative Memori...
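    The associative-memory idea sketched in this abstract can be caricatured in software: a precomputed bank of coarse hit patterns ("roads") is compared against the event's hits, with real AM hardware performing all comparisons in parallel. The pattern bank and event below are invented toy data, not the FTK geometry:

    ```python
    # Toy model of associative-memory track finding: a bank of precomputed
    # "roads" (one coarse hit address per detector layer) is matched against
    # the event's hit addresses. Hardware does this for all roads in parallel.
    PATTERN_BANK = [
        (3, 5, 8, 12),   # road 0: expected coarse hit address per layer
        (3, 6, 9, 13),   # road 1
        (4, 6, 8, 11),   # road 2
    ]

    def matched_roads(hits_per_layer, max_missing=1):
        """Return indices of roads found in the event, allowing up to
        `max_missing` layers without a matching hit."""
        roads = []
        for i, road in enumerate(PATTERN_BANK):
            missing = sum(addr not in hits_per_layer[layer]
                          for layer, addr in enumerate(road))
            if missing <= max_missing:
                roads.append(i)
        return roads

    event = [{3, 4}, {5}, {8}, {12, 13}]  # hit addresses per layer
    print(matched_roads(event))  # → [0]
    ```

    In FTK, the coarse matches found this way seed a precise track fit at full hit resolution; the sketch stops at the pattern-matching step.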

  7. Development, Validation and Integration of the ATLAS Trigger System Software in Run 2

    CERN Document Server

    Keyes, Robert; The ATLAS collaboration

    2016-01-01

    The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware and software, associated with various sub-detectors that must seamlessly cooperate in order to select 1 collision of interest out of every 40,000 delivered by the LHC every millisecond. This talk will discuss the challenges, workflow and organization of the ongoing trigger software development, validation and deployment. This development, from the top-level integration and configuration to the individual components responsible for each sub-system, is done to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. This optimization hinges on the reliability and predictability of the software performance, which is why validation is of the utmost importance. The software adheres to a hierarchical release structure, with newly validated releases propagating upwards. Integration tests are carried out on a daily basis to ensure that the releases deployed to the online trigger farm duri...

  8. LHCb : The LHCb trigger system and its upgrade

    CERN Multimedia

    Dziurda, Agnieszka

    2015-01-01

    The current LHCb trigger system consists of a hardware level, which reduces the LHC inelastic collision rate of 30 MHz to 1 MHz, at which the entire detector is read out. In a second level, implemented in a farm of 20k parallel-processing CPUs, the event rate is reduced to about 5 kHz. We review the performance of the LHCb trigger system during Run I of the LHC. Special attention is given to the use of multivariate analyses in the High Level Trigger. The major bottleneck for hadronic decays is the hardware trigger. LHCb plans a major upgrade of the detector and DAQ system in the LHC shutdown of 2018, enabling a purely software based trigger to process the full 30 MHz of inelastic collisions delivered by the LHC. We demonstrate that the planned architecture will be able to meet this challenge. We discuss the use of disk space in the trigger farm to buffer events while performing run-by-run detector calibrations, and the way this real time calibration and subsequent full event reconstruction will allow LHCb to ...

  9. Design studies for the Double Chooz trigger

    Energy Technology Data Exchange (ETDEWEB)

    Cucoanes, Andi Sebastian

    2009-07-24

    The main characteristic of the neutrino mixing effect is assumed to be the coupling between the flavor and the mass eigenstates. Three mixing angles (θ12, θ23, θ13) describe the magnitude of this effect. Still unknown, θ13 is considered very small, based on the measurement done by the CHOOZ experiment. A leading experiment will be Double Chooz, placed in the Ardennes region, on the same site as used by CHOOZ. The Double Chooz goal is the exploration of ~80% of the currently allowed θ13 region, by searching for the disappearance of reactor antineutrinos. Double Chooz will use two similar detectors, located at different distances from the reactor cores: a near one at ~150 m, where no oscillations are expected, and a far one at 1.05 km distance, close to the first minimum of the survival probability function. The measurement foresees a precise comparison of neutrino rates and spectra between both detectors. The detection mechanism is based on the inverse β-decay. The Double Chooz detectors have been designed to minimize the rate of random background. In a simplified view, two optically separated regions are considered. The target, filled with Gd-doped liquid scintillator, is the main antineutrino interaction volume. Surrounding the target, the inner veto region aims to tag the cosmogenic muon background which hits the detector. Both regions are viewed by photomultipliers. The Double Chooz trigger system has to be highly efficient for antineutrino events as well as for several types of background. The trigger analyzes discriminated signals from the central region and the inner veto photomultipliers. The trigger logic is fully programmable and can combine the input signals. The trigger conditions are based on the total energy released in the event and on the PMT group multiplicity. For redundancy, two independent trigger boards will be used for the central region, each of

  10. Readout and triggering of the Soudan 2 nucleon decay experiment

    International Nuclear Information System (INIS)

    Thron, J.L.

    1984-01-01

    The readout and triggering electronics for the Soudan 2 proton decay detector is presented. Practically all the electronics is implemented in CMOS. The triggering scheme is highly flexible and software controllable

  11. Use of GPUs in Trigger Systems

    Science.gov (United States)

    Lamanna, Gianluca

    In recent years, interest in using graphics processors (GPUs) in general-purpose high-performance computing has been rising constantly. In this paper we discuss the possible use of GPUs to construct a fast and effective real-time trigger system, at both the software and hardware levels. In particular, we study the integration of such a system in the NA62 trigger. The first application of GPUs for ring pattern recognition in the RICH will be presented. The results obtained show that there are no showstoppers for trigger systems with relatively low latency. Thanks to the use of off-the-shelf technology, in continuous development for the video game and image processing markets, the architecture described could easily be exported to other experiments, to build a versatile and fully customizable online selection.

  12. Flexible event reconstruction software chains with the ALICE High-Level Trigger

    International Nuclear Information System (INIS)

    Ram, D; Breitner, T; Szostak, A

    2012-01-01

    The ALICE High-Level Trigger (HLT) has a large high-performance computing cluster at CERN whose main objective is to perform real-time analysis on the data generated by the ALICE experiment and scale it down to at most 4 GB/s, the current maximum mass-storage bandwidth available. Data flow in this cluster is controlled by a custom-designed software framework. It consists of a set of components which can communicate with each other via a common control interface. The software framework also supports the creation of different configurations based on the detectors participating in the HLT. These configurations define a logical data processing “chain” of detector data-analysis components. Data flows through this software chain in a pipelined fashion so that several events can be processed at the same time. An instance of such a chain can run and manage a few thousand physics analysis and data-flow components. The HLT software and the configuration scheme used in the 2011 heavy-ion runs of ALICE are discussed in this contribution.

  13. Error Recovery in the Time-Triggered Paradigm with FTT-CAN.

    Science.gov (United States)

    Marques, Luis; Vasconcelos, Verónica; Pedreiras, Paulo; Almeida, Luís

    2018-01-11

    Data networks are naturally prone to interference that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve the resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design time. These networks offer prompt error detection but slow error recovery, which can only be compensated with bandwidth overprovisioning. In contrast, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover from transmission errors in a time-triggered Controller Area Network (CAN), based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst-case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and (2) limit the direct and indirect interference on the message set, preserving overall system schedulability. Extensive simulations of multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, compared with traditional approaches available in the literature based on adding extra predefined transmission slots.
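
    The server-sizing logic described in this abstract, bounding worst-case error recovery with a Poisson fault-arrival model against a reliability goal, can be sketched as follows. The fault rate, window length and goal below are invented for illustration, not the paper's parameters.

```python
import math

def poisson_tail(lam, k):
    """P[N > k] for N ~ Poisson(lam): probability of more than k faults."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k + 1))

def min_retransmission_budget(fault_rate_hz, window_s, goal):
    """Smallest per-window retransmission budget k such that the probability
    of more than k faults in the window stays below 1 - goal."""
    lam = fault_rate_hz * window_s          # expected faults per window
    k = 0
    while poisson_tail(lam, k) > 1.0 - goal:
        k += 1
    return k

# Illustrative numbers: 30 faults/hour, a 100 ms recovery window,
# and a reliability goal of 1 - 1e-9 per window.
print(min_retransmission_budget(30 / 3600, 0.1, 1 - 1e-9))  # -> 2
```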

  14. Roads at risk - the impact of debris flows on road network reliability and vulnerability in southern Norway

    Science.gov (United States)

    Meyer, Nele Kristin; Schwanghart, Wolfgang; Korup, Oliver

    2014-05-01

    Norway's road network is frequently affected by debris flows. Both damage repair and traffic interruption generate high economic losses and necessitate a rigorous assessment of where losses are expected to be high and where preventive measures should be focused. In recent studies, we have developed susceptibility and trigger probability maps that serve as input into a hazard calculation at the scale of first-order watersheds. Here we combine these results with graph theory to assess the impact of debris flows on the road network of southern Norway. Susceptibility and trigger probability are aggregated for individual road sections to form a reliability index that relates to the failure probability of a link that connects two network vertices, e.g., road junctions. We define link vulnerability as a function of traffic volume and additional link failure distance. Additional link failure distance is the extra length of the alternative path connecting the two associated link vertices in case the network link fails, and is calculated by a shortest-path algorithm. The product of the network reliability and vulnerability indices represents the risk index. High risk indices identify critical links for the Norwegian road network and are investigated in more detail. Scenarios demonstrating the impact of single or multiple debris flow events are run for the most important routes between seven large cities in southern Norway. First results show that the reliability of the road network is lowest in the central and north-western part of the study area. Road network vulnerability is highest in the mountainous regions in central southern Norway, where the road density is low, and in the vicinity of cities, where the traffic volume is large. The scenarios indicate that city connections whose shortest path crosses the central part of the study area have the highest risk of route failure.
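
    The risk index described above combines a link's failure probability, its traffic volume, and the additional link failure distance obtained from a shortest-path computation. A minimal sketch, assuming a toy symmetric road graph with invented distances, traffic and probabilities:

```python
import heapq

def dijkstra(adj, src, dst):
    """Shortest path length from src to dst; adj maps node -> {nbr: length}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue                        # stale queue entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def additional_failure_distance(adj, u, v):
    """Extra distance of the best detour if link (u, v) fails."""
    w = adj[u].pop(v); adj[v].pop(u)        # temporarily remove the link
    detour = dijkstra(adj, u, v)
    adj[u][v] = w; adj[v][u] = w            # restore it
    return detour - w

# Toy network (hypothetical distances in km), symmetric adjacency.
roads = {
    "A": {"B": 10, "C": 25},
    "B": {"A": 10, "C": 12},
    "C": {"A": 25, "B": 12},
}
failure_prob = 0.02                         # from susceptibility/trigger maps
traffic = 500                               # vehicles/day on link A-B
extra = additional_failure_distance(roads, "A", "B")   # detour A-C-B: 27 km extra
risk = failure_prob * traffic * extra       # simple risk index for link A-B
print(extra, risk)
```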

  15. Reliability studies of a high-power proton accelerator for accelerator-driven system applications for nuclear waste transmutation

    Energy Technology Data Exchange (ETDEWEB)

    Burgazzi, Luciano [ENEA-Centro Ricerche 'Ezio Clementel', Advanced Physics Technology Division, Via Martiri di Monte Sole, 4, 40129 Bologna (Italy)]. E-mail: burgazzi@bologna.enea.it; Pierini, Paolo [INFN-Sezione di Milano, Laboratorio Acceleratori e Superconduttivita Applicata, Via Fratelli Cervi 201, I-20090 Segrate (MI) (Italy)

    2007-04-15

    The main effort of the present study is to analyze the availability and reliability of a high-performance linac (linear accelerator) conceived for Accelerator-Driven System (ADS) purposes and to suggest recommendations, in order both to meet the high operability goals and to satisfy the safety requirements dictated by the reactor system. A Reliability Block Diagram (RBD) approach has been adopted for system modelling, according to the present level of definition of the design: component failure modes are assessed in terms of Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR), and reliability and availability figures are derived by applying current reliability algorithms. The lack of a well-established component database has been pointed out as the main issue in the accelerator reliability assessment. The results, affected by the conservative character of the study, show a high margin for improvement in the predicted accelerator reliability and availability figures. The paper outlines a viable path towards accelerator reliability and availability enhancement and delineates the most appropriate strategies. The improvement in the reliability characteristics along this path is shown as well.
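
    The RBD quantities mentioned here follow from standard formulas: steady-state availability A = MTBF / (MTBF + MTTR), with series blocks multiplying availabilities and redundant (parallel) blocks multiplying unavailabilities. A small sketch with hypothetical MTBF/MTTR values, not the paper's data:

```python
def availability(mtbf, mttr):
    """Steady-state availability of a repairable component."""
    return mtbf / (mtbf + mttr)

def series(*avails):
    """All blocks required: availabilities multiply."""
    p = 1.0
    for a in avails:
        p *= a
    return p

def parallel(*avails):
    """Redundant blocks: the system fails only if all fail."""
    q = 1.0
    for a in avails:
        q *= (1.0 - a)
    return 1.0 - q

# Hypothetical linac section: one RF source (MTBF 2000 h, MTTR 8 h) in series
# with a doubly redundant power supply (MTBF 5000 h, MTTR 24 h each).
rf = availability(2000, 8)
ps = availability(5000, 24)
sys_avail = series(rf, parallel(ps, ps))
print(round(sys_avail, 5))
```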

  16. Reliability studies of a high-power proton accelerator for accelerator-driven system applications for nuclear waste transmutation

    International Nuclear Information System (INIS)

    Burgazzi, Luciano; Pierini, Paolo

    2007-01-01

    The main effort of the present study is to analyze the availability and reliability of a high-performance linac (linear accelerator) conceived for Accelerator-Driven System (ADS) purposes and to suggest recommendations, in order both to meet the high operability goals and to satisfy the safety requirements dictated by the reactor system. A Reliability Block Diagram (RBD) approach has been adopted for system modelling, according to the present level of definition of the design: component failure modes are assessed in terms of Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR), and reliability and availability figures are derived by applying current reliability algorithms. The lack of a well-established component database has been pointed out as the main issue in the accelerator reliability assessment. The results, affected by the conservative character of the study, show a high margin for improvement in the predicted accelerator reliability and availability figures. The paper outlines a viable path towards accelerator reliability and availability enhancement and delineates the most appropriate strategies. The improvement in the reliability characteristics along this path is shown as well.

  17. Management systems for high reliability organizations. Integration and effectiveness; Managementsysteme fuer Hochzuverlaessigkeitsorganisationen. Integration und Wirksamkeit

    Energy Technology Data Exchange (ETDEWEB)

    Mayer, Michael

    2015-03-09

    The scope of this thesis is the development of a method for improving efficient integrated management systems for high reliability organizations (HRO). A comprehensive analysis of severe accident prevention is performed. Severe accident management, mitigation measures and business continuity management are not included. High reliability organizations are complex and potentially dynamic organizational forms that can be inherently dangerous, such as nuclear power plants, offshore platforms, chemical facilities, large ships or large aircraft. A recursive generic management system model (RGM) was developed based on the following factors: systemic and cybernetic aspects; integration of different management fields; high decision quality; integration of efficient methods of safety and risk analysis; integration of human reliability aspects; and effectiveness evaluation and improvement.

  18. A novel high reliability CMOS SRAM cell

    Energy Technology Data Exchange (ETDEWEB)

    Xie Chengmin; Wang Zhongfang; Wu Longsheng; Liu Youbao, E-mail: hglnew@sina.com [Computer Research and Design Department, Xi'an Microelectronic Technique Institutes, Xi'an 710054 (China)]

    2011-07-15

    A novel 8T single-event-upset (SEU) hardened and high static noise margin (SNM) SRAM cell is proposed. By adding one transistor in parallel with each access transistor, the drive capability of the pull-up PMOS is greater than in the conventional cell and the read access transistors are weaker than in the conventional cell, so the hold SNM, read SNM and critical charge increase greatly. The simulation results show that the critical charge is almost three times larger than that of the conventional 6T cell when the pull-up transistors are sized appropriately. The hold and read SNM of the new cell increase by 72% and 141.7%, respectively, compared to the 6T design, but it has a 54% area overhead and a read performance penalty. These features make this novel cell suitable for high reliability applications, such as aerospace and military. (semiconductor integrated circuits)

  19. A novel high reliability CMOS SRAM cell

    International Nuclear Information System (INIS)

    Xie Chengmin; Wang Zhongfang; Wu Longsheng; Liu Youbao

    2011-01-01

    A novel 8T single-event-upset (SEU) hardened and high static noise margin (SNM) SRAM cell is proposed. By adding one transistor in parallel with each access transistor, the drive capability of the pull-up PMOS is greater than in the conventional cell and the read access transistors are weaker than in the conventional cell, so the hold SNM, read SNM and critical charge increase greatly. The simulation results show that the critical charge is almost three times larger than that of the conventional 6T cell when the pull-up transistors are sized appropriately. The hold and read SNM of the new cell increase by 72% and 141.7%, respectively, compared to the 6T design, but it has a 54% area overhead and a read performance penalty. These features make this novel cell suitable for high reliability applications, such as aerospace and military. (semiconductor integrated circuits)

  20. ATLAS Jet Trigger Update for the LHC Run II

    CERN Document Server

    Prince, Sebastien; The ATLAS collaboration

    2015-01-01

    After the current shutdown, the LHC is about to resume operation for a new data-taking period, in which it will operate with increased luminosity, event rate and centre-of-mass energy. The new conditions will impose more demanding constraints on the ATLAS online trigger reconstruction and selection system. To cope with these increased constraints, the ATLAS High Level Trigger, placed after a first hardware-based Level-1 trigger, has been redesigned by merging two previously separate software-based processing levels. In the new joint processing level, the algorithms run in the same computing nodes, thus sharing resources, minimizing the data transfer from the detector buffers and increasing algorithm flexibility. The Jet trigger software selects events containing high transverse momentum hadronic jets. It needs optimal jet energy resolution to help reject an overwhelming background while retaining good efficiency for interesting jets. In particular, this requires the CPU-intensive reconstruction of tridimen...

  1. Design of piezoelectric transducer layer with electromagnetic shielding and high connection reliability

    Science.gov (United States)

    Qiu, Lei; Yuan, Shenfang; Shi, Xiaoling; Huang, Tianxiang

    2012-07-01

    Piezoelectric transducer (PZT) and Lamb wave based structural health monitoring (SHM) methods have been widely studied for on-line SHM of high-performance structures. To monitor large-scale structures, a dense PZT array is required. In order to improve the placement efficiency and reduce the wiring burden of the PZT array, the concept of the piezoelectric transducers layer (PSL) was proposed. The PSL consists of PZTs, a flexible interlayer with printed wires and a signal input/output interface. For on-line SHM on real aircraft structures, there are two main issues: electromagnetic interference and the connection reliability of the PSL. To address these issues, an electromagnetic shielding design method for the PSL to reduce spatial electromagnetic noise and crosstalk is proposed, and a connection reliability design method based on a combined welding-cementation process is proposed to enhance the connection reliability between the PZTs and the flexible interlayer. Two experiments on electromagnetic interference suppression are performed to validate the shielding design of the PSL. The experimental results show that the amplitudes of the spatial electromagnetic noise and crosstalk output from the shielded PSL developed in this paper are −15 dB and −25 dB lower than those of the ordinary PSL, respectively. Two other experiments, on temperature durability (−55 °C to 80 °C) and strength durability (160–1600 με, one million load cycles), are applied to the PSL to validate the connection reliability. The low repeatability errors (less than 3% and less than 5%, respectively) indicate that the developed PSL is of high connection reliability and long fatigue life.

  2. The ATLAS Trigger in Run-2 - Design, Menu and Performance

    CERN Document Server

    Vazquez Schroeder, Tamara; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a center-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger. Events are selected based on physics signatures such as the presence of energetic leptons, photons, jets or large missing energy. The trigger system exploits topological information, as well as multi-variate methods, to carry out the necessary physics filtering. In total, the ATLAS online selection consists of thousands of different individual triggers. Taken together, these constitute the trigger menu, which reflects the physics goals of the collaboration while taking into account available data-taking resources. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and number of interactions per bunch crossing (pileup) which are the result of the...

  3. Feasibility studies of a Level-1 Tracking Trigger for ATLAS

    CERN Document Server

    Warren, M; Brenner, R; Konstantinidis, N; Sutton, M

    2009-01-01

    The existing ATLAS Level-1 trigger system is seriously challenged at the SLHC's higher luminosity. A hardware tracking trigger might be needed, but requires a detailed understanding of the detector. Simulation of high pile-up events, with various data-reduction techniques applied will be described. Two scenarios are envisaged: (a) regional readout - calorimeter and muon triggers are used to identify portions of the tracker; and (b) track-stub finding using special trigger layers. A proposed hardware system, including data reduction on the front-end ASICs, readout within a super-module and integrating regional triggering into all levels of the readout system, will be discussed.

  4. An Experimental Setup to Measure the Minimum Trigger Energy for Magneto-Thermal Instability in Nb$_{3}$Sn Strands

    CERN Document Server

    Takala, E; Bremer, J; Balle, C; Bottura, L; Rossi, L

    2012-01-01

    Magneto-thermal instability may affect high critical current density Nb$_{3}$Sn superconducting strands, which can quench even though the transport current is low compared to the critical current, with important implications for the design of next-generation superconducting magnets. The instability is initiated by a small perturbation energy which is considerably lower than the Minimum Quench Energy (MQE). At CERN, a new experimental setup was developed to measure the smallest perturbation energy (Minimum Trigger Energy, MTE) able to trigger the magneto-thermal instability in superconducting Nb$_{3}$Sn strands. The setup is based on Q-switched laser technology, which is able to provide a localized perturbation on a nanosecond time scale. Using this technique, the energy deposition into the strand is well defined and reliable. The laser is located outside the cryostat at room temperature. The beam is guided from room temperature onto the superconducting strand using a UV-enhanced fused silica fibre. The ...

  5. Hadron correlation in jets on the near and away sides of high-pT triggers in heavy-ion collisions

    International Nuclear Information System (INIS)

    Hwa, Rudolph C.; Yang, C. B.

    2009-01-01

    The correlation between the trigger and associated particles in jets produced on the near and away sides of high-pT triggers in heavy-ion collisions is studied. Hadronization of jets on both sides is treated by thermal-shower and shower-shower recombination. The energy loss of semihard and hard partons traversing the nuclear medium is parametrized in a way that renders a good fit of the single-particle inclusive distributions at all centralities. The associated hadron distribution in the near-side jet can be determined, showing weak dependence on system size because of trigger bias. The inverse slope increases with trigger momentum in agreement with data. The distribution of associated particles in the away-side jet is also studied, with careful attention given to the antitrigger bias that is due to the longer path length that the away-side jet, recoiling against the trigger jet, must propagate in the medium to reach the opposite side. Centrality dependence is taken into account after determining a realistic probability distribution of the dynamical path length of the parton trajectory within each class of centrality. For symmetric dijets with pT^trig = pT^assoc(away), it is shown that the per-trigger yield is dominated by tangential jets. For unequal pT^trig, pT^assoc(near) and pT^assoc(away), the yields are calculated for various centralities, showing an intricate relationship among them. The near-side yield agrees with data both in centrality dependence and in the pT^assoc(near) distribution. The average parton momentum for the recoil jet is shown to be always larger than that of the trigger jet for fixed pT^trig and centrality and for any measurable pT^assoc(away). With the comprehensive treatment of dijet production described here, it is possible to answer many questions regarding the behavior of partons in the medium under conditions that can be specified in terms of measurable hadron momenta.

  6. Technology Improvement for the High Reliability LM-2F Launch Vehicle

    Institute of Scientific and Technical Information of China (English)

    QIN Tong; RONG Yi; ZHENG Liwei; ZHANG Zhi

    2017-01-01

    The Long March 2F (LM-2F) launch vehicle, the only launch vehicle designed for manned space flight in China, successfully launched the Tiangong 2 space laboratory and the Shenzhou 11 manned spaceship into orbit in 2016. This study introduces the technological improvements made to enhance the reliability of the LM-2F launch vehicle in the areas of general technology, the control system, manufacturing and the ground support system. The LM-2F launch vehicle will continue to contribute to the Chinese Space Station Project with its high reliability and 100% success rate.

  7. Antarctic icequakes triggered by the 2010 Maule earthquake in Chile

    Science.gov (United States)

    Peng, Zhigang; Walter, Jacob I.; Aster, Richard C.; Nyblade, Andrew; Wiens, Douglas A.; Anandakrishnan, Sridhar

    2014-09-01

    Seismic waves from distant, large earthquakes can almost instantaneously trigger shallow micro-earthquakes and deep tectonic tremor as they pass through Earth's crust. Such remotely triggered seismic activity mostly occurs in tectonically active regions. Triggered seismicity is generally considered to reflect shear failure on critically stressed fault planes and is thought to be driven by dynamic stress perturbations from both Love and Rayleigh types of surface seismic wave. Here we analyse seismic data from Antarctica in the six hours leading up to and following the 2010 Mw 8.8 Maule earthquake in Chile. We identify many high-frequency seismic signals during the passage of the Rayleigh waves generated by the Maule earthquake, and interpret them as small icequakes triggered by the Rayleigh waves. The source locations of these triggered icequakes are difficult to determine owing to sparse seismic network coverage, but the triggered events generate surface waves, so are probably formed by near-surface sources. Our observations are consistent with tensile fracturing of near-surface ice or other brittle fracture events caused by changes in volumetric strain as the high-amplitude Rayleigh waves passed through. We conclude that cryospheric systems can be sensitive to large distant earthquakes.

  8. Toward reliable and repeatable automated STEM-EDS metrology with high throughput

    Science.gov (United States)

    Zhong, Zhenxin; Donald, Jason; Dutrow, Gavin; Roller, Justin; Ugurlu, Ozan; Verheijen, Martin; Bidiuk, Oleksii

    2018-03-01

    New materials and designs in complex 3D architectures in logic and memory devices have increased the complexity of S/TEM metrology. In this paper, we report on a newly developed, automated, scanning transmission electron microscopy (STEM) based, energy dispersive X-ray spectroscopy (STEM-EDS) metrology method that addresses these challenges. Different methodologies toward repeatable and efficient automated STEM-EDS metrology with high throughput are presented: we introduce the best known auto-EDS acquisition and quantification methods for robust and reliable metrology and present how electron exposure dose impacts EDS metrology reproducibility, either due to poor signal-to-noise ratio (SNR) at low dose or due to sample modifications at high dose conditions. Finally, we discuss the limitations of the STEM-EDS metrology technique and propose strategies to optimize the process both in terms of throughput and metrology reliability.

  9. A new fast and programmable trigger logic

    International Nuclear Information System (INIS)

    Fucci, A.; Amendolia, S.R.; Bertolucci, E.; Bottigli, U.; Bradaschia, C.; Foa, L.; Giazotto, A.; Giorgi, M.; Givoletti, M.; Lucardesi, P.; Menzione, A.; Passuello, D.; Quaglia, M.; Ristori, L.; Rolandi, L.; Salvadori, P.; Scribano, A.; Stanga, R.; Stefanini, A.; Vincelli, M.L.

    1977-01-01

    The NA1 (FRAMM) experiment, under construction for the CERN-SPS North Area, deals with more than 1000 counter signals which have to be combined together in order to build sophisticated and highly selective triggers. These requirements have led to the development of low-cost, combinatorial, fast electronics which can advantageously replace standard NIM electronics at the trigger level. The essential features of the basic circuit are: 1) programmability of any desired logical expression; 2) trigger time independent of the chosen expression; 3) reduced cost and compactness due to the use of commercial RAMs, PROMs, and PLAs; 4) short delay, less than 20 ns, between input and output pulses. (Auth.)
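
    The circuit's key properties, programmability of any logical expression with a trigger time independent of that expression, come from precomputing the expression into a RAM lookup table addressed by the input bit pattern. A software sketch of the idea (the example expression is hypothetical):

```python
# Any Boolean expression of N counter inputs is precomputed into a 2**N-entry
# lookup table, so the trigger decision is a single memory access whose
# latency does not depend on the expression's complexity.

def program_trigger(n_inputs, expression):
    """Build the lookup table for an arbitrary Boolean expression.

    expression: callable taking a tuple of n_inputs bits -> truthy/falsy.
    """
    table = []
    for addr in range(2 ** n_inputs):
        bits = tuple((addr >> i) & 1 for i in range(n_inputs))
        table.append(bool(expression(bits)))
    return table

def trigger(table, bits):
    """One 'memory access': index the table with the input bit pattern."""
    addr = sum(b << i for i, b in enumerate(bits))
    return table[addr]

# Hypothetical expression: (counter0 AND counter1) OR counter3, vetoed by counter2.
expr = lambda b: ((b[0] and b[1]) or b[3]) and not b[2]
lut = program_trigger(4, expr)
print(trigger(lut, (1, 1, 0, 0)), trigger(lut, (0, 0, 1, 1)))  # True False
```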

  10. FTK: a Fast Track Trigger for ATLAS

    International Nuclear Information System (INIS)

    Anderson, J; Auerbach, B; Blair, R; Andreani, A; Andreazza, A; Citterio, M; Annovi, A; Beretta, M; Castegnaro, A; Atkinson, M; Cavaliere, V; Chang, P; Bevacqua, V; Crescioli, F; Blazey, G; Bogdan, M; Boveia, A; Canelli, F; Cheng, Y; Cervigni, F

    2012-01-01

    We describe the design and expected performance of the Fast Tracker Trigger (FTK) system for the ATLAS detector at the Large Hadron Collider. The FTK is a highly parallel hardware system designed to operate at the Level-1 trigger output rate. It is designed to provide global tracks reconstructed in the inner detector, with resolution comparable to the full offline reconstruction, as input to the Level-2 trigger processing. The hardware system is based on associative memories for pattern recognition and fast FPGAs for track reconstruction. The FTK is expected to dramatically improve the performance of track-based isolation and b-tagging with little to no dependence on pile-up interactions.

  11. ATLAS Tau Trigger

    CERN Document Server

    Belanger-Champagne, C; Bosman, M; Brenner, R; Casado, MP; Czyczula, Z; Dam, M; Demers, S; Farrington, S; Igonkina, O; Kalinowski, A; Kanaya, N; Osuna, C; Pérez, E; Ptacek, E; Reinsch, A; Saavedra, A; Sopczak, A; Strom, D; Torrence, E; Tsuno, S; Vorwerk, V; Watson, A; Xella, S

    2008-01-01

    Moving to the high energy scale of the LHC, the identification of tau leptons will become a necessary and very powerful tool, allowing the discovery of physics beyond the Standard Model. Many models, among them the light SM Higgs and various SUSY models, predict an abundant production of taus relative to other leptons. The reconstruction of hadronic tau decays, although a very challenging task in hadronic environments, increases signal efficiency by at least a factor of 2 and provides an independent control sample to disentangle leptonic tau decays from prompt electrons and muons. Thanks to its advanced calorimetry and tracking, the ATLAS experiment has developed tools to efficiently identify hadronic taus at the trigger level. In this presentation we review the characteristics of taus and the methods to suppress contributions from low-multiplicity, low-energy jets, and we address the tau trigger chain, which provides a rejection rate of 10^5. We will further present plans for commissioning the ATLA...

  12. Stay away from asthma triggers

    Science.gov (United States)

    Asthma triggers - stay away from; Asthma triggers - avoiding; Reactive airway disease - triggers; Bronchial asthma - triggers ... clothes. They should leave the coat outside or away from your child. Ask people who work at ...

  13. Review of trigger and on-line processors at SLAC

    International Nuclear Information System (INIS)

    Lankford, A.J.

    1984-07-01

    The role of trigger and on-line processors in reducing data rates to manageable proportions in e+e- physics experiments is defined not by high physics or background rates, but by the large event sizes of the general-purpose detectors employed. The rate of e+e- annihilation is low, and backgrounds are not high; yet the number of physics processes which can be studied is vast and varied. This paper begins by briefly describing the role of trigger processors in the e+e- context. The usual flow of the trigger decision process is illustrated with selected examples of SLAC trigger processing. The features of triggering at the SLC and the trigger processing plans of the two SLC detectors, the Mark II and the SLD, are mentioned. The most common on-line processors at SLAC (the BADC, the SLAC Scanner Processor, the SLAC FASTBUS Controller, and the VAX CAMAC Channel) are discussed. Uses of the 168/E, 3081/E, and FASTBUS VAX processors are mentioned. The manner in which these processors are interfaced and the functions they serve on line are described. Finally, the accelerator control system for the SLC is outlined. This paper is a survey in nature and hence relies heavily upon references to previous publications for detailed descriptions of the work mentioned here. 27 references, 9 figures, 1 table

  14. System principles, mathematical models and methods to ensure high reliability of safety systems

    Science.gov (United States)

    Zaslavskyi, V.

    2017-04-01

    Modern safety and security systems are composed of a large number of components designed for the detection, localization, tracking, collection, and processing of information from systems for monitoring, telemetry, control, etc. They are required to be highly reliable in order to correctly perform data aggregation, processing, and analysis for subsequent decision-making support. In the design and construction phases of the manufacturing of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the available component types and various resource constraints, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of successful task performance and eliminates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used to solve optimal redundancy problems on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
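The kind of discrete design optimization described above can be sketched in miniature: choose, for each series-connected subsystem, a mix of redundant component types that maximizes system reliability under a cost cap. The reliability and cost figures, the two-subsystem layout, and the brute-force search are all illustrative assumptions, not the paper's models (which address large-dimension problems with dedicated algorithms).

```python
# Illustrative sketch (assumed numbers, not the paper's models): brute-force
# search over a small discrete design space for a redundancy configuration of
# diverse component types that maximizes series-system reliability under a
# cost budget.
from itertools import product

# Hypothetical component catalogue: type -> (reliability, cost).
TYPES = {"A": (0.90, 1.0), "B": (0.95, 2.0), "C": (0.99, 4.0)}

def parallel_reliability(type_names):
    # A redundant group fails only if all of its (independent) components fail.
    p_fail = 1.0
    for t in type_names:
        p_fail *= 1.0 - TYPES[t][0]
    return 1.0 - p_fail

def best_design(n_subsystems=2, redundancy=2, budget=10.0):
    best = (0.0, None)
    options = list(product(TYPES, repeat=redundancy))   # type mix per subsystem
    for design in product(options, repeat=n_subsystems):
        cost = sum(TYPES[t][1] for sub in design for t in sub)
        if cost > budget:
            continue
        rel = 1.0
        for sub in design:                              # subsystems in series
            rel *= parallel_reliability(sub)
        if rel > best[0]:
            best = (rel, design)
    return best

rel, design = best_design()
print(round(rel, 6), design)
```

Note how the budget pushes the optimum toward mixing a cheap type with an expensive one in each subsystem, the type-variety idea in miniature (diverse types also counter common-cause failures, which this independent-failure model does not capture).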

  15. Electrocardiography-triggered high-resolution CT for reducing cardiac motion artifact. Evaluation of the extent of ground-glass attenuation in patients with idiopathic pulmonary fibrosis

    International Nuclear Information System (INIS)

    Nishiura, Motoko; Johkoh, Takeshi; Yamamoto, Shuji

    2007-01-01

    The aim of this study was to evaluate the reduction of cardiac motion artifact and whether the extent of ground-glass attenuation in idiopathic pulmonary fibrosis (IPF) can be accurately assessed by electrocardiography (ECG)-triggered high-resolution computed tomography (HRCT) on a 0.5-s/rotation multidetector-row CT (MDCT) scanner. ECG-triggered HRCT was scanned at the end-diastolic phase with the following scan parameters: axial four-slice mode, 0.5 mm collimation, 0.5 s/rotation, 120 kVp, 200 mA/rotation, high-frequency algorithm, and half reconstruction. In 42 patients with IPF, both conventional HRCT (no ECG gating, full reconstruction) and ECG-triggered HRCT were performed at the same levels (10-mm intervals) with the above scan parameters. The correlation between the percent diffusing capacity of the lung for carbon monoxide (%DLCO) and the mean extent of ground-glass attenuation on both conventional HRCT and ECG-triggered HRCT was evaluated with the Spearman rank correlation coefficient test. The correlation between %DLCO and the mean extent of ground-glass attenuation on ECG-triggered HRCT (observer A: r=-0.790, P<0.0001; observer B: r=-0.710, P<0.0001) was superior to that on conventional HRCT (observer A: r=-0.395, P<0.05; observer B: r=-0.577, P=0.002) for both observers. ECG-triggered HRCT with 0.5 s/rotation MDCT can reduce cardiac motion artifact and is useful for evaluating the extent of ground-glass attenuation in IPF. (author)
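The statistic quoted above, Spearman's rank correlation, can be computed from first principles: rank both variables and take Pearson's correlation of the ranks (valid when there are no ties). The data below are invented for illustration only; they mimic the study's negative relationship, where the extent of ground-glass attenuation rises as %DLCO falls.

```python
# Minimal Spearman rank correlation (no-ties case): Pearson's r on the ranks.
# The readings are hypothetical, chosen to show a negative, monotone relation
# like the one reported in the study.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

dlco   = [85, 70, 60, 50, 40]   # hypothetical %DLCO values
extent = [ 5, 10, 20, 30, 45]   # hypothetical ground-glass extent scores
print(round(spearman(dlco, extent), 3))  # -1.0 for perfectly monotone data
```

Real data with ties would need average ranks; library routines such as `scipy.stats.spearmanr` handle that case.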

  16. Trigger and data-acquisition challenges at the LHC

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    We review the main requirements placed on the Trigger and Data Acquisition (DAQ) systems of the LHC experiments by their rich physics program and the LHC environment. A description of the architecture of the various systems, the motivation for each alternative, and the conceptual design of each filtering stage will be discussed. We will then turn to a description of the major elements of the three distinct sub-systems: the Level-1 trigger; the DAQ, with particular attention to event building and overall control and monitoring; and finally the High-Level Trigger system and the online farms.

  17. The TIGER trigger processor for the CAMERA detector at COMPASS-II

    Energy Technology Data Exchange (ETDEWEB)

    Baumann, Tobias; Buechele, Maximilian; Fischer, Horst; Gorzellik, Matthias; Grussenmeyer, Tobias; Herrmann, Florian; Joerg, Philipp; Kremser, Paul; Kunz, Tobias; Michalski, Christoph; Schopferer, Sebastian; Szameitat, Tobias [Physikalisches Institut der Universitaet Freiburg, Freiburg im Breisgau (Germany)

    2013-07-01

    In today's nuclear and high-energy physics experiments the background-induced occupancy of the detector channels can be quite high; it is therefore important to have sophisticated trigger subsystems which process the data in real time to generate trigger objects for the global trigger decision. In this work we present an FPGA-based low-latency trigger processor for the COMPASS-II experiment. TIGER is a high-performance trigger processor that was developed to fit into the GANDALF framework and extend its versatility. It is designed as a VXS module and is allocated to the central VXS switch slot, which has a direct link from every payload slot. The synchronous transfer protocol was optimized for low latency and offers a bandwidth of up to 8 Gbit/s per link. The centerpiece of the board is a Xilinx Virtex-6 SX315T FPGA, offering vast programmable logic, embedded memory, and DSP resources. It is accompanied by DDR3 memory, a COM Express CPU, and an MXM GPU. Besides the VXS backplane ports, the board features two SFP+ transceivers, 32 LVDS inputs, and 32 LVDS outputs to interface with the global trigger system, and a Gigabit Ethernet port for configuration and monitoring.

  18. Design of piezoelectric transducer layer with electromagnetic shielding and high connection reliability

    International Nuclear Information System (INIS)

    Qiu, Lei; Yuan, Shenfang; Shi, Xiaoling; Huang, Tianxiang

    2012-01-01

    Piezoelectric transducer (PZT) and Lamb wave based structural health monitoring (SHM) methods have been widely studied for on-line SHM of high-performance structures. To monitor large-scale structures, a dense PZT array is required. In order to improve the placement efficiency and reduce the wiring burden of the PZT array, the concept of the piezoelectric transducer layer (PSL) was proposed. The PSL consists of PZTs, a flexible interlayer with printed wires, and a signal input/output interface. For on-line SHM on real aircraft structures, there are two main issues: electromagnetic interference and the connection reliability of the PSL. To address these issues, an electromagnetic shielding design method for the PSL that reduces spatial electromagnetic noise and crosstalk is proposed, and a connection reliability design method based on a combined welding-cementation process is proposed to enhance the connection reliability between the PZTs and the flexible interlayer. Two experiments on electromagnetic interference suppression are performed to validate the shielding design of the PSL. The experimental results show that the amplitudes of the spatial electromagnetic noise and crosstalk output from the shielded PSL developed in this paper are −15 dB and −25 dB lower, respectively, than those of the ordinary PSL. Two other experiments, on temperature durability (−55 °C to 80 °C) and strength durability (160–1600 με, one million load cycles), were applied to the PSL to validate the connection reliability. The low repeatability errors (less than 3% and less than 5%, respectively) indicate that the developed PSL has high connection reliability and a long fatigue life. (paper)

  19. Quantitative reliability assessment for safety critical system software

    International Nuclear Information System (INIS)

    Chung, Dae Won; Kwon, Soon Man

    2005-01-01

    An essential issue in the replacement of old analogue I and C with computer-based digital systems in nuclear power plants is quantitative software reliability assessment. Software reliability models have been successfully applied to many industrial applications, but have the unfortunate drawback of requiring failure data from which one can formulate a model. Software developed for safety critical applications is frequently unable to produce such data, for at least two reasons: first, the software is frequently one-of-a-kind, and second, it rarely fails. Safety critical software is normally expected to pass every unit test, producing precious little failure data. The basic premise of the rare-events approach is that well-tested software does not fail under normal routine and input signals, which means that failures must be triggered by unusual input data and computer states. The failure data found under such testing cases, and the testing time for these conditions, should be considered for the quantitative reliability assessment. In this paper we present a quantitative reliability assessment methodology for safety critical software in rare failure cases.
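One standard calculation in this zero-failure setting, shown here as a hedged sketch rather than the paper's methodology, is a frequentist upper confidence bound on the per-demand failure probability after n tests with no failures (the Clopper-Pearson bound, which reduces to the "rule of three" at 95% confidence). The test counts are illustrative.

```python
# Upper confidence bound on failure probability given 0 failures in n tests:
# the smallest p such that observing zero failures would be no more likely
# than (1 - confidence), i.e. solve (1 - p)**n = 1 - confidence for p.
# Illustrative numbers, not from the paper.

def failure_prob_upper_bound(n_tests, confidence=0.95):
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n_tests)

for n in (100, 1000, 10000):
    print(n, round(failure_prob_upper_bound(n), 6))
```

At 95% confidence the bound is roughly 3/n, which makes the core difficulty quoted above concrete: demonstrating a failure probability below, say, 1e-4 by testing alone requires on the order of 30,000 failure-free, independent, operationally representative test cases.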

  20. 76 FR 72203 - Voltage Coordination on High Voltage Grids; Notice of Reliability Workshop Agenda

    Science.gov (United States)

    2011-11-22

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. AD12-5-000] Voltage Coordination on High Voltage Grids; Notice of Reliability Workshop Agenda As announced in the Notice of Staff..., from 9 a.m. to 4:30 p.m. to explore the interaction between voltage control, reliability, and economic...

  1. High Reliability Prototype Quadrupole for the Next Linear Collider

    International Nuclear Information System (INIS)

    Spencer, Cherrill M

    2001-01-01

    The Next Linear Collider (NLC) will require over 5600 magnets, each of which must be highly reliable and/or quickly repairable in order for the NLC to reach its 85% overall availability goal. A multidiscipline engineering team was assembled at SLAC to develop a more reliable electromagnet design than had historically been achieved at SLAC. This team carried out a Failure Mode and Effects Analysis (FMEA) on a standard SLAC quadrupole magnet system. They overcame a number of longstanding design prejudices, producing 10 major design changes. This paper describes how a prototype magnet was constructed and the extensive testing carried out on it to prove full functionality with an improvement in reliability. The magnet's fabrication cost is compared to that of a magnet with the same requirements made in the historic SLAC way. The NLC will use over 1600 of these 12.7 mm bore quadrupoles, with integrated strengths ranging from 0.6 to 132 Tesla, a maximum gradient of 135 Tesla per meter, an adjustment range of 0 to -20%, and core lengths from 324 mm to 972 mm. The magnetic center must remain stable to within 1 micron during the 20% adjustment. A magnetic measurement set-up has been developed that can measure sub-micron shifts of a magnetic center. The prototype satisfied the center-shift requirement over the full range of integrated strengths.

  2. Triggered creep as a possible mechanism for delayed dynamic triggering of tremor and earthquakes

    Science.gov (United States)

    Shelly, David R.; Peng, Zhigang; Hill, David P.; Aiken, Chastity

    2011-01-01

    The passage of radiating seismic waves generates transient stresses in the Earth's crust that can trigger slip on faults far away from the original earthquake source. The triggered fault slip is detectable in the form of earthquakes and seismic tremor. However, the significance of these triggered events remains controversial, in part because they often occur with some delay, long after the triggering stress has passed. Here we scrutinize the location and timing of tremor on the San Andreas fault between 2001 and 2010 in relation to distant earthquakes. We observe tremor on the San Andreas fault that is initiated by passing seismic waves, yet migrates along the fault at a much slower velocity than the radiating seismic waves. We suggest that the migrating tremor records triggered slow slip of the San Andreas fault as a propagating creep event. We find that the triggered tremor and fault creep can be initiated by distant earthquakes as small as magnitude 5.4 and can persist for several days after the seismic waves have passed. Our observations of prolonged tremor activity provide a clear example of the delayed dynamic triggering of seismic events. Fault creep has been shown to trigger earthquakes, and we therefore suggest that the dynamic triggering of prolonged fault creep could provide a mechanism for the delayed triggering of earthquakes. © 2011 Macmillan Publishers Limited. All rights reserved.

  3. Global search of triggered non-volcanic tremor

    Science.gov (United States)

    Chao, Tzu-Kai Kevin

    Deep non-volcanic tremor is a newly discovered seismic phenomenon with low amplitude, long duration, and no clear P- and S-waves compared with regular earthquakes. Tremor has been observed at many major plate-boundary faults, providing new information about fault slip behavior below the seismogenic zone. While tremor mostly occurs spontaneously (ambient tremor) or during episodic slow-slip events (SSEs), tremor can also be triggered by the teleseismic waves of distant earthquakes, in which case it is known as "triggered tremor". The primary focus of my Ph.D. work is to understand the physical mechanisms and necessary conditions of triggered tremor through systematic investigations in different tectonic regions. In the first chapter of my dissertation, I conduct a systematic survey of triggered tremor beneath the Central Range (CR) in Taiwan for 45 teleseismic earthquakes with Mw ≥ 7.5 from 1998 to 2009. Triggered tremor is visually identified as bursts of high-frequency (2-8 Hz), non-impulsive, long-duration seismic energy that are coherent among many seismic stations and modulated by the teleseismic surface waves. A total of 9 teleseismic earthquakes triggered clear tremor in Taiwan. The peak ground velocity (PGV) of the teleseismic surface waves is the most important factor in determining the tremor triggering potential, with an apparent threshold of ~0.1 cm/s, or 7-8 kPa. However, this threshold is partially controlled by the background noise level, preventing triggered tremor with weaker amplitude from being observed. In addition, I find a positive correlation between the PGV and the triggered tremor amplitude, which is consistent with the prediction of the 'clock-advance' model. This suggests that triggered tremor can be considered a sped-up occurrence of ambient tremor under fast loading from the passing surface waves. Finally, the incidence angles of the surface waves also play an important role in controlling the tremor triggering potential. The next

  4. The NA27 trigger

    International Nuclear Information System (INIS)

    Bizzarri, R.; Di Capua, E.; Falciano, S.; Iori, M.; Marel, G.; Piredda, G.; Zanello, L.; Haupt, L.; Hellman, S.; Holmgren, S.O.; Johansson, K.E.

    1985-05-01

    We have designed and implemented a minimum bias trigger together with a fiducial volume trigger for the experiment NA27, performed at the CERN SPS. A total of more than 3 million bubble chamber pictures have been taken with a triggered cross section smaller than 75% of the total inelastic cross section. Events containing charm particles were triggered with an efficiency of 98 (+2/-3)%. With the fiducial volume trigger, the probability for a picture to contain an interaction in the visible hydrogen increased from 47.3% to 59.5%, reducing film cost and processing effort by about 20%. The improvement in data taking rate is shown to be negligible. (author)

  5. The ATLAS Trigger Algorithms for General Purpose Graphics Processor Units

    CERN Document Server

    Tavares Delgado, Ademar; The ATLAS collaboration

    2016-01-01

    We present the ATLAS Trigger algorithms developed to exploit General-Purpose Graphics Processor Units (GPUs). ATLAS is a particle physics experiment located at the LHC collider at CERN. The ATLAS Trigger system has two levels: the hardware-based Level 1 and the High Level Trigger, implemented in software running on a farm of commodity CPUs. Performing the trigger event selection within the available farm resources presents a significant challenge that will increase with future LHC upgrades. GPUs are being evaluated as a potential solution for trigger algorithm acceleration. Key factors determining the potential benefit of this new technology are the relative execution speedup, the number of GPUs required, and the relative financial cost of the selected GPU. We have developed a trigger demonstrator which includes algorithms for reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Cal...

  6. High School Dropout in Proximal Context: The Triggering Role of Stressful Life Events.

    Science.gov (United States)

    Dupéré, Véronique; Dion, Eric; Leventhal, Tama; Archambault, Isabelle; Crosnoe, Robert; Janosz, Michel

    2018-03-01

    Adolescents who drop out of high school experience enduring negative consequences across many domains. Yet, the circumstances triggering their departure are poorly understood. This study examined the precipitating role of recent psychosocial stressors by comparing three groups of Canadian high school students (52% boys; M age = 16.3 years; N = 545): recent dropouts, matched at-risk students who remain in school, and average students. Results indicate that in comparison with the two other groups, dropouts were over three times more likely to have experienced recent acute stressors rated as severe by independent coders. These stressors occurred across a variety of domains. Considering the circumstances in which youth decide to drop out has implications for future research and for policy and practice. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  7. Triggers of oral lichen planus flares and the potential role of trigger avoidance in disease management.

    Science.gov (United States)

    Chen, Hannah X; Blasiak, Rachel; Kim, Edwin; Padilla, Ricardo; Culton, Donna A

    2017-09-01

    Many patients with oral lichen planus (OLP) report triggers of flares, some of which overlap with triggers of other oral diseases, including oral allergy syndrome and oral contact dermatitis. The purpose of this study was to evaluate the prevalence of commonly reported triggers of OLP flares, their overlap with triggers of other oral diseases, and the potential role of trigger avoidance as a management strategy. Questionnaire-based survey of 51 patients with biopsy-proven lichen planus with oral involvement seen in an academic dermatology specialty clinic and/or oral pathology clinic between June 2014 and June 2015. Of the participants, 94% identified at least one trigger of their OLP flares. Approximately half of the participants (51%) reported at least one trigger that overlapped with known triggers of oral allergy syndrome, and 63% identified at least one trigger that overlapped with known triggers of oral contact dermatitis. Emotional stress was the most commonly reported trigger (77%). Regarding avoidance, 79% of the study participants reported avoiding their known triggers in daily life. Of those who actively avoided triggers, 89% reported an improvement in symptoms and 70% reported a decrease in the frequency of flares. Trigger identification and avoidance can play a potentially effective role in the management of OLP. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Overview and performance of the ATLAS Level-1 Topological Trigger

    CERN Document Server

    Damp, Johannes Frederic; The ATLAS collaboration

    2018-01-01

    In 2017 the LHC provided proton-proton collisions to the ATLAS experiment at high luminosity (up to 2.06×10^34 cm^-2 s^-1), placing stringent operational and physics requirements on the ATLAS trigger system, which must reduce the 40 MHz collision rate to a manageable event storage rate of 1 kHz without rejecting interesting physics events. The Level-1 trigger is the first rate-reducing step in the ATLAS trigger system, with an output rate of 100 kHz and a decision latency of less than 2.5 μs. An important role is played by its newly commissioned component: the Level-1 topological trigger (L1Topo). This innovative system consists of two blades designed in the AdvancedTCA form factor, mounting four individual state-of-the-art processors, and providing high input bandwidth and low-latency data processing. Up to 128 topological trigger algorithms can be implemented to select interesting events by applying kinematic and angular requirements to electromagnetic clusters, jets, muons, and total energy. This results in a significantly...
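The "kinematic and angular requirements" applied by a topological trigger can be sketched offline in a few lines. This is an illustration of the kind of selection involved, not L1Topo firmware (which works on reduced-precision hardware objects with fixed latency); the delta-R and invariant-mass thresholds and the candidate objects are made up.

```python
# Toy topological selection on a pair of trigger objects: an angular
# (delta-R) requirement plus a kinematic (invariant-mass) requirement.
# Thresholds and candidates are hypothetical; massless approximation is used.
import math

def delta_r(o1, o2):
    deta = o1["eta"] - o2["eta"]
    dphi = math.remainder(o1["phi"] - o2["phi"], 2 * math.pi)  # wrap to [-pi, pi]
    return math.hypot(deta, dphi)

def inv_mass(o1, o2):
    # Massless two-object invariant mass from pt, eta, phi.
    return math.sqrt(2 * o1["pt"] * o2["pt"]
                     * (math.cosh(o1["eta"] - o2["eta"])
                        - math.cos(o1["phi"] - o2["phi"])))

def topo_accept(o1, o2, dr_max=2.5, m_min=40.0):
    return delta_r(o1, o2) < dr_max and inv_mass(o1, o2) > m_min

mu1 = {"pt": 30.0, "eta": 0.2, "phi": 0.1}   # pt in GeV; hypothetical candidates
mu2 = {"pt": 25.0, "eta": -0.4, "phi": 1.6}
print(topo_accept(mu1, mu2))
```

Cutting on pair-level quantities like these, rather than on single-object thresholds alone, is what gives the rate reduction without efficiency loss described in the abstract.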

  9. The Run-2 ATLAS Trigger System: Design, Performance and Plan

    CERN Document Server

    zur Nedden, Martin; The ATLAS collaboration

    2016-01-01

    In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. The ATLAS experiment at the Large Hadron Collider (LHC) uses a trigger system that consists of a hardware-based Level-1 (L1) trigger and a software-based high-level trigger (HLT), reducing the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of about 1000 Hz. The ATLAS trigger successfully collected collision data during the first run of the LHC (Run-1) between 2009 and 2013, at centre-of-mass energies between 900 GeV and 8 TeV. In the second run of the LHC (Run-2), starting in 2015, the LHC operates at a centre-of-mass energy of 13 TeV and provides a higher luminosity of collisions. The number of collisions occurring in the same bunch crossing also increases. The ATLAS trigger system has to cope with these challenges while maintaining or even improving the efficiency to select relevant physics processes. In this talk, first we will review the ATLAS trigger ...
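The rejection factors implied by the rates quoted above can be made explicit with simple arithmetic. The 100 kHz intermediate Level-1 output rate used here is the typical ATLAS Run-2 figure (it is quoted elsewhere in this collection for the Level-1 trigger), while the 40 MHz input and ~1 kHz recording rates come from this abstract.

```python
# Back-of-the-envelope rejection factors for a two-level trigger chain.
bc_rate = 40e6      # LHC design bunch-crossing rate, Hz (from the abstract)
l1_rate = 100e3     # typical ATLAS Level-1 output rate, Hz
hlt_rate = 1e3      # average recording rate, Hz (from the abstract)

l1_rejection = bc_rate / l1_rate      # hardware stage: 400x
hlt_rejection = l1_rate / hlt_rate    # software stage: 100x
total_rejection = bc_rate / hlt_rate  # overall: 40000x
print(l1_rejection, hlt_rejection, total_rejection)
```

In other words, only one bunch crossing in forty thousand is recorded, which is why the efficiency for the interesting physics processes must be protected so carefully at every stage.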

  10. A criterion of the performance of thermometric systems of high metrological reliability

    International Nuclear Information System (INIS)

    Sal'nikov, N.L.; Filimonov, E.V.

    1995-01-01

    Monitoring temperature regimes is an important part of ensuring the operational safety of a nuclear power plant. Therefore, high standards are imposed upon the reliability of the primary information on the heat field of the object obtained from different sensors, and it is urgent to develop methods of evaluating the metrological reliability of these sensors. The main sources of thermometric information at nuclear power plants are contact temperature sensors, the most widely used of these being thermoelectric converters (TEC) and thermal resistance converters (TRC).

  11. A hardware fast tracker for the ATLAS trigger

    Science.gov (United States)

    Asbah, Nedaa

    2016-09-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch-crossing rate of 40 MHz to about 1 kHz at the design luminosity of 10^34 cm^-2 s^-1. After a successful period of data taking from 2010 to early 2013, the LHC has restarted with much higher instantaneous luminosity. This will increase the load on the High Level Trigger system, the second stage of the selection, based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, for every Level-1 accepted event (100 kHz) and within 100 microseconds, full tracking information for tracks with momentum as low as 1 GeV. By providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in the precise detection of primary and secondary vertices to ensure robust selections and improve the trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs, and high-speed communication links.

  12. Muon Trigger for Mobile Phones

    Science.gov (United States)

    Borisyak, M.; Usvyatsov, M.; Mulhearn, M.; Shimmin, C.; Ustyuzhanin, A.

    2017-10-01

    The CRAYFIS experiment proposes to use privately owned mobile phones as a ground detector array for Ultra High Energy Cosmic Rays. Upon interacting with Earth's atmosphere, these events produce extensive particle showers which can be detected by cameras on mobile phones. A typical shower contains minimally-ionizing particles such as muons. As these particles interact with CMOS image sensors, they may leave tracks of faintly-activated pixels that are sometimes hard to distinguish from random detector noise. Triggers that rely on the presence of very bright pixels within an image frame are not efficient in this case. We present a trigger algorithm based on Convolutional Neural Networks which selects images containing such tracks and is evaluated in a lazy manner: the response of each successive layer is computed only if the activation of the current layer satisfies a continuation criterion. The use of neural networks increases the sensitivity considerably compared with image thresholding, while the lazy evaluation allows the trigger to execute under the limited computational power of mobile phones.
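The lazy-evaluation scheme described above can be sketched as a chain of layers with an early exit: each successive layer runs only if the previous layer's activation passes a continuation criterion, so obvious noise frames are rejected cheaply. This pure-Python toy is not the CRAYFIS network; the layer functions and thresholds are invented to show the control flow only.

```python
# Toy lazy trigger: evaluate layers in sequence, stopping as soon as a
# layer's activation fails its continuation criterion. Layers and thresholds
# are made up; a real implementation would use convolutional layers.

def make_layer(weight, threshold):
    def layer(x):
        y = [max(0.0, weight * v) for v in x]   # toy ReLU-style "layer"
        keep_going = max(y) > threshold         # continuation criterion
        return y, keep_going
    return layer

LAYERS = [make_layer(1.0, 0.5), make_layer(2.0, 2.0), make_layer(1.5, 9.0)]

def lazy_trigger(frame):
    """Return (accepted, layers_evaluated)."""
    x = frame
    for i, layer in enumerate(LAYERS, start=1):
        x, keep_going = layer(x)
        if not keep_going:
            return False, i   # early exit: cheap rejection of noise frames
    return True, len(LAYERS)

print(lazy_trigger([0.1, 0.2, 0.3]))   # noise-like frame, rejected after layer 1
print(lazy_trigger([0.9, 4.0, 0.2]))   # track-like frame, all layers evaluated
```

Since most camera frames contain only noise, the average cost per frame approaches the cost of the first layer alone, which is what makes the approach viable on a phone's limited compute budget.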

  13. ATLAS Level-1 Topological Trigger

    CERN Document Server

    Zheng, Daniel; The ATLAS collaboration

    2018-01-01

    The ATLAS experiment has introduced and recently commissioned a completely new hardware sub-system of its first-level trigger: the topological processor (L1Topo). L1Topo consists of two AdvancedTCA blades mounting state-of-the-art FPGA processors, providing high input bandwidth (up to 4 Gb/s) and low-latency data processing (200 ns). L1Topo is able to select collision events by applying kinematic and topological requirements to candidate objects (energy clusters, jets, and muons) measured by the calorimeters and muon sub-detectors. Results from data recorded using the L1Topo trigger will be presented. These results demonstrate a significantly improved background event rejection, allowing for a rate reduction without efficiency loss. This improvement has been shown for several physics processes leading to low-pT leptons, including H->tau tau and J/Psi->mu mu. In addition to describing the L1Topo trigger system, we will discuss the use of an accurate L1Topo simulation as a powerful tool to validate and optimize...

  14. LHCb detector and trigger performance in Run II

    Science.gov (United States)

    Dordei, Francesca

    2017-12-01

    The LHCb detector is a forward spectrometer at the LHC, designed to perform high-precision studies of b- and c-hadrons. In Run II of the LHC, a new scheme for the LHCb software trigger splits the triggering of events into two stages, making it possible to perform the alignment and calibration in real time. In the novel detector alignment and calibration strategy for Run II, data collected at the start of the fill are processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for each run. This allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The larger timing budget available in the trigger allows the same track reconstruction to be performed online and offline. This enables LHCb to achieve the best reconstruction performance already in the trigger and allows physics analyses to be performed directly on the data produced by the trigger reconstruction. The novel real-time processing strategy at LHCb is discussed from both the technical and operational points of view. The overall performance of the LHCb detector on Run II data is presented as well.

  15. Schmitt-Trigger-based Recycling Sensor and Robust and High-Quality PUFs for Counterfeit IC Detection

    OpenAIRE

    Lin, Cheng-Wei; Jang, Jae-Won; Ghosh, Swaroop

    2015-01-01

    We propose a Schmitt-Trigger (ST) based recycling sensor that is tailored to amplify aging mechanisms and detect fine-grained recycling (minutes to seconds). We exploit the susceptibility of the ST to process variations to realize a high-quality arbiter PUF. Conventional SRAM PUFs suffer from environmental-fluctuation-induced bit flipping. We propose an 8T SRAM PUF with a back-to-back PMOS latch to improve robustness by 4X. We also propose a low-power 7T SRAM with an embedded Magnetic Tunnel Junction (...

  16. Level-1 trigger rate from beam halo muons in the end-cap

    CERN Document Server

    Robins, S

    1998-01-01

    Previous detectors at $p$-$\\bar{p}$ machines have experienced problems with high muon trigger rates in the forward region due to muons produced in interactions between the beam and the machine. These `beam halo' muons typically have a very small angle to the beam direction, and are dominated by muons of several GeV energy at low radius relative to the beam line. The response of the ATLAS end-cap muon trigger to them has been investigated using a complete simulation of both the LHC machine components and the ATLAS detector and trigger. It is seen that the total flux of such muons in the end-cap trigger counters is $\\sim$ 60 kHz in high luminosity LHC running, and the acceptance of the Level-1 end-cap muon trigger to these particles is $\\sim$1\\%. The overall Level-1 trigger rate from such muons will be small compared to rates from the products of the $p$-$p$ collision. The total rates from low- and high-\\pt triggers at 6 and 20 GeV are 250 and 16 Hz respectively. Whilst these rates are negligible in co...

  17. Minimum Bias Trigger in ATLAS

    International Nuclear Information System (INIS)

    Kwee, Regina

    2010-01-01

    Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp collisions to perform first measurements of charged particle densities. These measurements will help to constrain various models describing phenomenologically soft parton interactions. Understanding the trigger efficiencies for different event types is therefore crucial to minimize any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware-based first trigger level situated in the forward regions with 2.2 < |η| < 3.8 has proven to select pp collisions very efficiently, the Inner-Detector-based minimum bias trigger uses a random seed on filled bunches and the central tracking detectors for the event selection. Both triggers were essential for the analysis of kinematic spectra of charged particles. Their performance and trigger efficiency measurements, as well as studies on possible bias sources, will be presented. We also highlight the advantage of these triggers for particle correlation analyses. (author)

  18. Causality and headache triggers

    Science.gov (United States)

    Turner, Dana P.; Smitherman, Todd A.; Martin, Vincent T.; Penzien, Donald B.; Houle, Timothy T.

    2013-01-01

    Objective The objective of this study was to explore the conditions necessary to assign causal status to headache triggers. Background The term “headache trigger” is commonly used to label any stimulus that is assumed to cause headaches. However, the assumptions required for determining whether a given stimulus in fact has a causal-type relationship in eliciting headaches have not been explicated. Methods Rubin’s Causal Model is synthesized and applied to the context of headache causes. From this application, the conditions necessary to infer that one event (trigger) causes another (headache) are outlined using basic assumptions and examples from the relevant literature. Results Although many conditions must be satisfied for a causal attribution, three basic assumptions are identified for determining causality in headache triggers: 1) constancy of the sufferer; 2) constancy of the trigger effect; and 3) constancy of the trigger presentation. A valid evaluation of a potential trigger’s effect can only be undertaken once these three basic assumptions are satisfied during formal or informal studies of headache triggers. Conclusions Evaluating these assumptions is extremely difficult or infeasible in clinical practice, and satisfying them during natural experimentation is unlikely. Researchers, practitioners, and headache sufferers are encouraged to avoid natural experimentation to determine the causal effects of headache triggers. Instead, formal experimental designs or retrospective diary studies using advanced statistical modeling techniques provide the best approaches to satisfy the required assumptions and inform causal statements about headache triggers. PMID:23534872

  19. Upgrades to the ATLAS trigger system   

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00221618; The ATLAS collaboration

    2017-01-01

    In coming years the LHC is expected to undergo upgrades to increase both the energy of proton-proton collisions and the instantaneous luminosity. In order to cope with these more challenging LHC conditions, upgrades of the ATLAS trigger system will be required. This talk will focus on some of the key aspects of these upgrades. Firstly, the upgrade period between 2019-2021 will see an increase in instantaneous luminosity to $3\\times10^{34} \\rm{cm^{-2}s^{-1}}$. Upgrades to the Level-1 trigger system during this time will include improvements for both the muon and calorimeter triggers: the upgrade of the first-level Endcap Muon trigger, the calorimeter trigger electronics, and the addition of new calorimeter feature-extractor hardware, such as the Global Feature Extractor (gFEX). An overview will be given of the design and development status of the aforementioned systems, along with the latest testing and validation results. By 2026, the High Luminosity LHC will be able to deliver 14 TeV collisions wit...

  20. The D0 calorimeter trigger

    International Nuclear Information System (INIS)

    Guida, J.

    1992-12-01

    The D0 calorimeter trigger system consists of several levels that make physics-motivated trigger decisions. The Level-1 trigger uses hardware techniques to reduce the trigger rate from ∼100 kHz to 200 Hz. It forms sums of electromagnetic and hadronic energy, globally and in towers, and also computes the missing transverse energy; minimum-energy thresholds on these sums determine whether the event passes. The Level-2 trigger is a set of software filters operating in a parallel-processing MicroVAX farm, which further reduces the trigger rate to a few Hertz. These filters reject events that lack electron candidates, jet candidates, or missing transverse energy. The performance of these triggers during the early running of the D0 detector will also be discussed
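    The Level-1 logic described above, tower energy sums plus a missing transverse energy computed from the tower vector sum, each compared against a minimum, can be sketched as below. The thresholds, tower format, and function name are illustrative assumptions, not the real D0 values:

```python
import math

# Toy sketch of a Level-1 calorimeter decision: sum EM and hadronic
# transverse energy over trigger towers, derive missing ET from the
# vector sum, and accept the event if any sum exceeds its threshold.


def level1_decision(towers, em_min=10.0, had_min=15.0, met_min=20.0):
    """towers: list of (et_em, et_had, phi) tuples, one per trigger tower;
    ET in GeV, phi in radians. Thresholds are hypothetical."""
    em_sum = sum(t[0] for t in towers)
    had_sum = sum(t[1] for t in towers)
    # Missing ET is the magnitude of the (negated) vector sum of tower ET.
    px = sum((t[0] + t[1]) * math.cos(t[2]) for t in towers)
    py = sum((t[0] + t[1]) * math.sin(t[2]) for t in towers)
    met = math.hypot(px, py)
    return em_sum > em_min or had_sum > had_min or met > met_min


# A single energetic EM tower passes on the EM sum alone:
assert level1_decision([(12.0, 3.0, 0.0)]) is True
```

    A hardware Level-1 would evaluate these sums in parallel adder trees within a fixed latency; the software filters at Level-2 then apply the finer electron/jet candidate requirements the abstract mentions.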