WorldWideScience

Sample records for highly reliable trigger

  1. Reliable on-line storage in the ALICE High-Level Trigger

    Energy Technology Data Exchange (ETDEWEB)

    Kalcher, Sebastian; Lindenstruth, Volker [Kirchhoff Institute of Physics, University of Heidelberg (Germany)

    2009-07-01

    The on-line disk capacity within large computing clusters such as the one used in the ALICE High-Level Trigger (HLT) often goes unused due to the inherent unreliability of the disks involved. With currently available hard-drive capacities, the total on-line capacity can be significant compared to the storage requirements of present high-energy physics experiments. In this talk we report on ClusterRAID, a reliable, distributed mass storage system that makes it possible to harness the (often unused) disk capacities of large cluster installations. The key paradigm of this system is to transform the local hard drive into a reliable device. It provides adjustable fault tolerance by utilizing sophisticated error-correcting codes. To reduce the cost of coding and decoding operations, the use of modern graphics processing units as co-processors has been investigated. The utilization of low-overhead, high-performance communication networks has also been examined. A prototype setup of the system exists within the HLT with 90 TB gross capacity.
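
    The core idea of trading redundancy for reliability can be illustrated with the simplest error-correcting code, a single XOR parity block. This is only a sketch of the principle; ClusterRAID itself uses more sophisticated, adjustable codes, and the names below are illustrative.

```python
# Sketch: one XOR parity block protects a stripe of data blocks
# against the loss of any single block (RAID-style redundancy).
# ClusterRAID uses stronger error-correcting codes; this is the
# minimal illustration of the same principle.

def make_parity(blocks):
    """XOR all data blocks (equal length) into one parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct(surviving_blocks, parity):
    """Rebuild the single missing block from the survivors and parity."""
    missing = bytearray(parity)
    for block in surviving_blocks:
        for i, byte in enumerate(block):
            missing[i] ^= byte
    return bytes(missing)

data = [b"node0dat", b"node1dat", b"node2dat"]   # blocks on three nodes
parity = make_parity(data)
rebuilt = reconstruct([data[0], data[2]], parity)  # node 1's disk failed
assert rebuilt == b"node1dat"
```

    Tolerating more than one simultaneous failure requires codes such as Reed-Solomon, which generalize this construction to multiple parity blocks; that generalization is what makes the fault tolerance adjustable.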

  2. Efficient, reliable and fast high-level triggering using a bonsai boosted decision tree

    International Nuclear Information System (INIS)

    Gligorov, V V; Williams, M

    2013-01-01

    High-level triggering is a vital component of many modern particle physics experiments. This paper describes a modification to the standard boosted decision tree (BDT) classifier, the so-called bonsai BDT, that has the following important properties: it is more efficient than traditional cut-based approaches; it is robust against detector instabilities; and it is very fast. Thus, it is fit for purpose for the online running conditions faced by any large-scale data acquisition system.
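
    The speed of the bonsai BDT comes from discretizing ("binning") the input variables, so that the classifier response can be precomputed for every bin combination and evaluated online with a single table lookup. A minimal sketch of that idea, with a hypothetical stand-in for the trained BDT and invented variable names (pt, ip):

```python
# Hypothetical stand-in for a trained BDT response; in a real trigger
# this would be the full ensemble of decision trees.
def bdt_response(pt, ip):
    return 1.0 if (pt >= 2.0 and ip >= 0.1) else 0.0

# "Bonsai" step: discretize each input variable into a few bins.
PT_EDGES = [0.5, 1.0, 2.0, 5.0]
IP_EDGES = [0.05, 0.1, 0.5]

def bin_index(x, edges):
    return sum(x >= e for e in edges)

def representatives(edges):
    # one representative value inside each bin
    return [edges[0] / 2] + list(edges)

# Precompute the response once for every bin combination ...
TABLE = {
    (i, j): bdt_response(pt, ip)
    for i, pt in enumerate(representatives(PT_EDGES))
    for j, ip in enumerate(representatives(IP_EDGES))
}

# ... so the online evaluation is a single, fast, stable lookup.
def fast_trigger(pt, ip):
    return TABLE[(bin_index(pt, PT_EDGES), bin_index(ip, IP_EDGES))]

assert fast_trigger(3.0, 0.2) == 1.0   # passes the binned selection
assert fast_trigger(1.0, 0.2) == 0.0   # fails the pt requirement
```

    Because the lookup table is fixed once the bin edges are chosen, small shifts in the inputs within a bin cannot change the decision, which is one way to understand the robustness against detector instabilities.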

  3. Reliability model analysis and primary experimental evaluation of laser triggered pulse trigger

    International Nuclear Information System (INIS)

    Chen Debiao; Yang Xinglin; Li Yuan; Li Jin

    2012-01-01

    A high-performance pulse trigger can enhance the performance and stability of the PPS. It is necessary to evaluate the reliability of the LTGS pulse trigger, so we establish a reliability analysis model of this pulse trigger based on the CARMES software; the reliability evaluation accords with the statistical results. (authors)

  4. Study on application of a high-speed trigger-type SFCL (TSFCL) for interconnection of power systems with different reliabilities

    International Nuclear Information System (INIS)

    Kim, Hye Ji; Yoon, Yong Tae

    2016-01-01

    Highlights: • Application of the TSFCL to interconnect systems with different reliabilities is proposed. • The TSFCL protects a grid by preventing detrimental effects from being delivered through the interconnection line. • A high-speed TSFCL with high impedance for transmission systems needs to be developed. - Abstract: Interconnection of power systems is one effective way to improve power supply reliability. However, differences in the reliability of the individual power systems pose a major obstacle to stable interconnection, because after interconnection a high-reliability system is affected by frequent faults on the low-reliability side. Several power system interconnection methods, such as the back-to-back method and the installation of either transformers or series reactors, have been investigated to counteract the damage caused by faults in neighboring systems. However, these methods are uneconomical and require complex operational management plans. In this work, a high-speed trigger-type superconducting fault current limiter (TSFCL) with large impedance is proposed as a solution for maintaining reliability and power quality when a high-reliability power system is interconnected with a low-reliability one. Through analysis of the reliability index for numerical examples obtained from a PSCAD/EMTDC simulator, a high-speed TSFCL with large impedance is confirmed to be effective for the interconnection of power systems with different reliabilities.

  5. ALICE High Level Trigger

    CERN Multimedia

    Alt, T

    2013-01-01

    The ALICE High Level Trigger (HLT) is a computing farm designed and built for the real-time, online processing of the raw data produced by the ALICE detectors. Events are fully reconstructed from the raw data, analyzed and compressed. The analysis summary, together with the compressed data and a trigger decision, is sent to the DAQ. In addition, the reconstruction of the events allows for on-line monitoring of physical observables, and this information is provided to the Data Quality Monitor (DQM). The HLT can process event rates of up to 2 kHz for proton-proton and 200 Hz for central Pb-Pb collisions.

  6. TRIGGER

    CERN Multimedia

    W. Smith

    2012-01-01

      Level-1 Trigger The Level-1 Trigger group is ready to deploy improvements to the L1 Trigger algorithms for 2012. These include new high-PT patterns for the RPC endcap, an improved CSC PT assignment, a new PT-matching algorithm for the Global Muon Trigger, and new calibrations for ECAL, HCAL, and the Regional Calorimeter Trigger. These should improve the efficiency, rate, and stability of the L1 Trigger. The L1 Trigger group also is migrating the online systems to SLC5. To make the data transfer from the Global Calorimeter Trigger to the Global Trigger more reliable and also to allow checking the data integrity online, a new optical link system has been developed by the GCT and GT groups and successfully tested at the CMS electronics integration facility in building 904. This new system is now undergoing further tests at Point 5 before being deployed for data-taking this year. New L1 trigger menus have recently been studied and proposed by Emmanuelle Perez and the L1 Detector Performance Group...

  7. Reliability of physical examination for diagnosis of myofascial trigger points: a systematic review of the literature.

    Science.gov (United States)

    Lucas, Nicholas; Macaskill, Petra; Irwig, Les; Moran, Robert; Bogduk, Nikolai

    2009-01-01

    Trigger points are promoted as an important cause of musculoskeletal pain. There is no accepted reference standard for the diagnosis of trigger points, and data on the reliability of physical examination for trigger points are conflicting. To systematically review the literature on the reliability of physical examination for the diagnosis of trigger points. MEDLINE, EMBASE, and other sources were searched for articles reporting the reliability of physical examination for trigger points. Included studies were evaluated for their quality and applicability, and reliability estimates were extracted and reported. Nine studies were eligible for inclusion. None satisfied all quality and applicability criteria. No study specifically reported reliability for the identification of the location of active trigger points in the muscles of symptomatic participants. Reliability estimates varied widely for each diagnostic sign, for each muscle, and across each study. Reliability estimates were generally higher for subjective signs such as tenderness (kappa range, 0.22 to 1.0) and pain reproduction (kappa range, 0.57 to 1.00), and lower for objective signs such as the taut band (kappa range, −0.08 to 0.75) and local twitch response (kappa range, −0.05 to 0.57). No study to date has reported the reliability of trigger point diagnosis according to the currently proposed criteria. On the basis of the limited number of studies available, and significant problems with their design, reporting, statistical integrity, and clinical applicability, physical examination cannot currently be recommended as a reliable test for the diagnosis of trigger points. The reliability of trigger point diagnosis needs to be further investigated with studies of high quality that use current diagnostic criteria in clinically relevant patients.
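
    The kappa values quoted above correct raw agreement for the agreement expected by chance alone. A minimal sketch of Cohen's kappa for two examiners' binary findings (the data here are hypothetical, not taken from the review):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the chance agreement from the marginal rates."""
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    p_e = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n)
        for c in categories
    )
    return (p_o - p_e) / (1 - p_e)

# 1 = trigger point judged present, 0 = absent, at 8 hypothetical sites
examiner_a = [1, 1, 1, 0, 0, 0, 1, 0]
examiner_b = [1, 1, 0, 0, 0, 0, 1, 1]
kappa = cohens_kappa(examiner_a, examiner_b)  # 6/8 raw agreement -> 0.5
assert abs(kappa - 0.5) < 1e-9
```

    A negative kappa, as in the taut-band range above, means the examiners agreed less often than chance would predict.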

  8. Triggering at high luminosity: fake triggers from pile-up

    International Nuclear Information System (INIS)

    Johnson, R.

    1983-01-01

    Triggers based on a cut in transverse momentum (p_T) have proved to be useful in high energy physics, both because they indicate that a hard constituent scattering has occurred and because they can be made quickly enough to gate electronics. These triggers will continue to be useful at high luminosities if overlapping events do not cause an excessive number of fake triggers. In this paper, I determine whether this is indeed a problem at high-luminosity machines

  9. TRIGGER

    CERN Multimedia

    R. Arcidiacono

    2013-01-01

      In 2013 the Trigger Studies Group (TSG) has been restructured in three sub-groups: STEAM, for the development of new HLT menus and monitoring their performance; STORM, for the development of HLT tools, code and actual configurations; and FOG, responsible for the online operations of the High Level Trigger. The Strategy for Trigger Evolution And Monitoring (STEAM) group is responsible for Trigger Menu development, path timing, trigger performance studies coordination, HLT offline DQM as well as HLT release, menu and conditions validation – in collaboration and with the technical support of the PdmV group. Since the end of proton-proton data taking, the group has started preparing for 2015 data taking, with collisions at 13 TeV and 25 ns bunch spacing. The reliability of the extrapolation to higher energy is being evaluated comparing the trigger rates on 7 and 8 TeV Monte Carlo samples with the data taken in the past two years. The effect of 25 ns bunch spacing is being studied on the d...

  10. Triggers for a high sensitivity charm experiment

    International Nuclear Information System (INIS)

    Christian, D.C.

    1994-07-01

    Any future charm experiment clearly should implement an E_T trigger and a μ trigger. In order to reach the 10^8 reconstructed-charm level for hadronic final states, a high-quality vertex trigger will almost certainly also be necessary. The best hope for the development of an offline-quality vertex trigger lies in further development of the ideas of data-driven processing pioneered by the Nevis/U. Mass. group

  11. TRIGGER

    CERN Multimedia

    Roberta Arcidiacono

    2013-01-01

    Trigger Studies Group (TSG) The Trigger Studies Group has just concluded its third 2013 workshop, where all POGs presented the improvements to the physics object reconstruction, and all PAGs have shown their plans for Trigger development aimed at the 2015 High Level Trigger (HLT) menu. The Strategy for Trigger Evolution And Monitoring (STEAM) group is responsible for Trigger menu development, path timing, Trigger performance studies coordination, HLT offline DQM as well as HLT release, menu and conditions validation – this last task in collaboration with PdmV (Physics Data and Monte Carlo Validation group). In the last months the group has delivered several HLT rate estimates and comparisons, using the available data and Monte Carlo samples. The studies were presented at the Trigger workshops in September and December, and STEAM has contacted POGs and PAGs to understand the origin of the discrepancies observed between 8 TeV data and Monte Carlo simulations. The most recent results show what the...

  12. TRIGGER

    CERN Multimedia

    W. Smith

    Level-1 Trigger Hardware and Software The trigger system has been constantly in use during cosmic and commissioning data-taking periods. During CRAFT running it delivered 300 million muon and calorimeter triggers to CMS. It has performed stably and reliably. During the abort gaps it has also provided laser and other calibration triggers. Timing issues, namely synchronization and latency, have been solved. About half of the Trigger Concentrator Cards for the ECAL Endcap (TCC-EE) are installed, and the firmware is being worked on. The production of the other half has started. The HCAL Trigger and Readout (HTR) card firmware has been updated, and new features such as fast parallel zero-suppression have been included. Repairs of drift tube (DT) trigger mini-crates, optical links and receivers of sector collectors are under way and have already been completed on YB0. New firmware for the optical receivers of the theta links to the drift tube track finder is being installed. In parallel, tests with new eta track finde...

  13. TRIGGER

    CERN Multimedia

    by Wesley Smith

    2010-01-01

    Level-1 Trigger Hardware and Software The overall status of the L1 trigger has been excellent and the running efficiency has been high during physics fills. The timing is good to about 1%. The fine-tuning of the time synchronization of muon triggers is ongoing and will be completed after more than 10 nb⁻¹ of data have been recorded. The CSC trigger primitive and RPC trigger timing have been refined. A new configuration for the CSC Track Finder featured modified beam halo cuts and improved ghost cancellation logic. More direct control was provided for the DT opto-receivers. New RPC Cosmic Trigger (RBC/TTU) trigger algorithms were enabled for collision runs. There is further work planned during the next technical stop to investigate a few of the links from the ECAL to the Regional Calorimeter Trigger (RCT). New firmware and a new configuration to handle trigger rate spikes in the ECAL barrel are also being tested. A board newly developed by the tracker group (ReTRI) has been installed and activated to block re...

  14. Inter- and Intraexaminer Reliability in Identifying and Classifying Myofascial Trigger Points in Shoulder Muscles.

    Science.gov (United States)

    Nascimento, José Diego Sales do; Alburquerque-Sendín, Francisco; Vigolvino, Lorena Passos; Oliveira, Wandemberg Fortunato de; Sousa, Catarina de Oliveira

    2018-01-01

    To determine the inter- and intraexaminer reliability of examiners without clinical experience in identifying and classifying myofascial trigger points (MTPs) in the shoulder muscles of subjects asymptomatic and symptomatic for unilateral subacromial impingement syndrome (SIS). Within-day inter- and intraexaminer reliability study. Physical therapy department of a university. Fifty-two subjects participated in the study, 26 symptomatic and 26 asymptomatic for unilateral SIS. Two examiners without experience in assessing MTPs, independent of each other and blinded to the clinical conditions of the subjects, assessed bilaterally the presence of MTPs (present or absent) in 6 shoulder muscles and classified them (latent or active) on the affected side of the symptomatic group. Each examiner performed the same assessment twice in the same day. Reliability was calculated through percentage agreement, prevalence- and bias-adjusted kappa (PABAK) statistics, and weighted kappa. Intraexaminer reliability in identifying MTPs for the symptomatic and asymptomatic groups was moderate to perfect (PABAK, .46-1 and .60-1, respectively). Interexaminer reliability was between moderate and almost perfect in the 2 groups (PABAK, .46-.92), except for the muscles of the symptomatic group, which were below these values. With respect to MTP classification, intraexaminer reliability was moderate to high for most muscles, but interexaminer reliability was moderate for only 1 muscle (weighted κ=.45), and between weak and fair for the rest (weighted κ=.06-.31). Intraexaminer reliability is acceptable in clinical practice to identify and classify MTPs. However, interexaminer reliability proved acceptable only for identifying MTPs, with the symptomatic side exhibiting lower reliability values. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
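
    Unlike plain kappa, PABAK depends only on the observed agreement, which keeps it stable when a finding is very common or very rare. For two raters and binary findings it reduces to a one-line formula; a sketch with hypothetical data:

```python
def pabak(ratings_a, ratings_b):
    """Prevalence- and bias-adjusted kappa for two binary ratings:
    PABAK = 2 * p_o - 1, with p_o the observed proportion of agreement."""
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    return 2 * p_o - 1

# Two examiners agreeing on 9 of 10 hypothetical muscle sites
rater_1 = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
rater_2 = [1, 1, 1, 1, 1, 0, 0, 0, 0, 1]
assert abs(pabak(rater_1, rater_2) - 0.8) < 1e-9
```

    On the usual benchmarks, a PABAK of .46 is moderate and values near 1 approach perfect agreement, matching the ranges reported above.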

  15. TRIGGER

    CERN Multimedia

    W. Smith from contributions of C. Leonidopoulos

    2010-01-01

    Level-1 Trigger Hardware and Software Since nearly all of the Level-1 (L1) Trigger hardware at Point 5 has been commissioned, activities during the past months focused on the fine-tuning of synchronization, particularly for the ECAL and the CSC systems, on firmware upgrades and on improving trigger operation and monitoring. Periodic resynchronizations or hard resets and a shortened luminosity section interval of 23 seconds were implemented. For the DT sector collectors, an automatic power-off was installed in case of high temperatures, and the monitoring capabilities of the opto-receivers and the mini-crates were enhanced. The DTTF and the CSCTF now have improved memory lookup tables. The HCAL trigger primitive logic implemented a new algorithm providing better stability of the energy measurement in the presence of any phase misalignment. For the Global Calorimeter Trigger, additional Source Cards have been manufactured and tested. Testing of the new tau, missing ET and missing HT algorithms is underw...

  16. TRIGGER

    CERN Multimedia

    R. Carlin with contributions from D. Acosta

    2012-01-01

    Level-1 Trigger Data-taking continues at cruising speed, with high availability of all components of the Level-1 trigger. We have operated the trigger up to a luminosity of 7.6E33, where we approached 100 kHz using the 7E33 prescale column.  Recently, the pause without triggers in case of an automatic "RESYNC" signal (the "settle" and "recover" time) was reduced in order to minimise the overall dead-time. This may become very important when the LHC comes back with higher energy and luminosity after LS1. We are also preparing for data-taking in the proton-lead run in early 2013. The CASTOR detector will make its comeback into CMS and triggering capabilities are being prepared for this. Steps to be taken include improved cooperation with the TOTEM trigger system and using the LHC clock during the injection and ramp phases of LHC. Studies are being finalised that will have a bearing on the Trigger Technical Design Report (TDR), which is to be rea...

  17. A high-voltage triggered pseudospark discharge experiment

    International Nuclear Information System (INIS)

    Ramaswamy, K.; Destler, W.W.; Rodgers, J.

    1996-01-01

    The design and execution of a pulsed high-voltage (350–400 keV) triggered pseudospark discharge experiment is reported. Experimental studies were carried out to obtain an optimal design for stable and reliable pseudospark operation in a high-voltage regime (≳350 kV). Experiments were performed to determine the most suitable fill gas for electron-beam formation. The pseudospark discharge is initiated by a trigger mechanism involving a flashover between the trigger electrode and the hollow-cathode housing. Experimental results characterizing the electron-beam energy using the range-energy method are reported. Source-size imaging was carried out using an x-ray pinhole camera and a novel technique using Mylar as a witness plate. It was experimentally determined that strong pinching occurred later in time and was associated with the lower-energy electrons. © 1996 American Institute of Physics

  18. TRIGGER

    CERN Multimedia

    W. Smith, from contributions of D. Acosta

    2012-01-01

    The L1 Trigger group deployed several major improvements this year. Compared to 2011, the single-muon trigger rate has been reduced by a factor of 2 and the η coverage has been restored to 2.4, with high efficiency. During the current technical stop, a higher jet seed threshold will be applied in the Global Calorimeter Trigger in order to significantly reduce the strong pile-up dependence of the HT and multi-jet triggers. The currently deployed L1 menu, with the “6E33” prescales, has a total rate of less than 100 kHz and operates with detector readout dead time of less than 3% for luminosities up to 6.5 × 10³³ cm⁻²s⁻¹. Further prescale sets have been created for 7 and 8 × 10³³ cm⁻²s⁻¹ luminosities. The L1 DPG is evaluating the performance of the Trigger for upcoming conferences and publication. Progress on the Trigger upgrade was reviewed during the May Upgrade Week. We are investigating scenarios for stagin...

  19. Tracking at High Level Trigger in CMS

    CERN Document Server

    Tosi, Mia

    2016-01-01

    The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with detector readout, offline storage and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of the physics objects as well as in the identification of b-jets and ...
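
    The two numbers quoted above fix the minimum size of the HLT farm: by Little's law, the number of events in flight equals the input rate times the average processing time. A back-of-envelope sketch using the figures from the text:

```python
l1_rate_hz = 100_000   # nominal L1T output rate (100 kHz)
hlt_time_s = 0.200     # average HLT processing budget per event (200 ms)

# Little's law: events in flight = arrival rate x time in system,
# so at least this many cores must be busy simultaneously.
cores_needed = l1_rate_hz * hlt_time_s
assert cores_needed == 20_000.0
```

    This is why the per-event time budget is such a hard constraint: halving the algorithm complexity directly halves the required farm size.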

  20. The CMS High-Level Trigger

    International Nuclear Information System (INIS)

    Covarelli, R.

    2009-01-01

    At the startup of the LHC, the CMS data acquisition is expected to be able to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the 'High-Level Trigger' (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report HLT performances are shown for selections based on muons, electrons, photons, jets, missing transverse energy, τ leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported as well as relaxation criteria foreseen for a LHC startup instantaneous luminosity.

  1. The CMS High-Level Trigger

    CERN Document Server

    Covarelli, Roberto

    2009-01-01

    At the startup of the LHC, the CMS data acquisition is expected to be able to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the "High-Level Trigger" (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report HLT performances are shown for selections based on muons, electrons, photons, jets, missing transverse energy, tau leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported as well as relaxation criteria foreseen for a LHC startup instantaneous luminosity.

  2. The CMS High-Level Trigger

    Science.gov (United States)

    Covarelli, R.

    2009-12-01

    At the startup of the LHC, the CMS data acquisition is expected to be able to sustain an event readout rate of up to 100 kHz from the Level-1 trigger. These events will be read into a large processor farm which will run the "High-Level Trigger" (HLT) selection algorithms and will output a rate of about 150 Hz for permanent data storage. In this report HLT performances are shown for selections based on muons, electrons, photons, jets, missing transverse energy, τ leptons and b quarks: expected efficiencies, background rates and CPU time consumption are reported as well as relaxation criteria foreseen for a LHC startup instantaneous luminosity.

  3. TRIGGER

    CERN Multimedia

    W. Smith

    2011-01-01

    Level-1 Trigger Hardware and Software Overall the L1 trigger hardware has been running very smoothly during the last months of proton running. Modifications for the heavy-ion run have been made where necessary. The maximal design rate of 100 kHz can be sustained without problems. All L1 latencies have been rechecked. The recently installed Forward Scintillating Counters (FSC) are being used in the heavy ion run. The ZDC scintillators have been dismantled, but the calorimeter itself remains. We now send the L1 accept signal and other control signals to TOTEM. Trigger cables from TOTEM to CMS will be installed during the Christmas shutdown, so that the TOTEM data can be fully integrated within the CMS readout. New beam gas triggers have been developed, since the BSC-based trigger is no longer usable at high luminosities. In particular, a special BPTX signal is used after a quiet period with no collisions. There is an ongoing campaign to provide enough spare modules for the different subsystems. For example...

  4. TRIGGER

    CERN Multimedia

    J. Alimena

    2013-01-01

    Trigger Strategy Group The Strategy for Trigger Evolution And Monitoring (STEAM) group is responsible for the development of future High-Level Trigger menus, as well as their DQM and validation, in collaboration with, and with the technical support of, the PdmV group. Taking into account the beam energy and luminosity expected in 2015, a rough estimate of the trigger rates indicates a factor-four increase with respect to 2012 conditions. Assuming that a factor of two can be tolerated thanks to the increase in offline storage and processing capabilities, a toy menu has been developed using the new OpenHLT workflow to estimate the transverse energy/momentum thresholds that would halve the current trigger rates. The CPU time needed to run the HLT has been compared between data taken with 25 ns and 50 ns bunch spacing, for equivalent pile-up: no significant difference was observed in the global time-per-event distribution at the only available data point, corresponding to a pile-up of about 10 interactions. Using th...

  5. High-voltage high-current triggering vacuum switch

    International Nuclear Information System (INIS)

    Alferov, D.F.; Bunin, R.A.; Evsin, D.V.; Sidorov, V.A.

    2012-01-01

    Experimental investigations of the switching and breaking capacities of the new high-current triggered vacuum switch (TVS) were carried out at various parameters of the discharge current. It has been shown that the TVS can repeatedly switch currents from units up to ten kiloamperes with durations of up to ten milliseconds.

  6. Performance of the CMS High Level Trigger

    CERN Document Server

    Perrotta, Andrea

    2015-01-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012, and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved trac...

  7. The ALICE Dimuon Spectrometer High Level Trigger

    CERN Document Server

    Becker, B; Cicalo, Corrado; Das, Indranil; de Vaux, Gareth; Fearick, Roger; Lindenstruth, Volker; Marras, Davide; Sanyal, Abhijit; Siddhanta, Sabyasachi; Staley, Florent; Steinbeck, Timm; Szostak, Artur; Usai, Gianluca; Vilakazi, Zeblon

    2009-01-01

    The ALICE Dimuon Spectrometer High Level Trigger (dHLT) is an on-line processing stage whose primary function is to select interesting events that contain distinct physics signals from heavy resonance decays such as J/ψ and Υ particles, amidst unwanted background events. It forms part of the High Level Trigger of the ALICE experiment, whose goal is to reduce the large data rate of about 25 GB/s from the ALICE detectors by an order of magnitude, without losing interesting physics events. The dHLT has been implemented as a software trigger within a high-performance, fault-tolerant data transportation framework, which is run on a large cluster of commodity compute nodes. To reach the required processing speeds, the system is built as a concurrent system with a hierarchy of processing steps. The main algorithms perform partial event reconstruction, starting with hit reconstruction on the level of the raw data received from the spectrometer. Then a tracking algorithm finds track candidates from the recon...

  8. CMS High Level Trigger Timing Measurements

    International Nuclear Information System (INIS)

    Richardson, Clint

    2015-01-01

    The two-level trigger system employed by CMS consists of the Level 1 (L1) Trigger, which is implemented using custom-built electronics, and the High Level Trigger (HLT), a farm of commercial CPUs running a streamlined version of the offline CMS reconstruction software. The operational L1 output rate of 100 kHz, together with the number of CPUs in the HLT farm, imposes a fundamental constraint on the amount of time available for the HLT to process events. Exceeding this limit impacts the experiment's ability to collect data efficiently. Hence, there is a critical need to characterize the performance of the HLT farm as well as the algorithms run prior to start up in order to ensure optimal data taking. Additional complications arise from the fact that the HLT farm consists of multiple generations of hardware and there can be subtleties in machine performance. We present our methods of measuring the timing performance of the CMS HLT, including the challenges of making such measurements. Results for the performance of various Intel Xeon architectures from 2009-2014 and different data taking scenarios are also presented.

  9. Pulsed laser triggered high speed microfluidic switch

    Science.gov (United States)

    Wu, Ting-Hsiang; Gao, Lanyu; Chen, Yue; Wei, Kenneth; Chiou, Pei-Yu

    2008-10-01

    We report a high-speed microfluidic switch capable of achieving a switching time of 10 μs. The switching mechanism is realized by exciting dynamic vapor bubbles with focused laser pulses in a microfluidic polydimethylsiloxane (PDMS) channel. The bubble expansion deforms the elastic PDMS channel wall and squeezes the adjacent sample channel to control its fluid and particle flows as captured by the time-resolved imaging system. A switching of polystyrene microspheres in a Y-shaped channel has also been demonstrated. This ultrafast laser triggered switching mechanism has the potential to advance the sorting speed of state-of-the-art microscale fluorescence activated cell sorting devices.

  10. TRIGGER

    CERN Multimedia

    Wesley Smith

    Level-1 Trigger Hardware and Software The hardware of the trigger components has been mostly finished. The ECAL Endcap Trigger Concentrator Cards (TCC) are in production while Barrel TCC firmware has been upgraded, and the Trigger Primitives can now be stored by the Data Concentrator Card for readout by the DAQ. The Regional Calorimeter Trigger (RCT) system is complete, and the timing is being finalized. All 502 HCAL trigger links to RCT run without error. The HCAL muon trigger timing has been equalized with DT, RPC, CSC and ECAL. The hardware and firmware for the Global Calorimeter Trigger (GCT) jet triggers are being commissioned and data from these triggers is available for readout. The GCT energy sums from rings of trigger towers around the beam pipe have been changed to include two rings from both sides. The firmware for Drift Tube Track Finder, Barrel Sorter and Wedge Sorter has been upgraded, and the synchronization of the DT trigger is satisfactory. The CSC local trigger has operated flawlessly u...

  11. The CMS High Level Trigger System

    CERN Document Server

    Afaq, A; Bauer, G; Biery, K; Boyer, V; Branson, J; Brett, A; Cano, E; Carboni, A; Cheung, H; Ciganek, M; Cittolin, S; Dagenhart, W; Erhan, S; Gigi, D; Glege, F; Gómez-Reino, Robert; Gulmini, M; Gutiérrez-Mlot, E; Gutleber, J; Jacobs, C; Kim, J C; Klute, M; Kowalkowski, J; Lipeles, E; Lopez-Perez, Juan Antonio; Maron, G; Meijers, F; Meschi, E; Moser, R; Murray, S; Oh, A; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Pollet, L; Rácz, A; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Sexton-Kennedy, E; Sumorok, K; Suzuki, I; Tsirigkas, D; Varela, J

    2007-01-01

    The CMS Data Acquisition (DAQ) System relies on a purely software driven High Level Trigger (HLT) to reduce the full Level-1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the Filter Farm and the procedures to validate the filtering code within the DAQ environment are described.
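The "sequence of reconstruction and filtering modules" mentioned above can be sketched as a minimal filter chain; the module names, event fields and threshold are invented for illustration and are not CMS code:

```python
# Minimal sketch of an HLT-style filter sequence: each module either
# enriches the event or rejects it, and processing stops at the first
# rejection. Names and the threshold value are invented for illustration.
def unpack(event):
    event["energy"] = sum(event["raw_hits"])
    return True  # a pure reconstruction step never rejects

def energy_filter(event, threshold=50.0):
    return event["energy"] >= threshold

def run_hlt_path(event, modules):
    """Run modules in order; accept only if every filter passes."""
    for module in modules:
        if not module(event):
            return False
    return True

path = [unpack, energy_filter]
accepted = run_hlt_path({"raw_hits": [20.0, 35.0]}, path)  # 55.0 >= 50 -> True
rejected = run_hlt_path({"raw_hits": [10.0, 15.0]}, path)  # 25.0 < 50 -> False
print(accepted, rejected)  # prints: True False
```

Stopping at the first failed filter is what keeps the average processing time low: most events are rejected by cheap early modules, and expensive reconstruction runs only on the surviving fraction.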

  12. TRIGGER

    CERN Multimedia

    Wesley Smith

    Level-1 Trigger Hardware and Software The trigger synchronization procedures for running with cosmic muons and operating with the LHC were reviewed during the May electronics week. Firmware maintenance issues were also reviewed. Link tests between the new ECAL endcap trigger concentrator cards (TCC48) and the Regional Calorimeter Trigger have been performed. Firmware for the energy sum triggers and an upgraded tau trigger of the Global Calorimeter Triggers has been developed and is under test. The optical fiber receiver boards for the Track-Finder trigger theta links of the DT chambers are now all installed. The RPC trigger is being made more robust by additional chamber and cable shielding and also by firmware upgrades. For the CSCs, the front-end and trigger motherboard firmware have been updated. New RPC patterns and DT/CSC lookup tables taking into account phi asymmetries in the magnetic field configuration are under study. The motherboard for the new pipeline synchronizer of the Global Trigg...

  13. TRIGGER

    CERN Multimedia

    W. Smith

    At the March meeting, the CMS trigger group reported on progress in production, tests in the Electronics Integration Center (EIC) in Prevessin 904, progress on trigger installation in the underground counting room at point 5, USC55, the program of trigger pattern tests and vertical slice tests, and planning for the Global Runs starting this summer. The trigger group is engaged in the final stages of production testing, systems integration, and software and firmware development. Most systems are delivering final tested electronics to CERN. The installation in USC55 is underway and integration testing is in full swing. A program of orderly connection and checkout with subsystems and central systems has been developed. This program includes a series of vertical subsystem slice tests providing validation of a portion of each subsystem from front-end electronics through the trigger and DAQ to data captured and stored. After full checkout, trigger subsystems will then be operated in the CMS Global Runs. Continuous...

  14. TRIGGER

    CERN Multimedia

    Wesley Smith

    Level-1 Trigger Hardware and Software The production of the trigger hardware is now basically finished, and in time for the turn-on of the LHC. The last boards produced are the Trigger Concentrator Cards for the ECAL Endcaps (TCC-EE). After the recent installation of the four EE Dees, the TCC-EE prototypes were used for their commissioning. Production boards are arriving and are being tested continuously, with the last ones expected in November. The Regional Calorimeter Trigger hardware is fully integrated after installation of the last EE cables. Pattern tests from the HCAL up to the GCT have been performed successfully. The HCAL triggers are fully operational, including the connection of the HCAL-outer and forward-HCAL (HO/HF) technical triggers to the Global Trigger. The HCAL Trigger and Readout (HTR) board firmware has been updated to permit recording of the tower “feature bit” in the data. The Global Calorimeter Trigger hardware is installed, but some firmware developments are still n...

  15. Advanced Functionalities for Highly Reliable Optical Networks

    DEFF Research Database (Denmark)

    An, Yi

    This thesis covers two research topics concerning optical solutions for networks, e.g. avionic systems. One is to identify the applications for silicon photonic devices for cost-effective solutions in short-range optical networks. The other one is to realise advanced functionalities in order … to increase the availability of highly reliable optical networks. A cost-effective transmitter based on a directly modulated laser (DML) using a silicon micro-ring resonator (MRR) to enhance its modulation speed is proposed, analysed and experimentally demonstrated. A modulation speed enhancement from 10 Gbit … interconnects and network-on-chips. A novel concept of all-optical protection switching scheme is proposed, where fault detection and protection trigger are all implemented in the optical domain. This scheme can provide ultra-fast establishment of the protection path resulting in a minimum loss of data…

  16. TRIGGER

    CERN Multimedia

    W. Smith

    2010-01-01

    Level-1 Trigger Hardware and Software The Level-1 Trigger hardware has performed well during both the recent proton-proton and heavy ion running. Efforts were made to improve the visibility and handling of alarms and warnings. The tracker ReTRI boards that prevent fixed frequencies of Level-1 Triggers are now configured through the Trigger Supervisor. The Global Calorimeter Trigger (GCT) team has introduced a buffer cleanup procedure at stops and a reset of the QPLL during configuring to ensure recalibration in case of a switch from the LHC clock to the local clock. A device to test the cables between the Regional Calorimeter Trigger and the GCT has been manufactured. A wrong charge bit was fixed in the CSC Trigger. The ECAL group is improving crystal masking and spike suppression in the trigger primitives. New firmware for the Drift Tube Track Finder (DTTF) sorters was developed to improve fake track tagging and sorting. Zero suppression was implemented in the DT Sector Collector readout. The track finder b...

  17. TRIGGER

    CERN Multimedia

    Wesley Smith

    Trigger Hardware The status of the trigger components was presented during the September CMS Week and Annual Review and at the monthly trigger meetings in October and November. Procedures for cold and warm starts (e.g. refreshing of trigger parameters stored in registers) of the trigger subsystems have been studied. Reviews of parts of the Global Calorimeter Trigger (GCT) and the Global Trigger (GT) have taken place in October and November. The CERN group summarized the status of the Trigger Timing and Control (TTC) system. All TTC crates and boards are installed in the underground counting room, USC55. The central clock system will be upgraded in December (after the Global Run at the end of November, GREN) to the new RF2TTC LHC machine interface timing module. Migration of the subsystems' TTC PCs to SLC4/XDAQ 3.12 is being prepared. Work is ongoing to unify the access to the Local Timing Control (LTC) and TTC CMS interface module (TTCci) via SOAP (Simple Object Access Protocol, a lightweight XML-based messaging ...

  18. High energy physics experiment triggers and the trustworthiness of software

    International Nuclear Information System (INIS)

    Nash, T.

    1991-10-01

    For all the time and frustration that high energy physicists expend interacting with computers, it is surprising that more attention is not paid to the critical role computers play in the science. With large, expensive colliding beam experiments now dependent on complex programs working at startup, questions of reliability -- the trustworthiness of software -- need to be addressed. This issue is most acute in triggers, used to select data to record -- and data to discard -- in the real time environment of an experiment. High level triggers are built on codes that now exceed 2 million source lines -- and for the first time experiments are truly dependent on them. This dependency will increase at the accelerators planned for the new millennium (SSC and LHC), where cost and other pressures will reduce tolerance for first run problems, and the high luminosities will make this on-line data selection essential. A sense of this incipient crisis motivated the unusual juxtaposition of topics in these lectures. 37 refs., 1 fig

  19. TRIGGER

    CERN Multimedia

    Wesley Smith

    Level-1 Trigger Hardware and Software The final parts of the Level-1 trigger hardware are now being put in place. For the ECAL endcaps, more than half of the Trigger Concentrator Cards for the ECAL Endcap (TCC-EE) are now available at CERN, such that one complete endcap can be covered. The Global Trigger now correctly handles ECAL calibration sequences, without being influenced by backpressure. The Regional Calorimeter Trigger (RCT) hardware is complete and working in USC55. Intra-crate tests of all 18 RCT crates and the Global Calorimeter Trigger (GCT) are regularly taking place. Pattern tests have successfully captured data from HCAL through RCT to the GCT Source Cards. HB/HE trigger data are being compared with emulator results to track down the very few remaining hardware problems. The treatment of hot and dead cells, including their recording in the database, has been defined. For the GCT, excellent agreement between the emulator and data has been achieved for jets and HF ET sums. There is still som...

  20. Highly reliable TOFD UT Technique

    International Nuclear Information System (INIS)

    Acharya, G.D.; Trivedi, S.A.R.; Pai, K.B.

    2003-01-01

    The high performance of the time of flight diffraction technique (TOFD) with regard to the detection capabilities of weld defects such as crack, slag and lack of fusion has led to a rapidly increasing acceptance of the technique as a pre-service inspection tool. Since the early 1990s TOFD has been applied to several projects, where it replaced the commonly used radiographic testing. The use of TOFD led to major time savings during new build and replacement projects. At the same time the TOFD technique was used as a baseline inspection, which enables monitoring in the future for critical welds, but also provides documented evidence for life-time. The TOFD technique has the ability to detect and simultaneously size flaws of nearly any orientation within the weld and heat affected zone. TOFD is recognized as a reliable, proven technique for detection and sizing of defects and proven to be a time saver, resulting in shorter shutdown periods and construction project times. Thus even in cases where the inspection price of TOFD per weld is higher, in the end it will result in significantly lower overall costs and improved quality. This paper deals with reliability, economy, acceptance criteria and field experience. It also covers a comparative study of the radiography technique vs. TOFD. (Author)

  1. The ATLAS High-Level Calorimeter Trigger in Run-2

    CERN Document Server

    Wiglesworth, Craig; The ATLAS collaboration

    2018-01-01

    The ATLAS Experiment uses a two-level triggering system to identify and record collision events containing a wide variety of physics signatures. It reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of 1 kHz, whilst maintaining high efficiency for interesting collision events. It is composed of an initial hardware-based level-1 trigger followed by a software-based high-level trigger. A central component of the high-level trigger is the calorimeter trigger. This is responsible for processing data from the electromagnetic and hadronic calorimeters in order to identify electrons, photons, taus, jets and missing transverse energy. In this talk I will present the performance of the high-level calorimeter trigger in Run-2, noting the improvements that have been made in response to the challenges of operating at high luminosity.

  2. The ATLAS trigger: high-level trigger commissioning and operation during early data taking

    International Nuclear Information System (INIS)

    Goncalo, R

    2008-01-01

    The ATLAS experiment is one of the two general-purpose experiments due to start operation soon at the Large Hadron Collider (LHC). The LHC will collide protons at a centre of mass energy of 14 TeV, with a bunch-crossing rate of 40 MHz. The ATLAS three-level trigger will reduce this input rate to match the foreseen offline storage capability of 100-200 Hz. This paper gives an overview of the ATLAS High Level Trigger focusing on the system design and its innovative features. We then present the ATLAS trigger strategy for the initial phase of LHC exploitation. Finally, we report on the valuable experience acquired through in-situ commissioning of the system where simulated events were used to exercise the trigger chain. In particular we show critical quantities such as event processing times, measured in a large-scale HLT farm using a complex trigger menu

  3. TRIGGER

    CERN Multimedia

    W. Smith

    At the December meeting, the CMS trigger group reported on progress in production, tests in the Electronics Integration Center (EIC) in Prevessin 904, progress on trigger installation in the underground counting room at point 5, USC55, and results from the Magnet Test and Cosmic Challenge (MTCC) phase II. The trigger group is engaged in the final stages of production testing, systems integration, and software and firmware development. Most systems are delivering final tested electronics to CERN. The installation in USC55 is underway and moving towards integration testing. A program of orderly connection and checkout with subsystems and central systems has been developed. This program includes a series of vertical subsystem slice tests providing validation of a portion of each subsystem from front-end electronics through the trigger and DAQ to data captured and stored. This is combined with operations and testing without beam that will continue until startup. The plans for start-up, pilot and early running tri...

  4. TRIGGER

    CERN Multimedia

    Wesley Smith

    2011-01-01

    Level-1 Trigger Hardware and Software New Forward Scintillating Counters (FSC) for rapidity gap measurements have been installed and integrated into the Trigger recently. For the Global Muon Trigger, tuning of quality criteria has led to improvements in muon trigger efficiencies. Several subsystems have started campaigns to increase spares by recovering boards or producing new ones. The barrel muon sector collector test system has been reactivated, new η track finder boards are in production, and φ track finder boards are under revision. In the CSC track finder, an η asymmetry problem has been corrected. New pT look-up tables have also improved efficiency. RPC patterns were changed from four out of six coincident layers to three out of six in the barrel, which led to a significant increase in efficiency. A new PAC firmware to trigger on heavy stable charged particles allows looking for chamber hit coincidences in two consecutive bunch-crossings. The redesign of the L1 Trigger Emulator...

  5. TRIGGER

    CERN Multimedia

    W. Smith from contributions of C. Leonidopoulos, I. Mikulec, J. Varela and C. Wulz.

    Level-1 Trigger Hardware and Software Over the past few months, the Level-1 trigger has successfully recorded data with cosmic rays over long continuous stretches as well as LHC splash events, beam halo, and collision events. The L1 trigger hardware, firmware, synchronization, performance and readiness for beam operation were reviewed in October. All L1 trigger hardware is now installed at Point 5, and most of it is completely commissioned. While the barrel ECAL Trigger Concentrator Cards are fully operational, the recently delivered endcap ECAL TCC system is still being commissioned. For most systems there is a sufficient number of spares available, but for a few systems additional reserve modules are needed. It was decided to increase the overall L1 latency by three bunch crossings to increase the safety margin for trigger timing adjustments. In order for CMS to continue data taking during LHC frequency ramps, the clock distribution tree needs to be reset. The procedures for this have been tested. A repl...

  6. TRIGGER

    CERN Multimedia

    W. Smith

    Level-1 Trigger Hardware and Software The road map for the final commissioning of the Level-1 trigger system has been set. The software for the trigger subsystems is being upgraded to run under CERN Scientific Linux 4 (SLC4). There is also a new release of the Trigger Supervisor (TS 1.4), which implies upgrade work by the subsystems. As reported by the CERN group, a campaign to tidy the Trigger Timing and Control (TTC) racks has begun. The machine interface was upgraded by installing the new RF2TTC module, which receives RF signals from LHC Point 4. Two Beam Synchronous Timing (BST) signals, one for each beam, can now be received in CMS. The machine group will define the exact format of the information content shortly. The margin on the locking range of the CMS QPLL will be studied for different subsystems in the next Global Runs, using a function generator. The TTC software has been successfully tested on SLC4. Some TTC subsystems have already been upgraded to SLC4. The TTCci Trigger Supervisor ...

  7. A Scalable and Reliable Message Transport Service for the ATLAS Trigger and Data Acquisition System

    CERN Document Server

    Kazarov, A; The ATLAS collaboration; Kolos, S; Lehmann Miotto, G; Soloviev, I

    2014-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is a large distributed computing system composed of several thousand interconnected computers and tens of thousands of applications. During a run, TDAQ applications produce a large volume of control and information messages at variable rates, addressed to TDAQ operators or to other applications. Reliable, fast and accurate delivery of the messages is important for the functioning of the whole TDAQ system. The Message Transport Service (MTS) provides facilities for the reliable transport, filtering and routing of the messages, based on a publish-subscribe-notify communication pattern with content-based message filtering. During the ongoing LHC shutdown, the MTS was re-implemented, taking into account important requirements like reliability, scalability and performance, handling of the slow-subscriber case and also simplicity of the design and implementation. MTS uses CORBA middleware, a common layer for the TDAQ infrastructure, and provides sending/subscribing APIs i...

  8. TRIGGER

    CERN Multimedia

    by Wesley Smith

    2011-01-01

    Level-1 Trigger Hardware and Software After the winter shutdown minor hardware problems in several subsystems appeared and were corrected. A reassessment of the overall latency has been made. In the TTC system shorter cables between TTCci and TTCex have been installed, which saved one bunch crossing, but which may have required an adjustment of the RPC timing. In order to tackle Pixel out-of-syncs without influencing other subsystems, a special hardware/firmware re-sync protocol has been introduced in the Global Trigger. The link between the Global Calorimeter Trigger and the Global Trigger with the new optical Global Trigger Interface and optical receiver daughterboards has been successfully tested in the Electronics Integration Centre in building 904. New firmware in the GCT now allows a setting to remove the HF towers from energy sums. The HF sleeves have been replaced, which should lead to reduced rates of anomalous signals, which may allow their inclusion after this is validated. For ECAL, improvements i...

  9. The ATLAS online High Level Trigger framework experience reusing offline software components in the ATLAS trigger

    CERN Document Server

    Wiedenmann, W

    2009-01-01

    Event selection in the ATLAS High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data flow environment. The ATLAS High Level Trigger (HLT) framework, based on the Gaudi and ATLAS Athena frameworks, forms the interface layer which allows the execution of the HLT selection and monitoring code within the online run control and data flow software. While such an approach provides a unified environment for trigger event selection across all of ATLAS, it also poses strict requirements on the reused software components in terms of performance, memory usage and stability. Experience of running the HLT selection software in the different environments and especially on large multi-node trigger farms has been gained in several commissioning periods using preloaded Monte Carlo events, in data taking peri...

  10. Test-retest reliability of myofascial trigger point detection in hip and thigh areas.

    Science.gov (United States)

    Rozenfeld, E; Finestone, A S; Moran, U; Damri, E; Kalichman, L

    2017-10-01

    Myofascial trigger points (MTrPs) are a primary source of pain in patients with musculoskeletal disorders. Nevertheless, they are frequently underdiagnosed. Reliable MTrP palpation is necessary for their diagnosis and treatment. The few studies that have examined intra-tester reliability of MTrP detection in the upper body provide preliminary evidence that MTrP palpation is reliable. Reliability tests for MTrP palpation on the lower limb have not yet been performed. To evaluate inter- and intra-tester reliability of MTrP recognition in hip and thigh muscles, a reliability study was performed on 21 patients (15 males and 6 females, mean age 21.1 years) referred to the physical therapy clinic, 10 with knee or hip pain and 11 with pain in an upper limb, low back, shin or ankle. Two experienced physical therapists performed the examinations, blinded to the subjects' identity, medical condition and the results of the previous MTrP evaluation. Each subject was evaluated four times, twice by each examiner, in a random order. Dichotomous findings included a palpable taut band, tenderness, referred pain, and relevance of referred pain to the patient's complaint. Based on these, a diagnosis of latent or active MTrPs was established. The evaluation was performed on both legs and included a total of 16 locations in the following muscles: rectus femoris (proximal), vastus medialis (middle and distal), vastus lateralis (middle and distal) and gluteus medius (anterior, posterior and distal). Inter- and intra-tester reliability (Cohen's kappa (κ)) values for single sites ranged from -0.25 to 0.77. Median intra-tester reliability was 0.45 and 0.46 for latent and active MTrPs, and median inter-tester reliability was 0.51 and 0.64 for latent and active MTrPs, respectively. The examination of the distal vastus medialis was most reliable for latent and active MTrPs (intra-tester κ = 0.27-0.77, inter-tester κ = 0.77 and intra-tester κ = 0.53-0.72, inter-tester κ = 0.72, correspondingly
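Cohen's kappa, the agreement statistic reported in this record, can be computed from a 2×2 table of dichotomous ratings in a few lines; the counts below are invented for illustration and are not the study's data:

```python
# Cohen's kappa for dichotomous ratings (e.g. MTrP present / absent).
# a = both raters say "present", d = both say "absent",
# b and c count the two kinds of disagreement. Counts are invented.
def cohens_kappa(a, b, c, d):
    n = a + b + c + d
    p_observed = (a + d) / n
    # chance agreement estimated from each rater's marginal totals
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

kappa = cohens_kappa(a=10, b=2, c=3, d=6)  # 21 subjects, as in the study
print(round(kappa, 3))  # prints 0.507
```

Kappa corrects raw agreement for the agreement expected by chance, which is why values near 0.5, such as the medians reported above, indicate only moderate reliability even when the raters agree on most subjects.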

  11. TRIGGER

    CERN Multimedia

    W. Smith

    Level-1 Trigger Hardware The CERN group is working on the TTC system. Seven out of nine sub-detector TTC VME crates with all fibers cabled are installed in USC55. 17 Local Trigger Controller (LTC) boards have been received from production and are in the process of being tested. The RF2TTC module replacing the TTCmi machine interface has been delivered and will replace the TTCci module used to mimic the LHC clock. 11 out of 12 crates housing the barrel ECAL off-detector electronics have been installed in USC55 after commissioning at the Electronics Integration Centre in building 904. The cabling to the Regional Calorimeter Trigger (RCT) is terminated. The Lisbon group has completed the Synchronization and Link mezzanine board (SLB) production. The Palaiseau group has fully tested and installed 33 out of 40 Trigger Concentrator Cards (TCC). The seven remaining boards are being remade. The barrel TCC boards have been tested at the H4 test beam, and good agreement with emulator predictions was found. The cons...

  12. The ATLAS High Level Trigger Steering Framework and the Trigger Configuration System

    CERN Document Server

    Pérez Cavalcanti, Tiago; The ATLAS collaboration

    2011-01-01

    The ATLAS detector system installed in the Large Hadron Collider (LHC) at CERN is designed to study proton-proton and nucleus-nucleus collisions with a maximum center of mass energy of 14 TeV at a bunch collision rate of 40 MHz. In March 2010 the four LHC experiments saw the first proton-proton collisions at 7 TeV. Still within the year a collision rate of nearly 10 MHz is expected. At ATLAS, events of potential interest for ATLAS physics are selected by a three-level trigger system, with a final recording rate of about 200 Hz. The first level (L1) is implemented in custom hardware; the two levels of the high level trigger (HLT) are software triggers, running on large farms of standard computers and network devices.

    Within the ATLAS physics program more than 500 trigger signatures are defined. The HLT tests each signature on each L1-accepted event; the test outcome is recor...

  13. High reliability low jitter 80 kV pulse generator

    International Nuclear Information System (INIS)

    Savage, Mark Edward; Stoltzfus, Brian Scott

    2009-01-01

    Switching can be considered to be the essence of pulsed power. Time accurate switch/trigger systems with low inductance are useful in many applications. This article describes a unique switch geometry coupled with a low-inductance capacitive energy store. The system provides a fast-rising high voltage pulse into a low impedance load. It can be challenging to generate high voltage (more than 50 kilovolts) into impedances less than 10 Ω, from a low voltage control signal with a fast rise time and high temporal accuracy. The required power amplification is large, and is usually accomplished with multiple stages. The multiple stages can adversely affect the temporal accuracy and the reliability of the system. In the present application, a highly reliable and low jitter trigger generator was required for the Z pulsed-power facility [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, and J. R. Woodworth, 2007 IEEE Pulsed Power Conference, Albuquerque, NM (IEEE, Piscataway, NJ, 2007), p. 979]. The large investment in each Z experiment demands low prefire probability and low jitter simultaneously. The system described here is based on a 100 kV DC-charged high-pressure spark gap, triggered with an ultraviolet laser. The system uses a single optical path for simultaneously triggering two parallel switches, allowing lower inductance and electrode erosion with a simple optical system. Performance of the system includes 6 ns output rise time into 5.6 Ω, 550 ps one-sigma jitter measured from the 5 V trigger to the high voltage output, and misfire probability less than 10^-4. The design of the system and some key measurements will be shown in the paper. We will discuss the design goals related to high reliability and low jitter. While

  14. Progress in the High Level Trigger Integration

    CERN Multimedia

    Cristobal Padilla

    2007-01-01

    During the week from March 19th to March 23rd, the DAQ/HLT group performed another of its technical runs. On this occasion the focus was on integrating the Level 2 and Event Filter triggers, with a much fuller integration of HLT components than had been done previously. For the first time this included complete trigger slices, with a menu to run the selection algorithms for muons, electrons, jets and taus at the Level-2 and Event Filter levels. This Technical run again used the "Pre-Series" system (a vertical slice prototype of the DAQ/HLT system, see the ATLAS e-news January issue for details). Simulated events, provided by our colleagues working in the streaming tests, were pre-loaded into the ROS (Read Out System) nodes. These are the PCs where the data from the detector are stored after coming out of the front-end electronics, the "first part of the TDAQ system" and the interface to the detectors. These events used a realistic beam interaction mixture and had been subjected to a Level-1 selection. The...

  15. Reliability analysis of multi-trigger binary systems subject to competing failures

    International Nuclear Information System (INIS)

    Wang, Chaonan; Xing, Liudong; Levitin, Gregory

    2013-01-01

    This paper suggests two combinatorial algorithms for the reliability analysis of multi-trigger binary systems subject to competing failure propagation and failure isolation effects. A propagated failure with global effect (PFGE) not only causes an outage of the component in which it originates, but also propagates through all other system components, causing the entire system to fail. However, the propagation effect of a PFGE can be isolated in systems with functional dependence (FDEP) behavior. This paper studies two distinct consequences of PFGE resulting from a competition in the time domain between the failure isolation and failure propagation effects. As compared to existing works on competing failures that are limited to systems with a single FDEP group, this paper considers more complicated cases where the systems have multiple dependent FDEP groups. Analysis of such systems is more challenging because both the occurrence order between the trigger failure event and PFGE from the dependent components and the occurrence order among the multiple trigger failure events have to be considered. Two combinatorial and analytical algorithms are proposed. Both of them have no limitation on the type of time-to-failure distributions for the system components. Their correctness is verified using a Markov-based method. An example of memory systems is analyzed to demonstrate and compare the applications and advantages of the two proposed algorithms. - Highlights: ► Reliability of binary systems with multiple dependent functional dependence groups is analyzed. ► Competing failure propagation and failure isolation effect is considered. ► The proposed algorithms are combinatorial and applicable to any arbitrary type of time-to-failure distributions for system components.
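The time-domain competition between propagation and isolation that this record describes can be illustrated with a small Monte Carlo sketch. Note that this is only an illustration of the competing-failure concept: the paper's own algorithms are combinatorial, not simulation-based, and the exponential failure rates below are invented:

```python
import random

# Competing failures: a PFGE from a dependent component brings the whole
# system down unless the trigger of its FDEP group fails first, which
# isolates the dependent component and contains the propagation.
# Exponential rates and the mission time are invented for illustration.
def system_fails_by(t_mission, rate_trigger, rate_pfge, rng):
    t_trigger = rng.expovariate(rate_trigger)  # time of trigger failure
    t_pfge = rng.expovariate(rate_pfge)        # time of propagated failure
    # PFGE occurs within the mission AND before isolation -> system failure
    return t_pfge <= t_mission and t_pfge < t_trigger

rng = random.Random(42)
trials = 100_000
failures = sum(
    system_fails_by(t_mission=1.0, rate_trigger=0.5, rate_pfge=0.2, rng=rng)
    for _ in range(trials)
)
print(f"estimated system failure probability: {failures / trials:.3f}")
```

For these rates the analytic answer is (λp / (λp + λt)) · (1 − e^−(λp+λt)·t) ≈ 0.144, so the simulation estimate should land close to that; the paper's combinatorial algorithms compute such probabilities exactly, without sampling and for arbitrary time-to-failure distributions.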

  16. The ATLAS online High Level Trigger framework: Experience reusing offline software components in the ATLAS trigger

    International Nuclear Information System (INIS)

    Wiedenmann, Werner

    2010-01-01

    Event selection in the ATLAS High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data flow environment. The ATLAS High Level Trigger (HLT) framework based on the GAUDI and ATLAS ATHENA frameworks, forms the interface layer, which allows the execution of the HLT selection and monitoring code within the online run control and data flow software. While such an approach provides a unified environment for trigger event selection across all of ATLAS, it also poses strict requirements on the reused software components in terms of performance, memory usage and stability. Experience of running the HLT selection software in the different environments and especially on large multi-node trigger farms has been gained in several commissioning periods using preloaded Monte Carlo events, in data taking periods with cosmic events and in a short period with proton beams from LHC. The contribution discusses the architectural aspects of the HLT framework, its performance and its software environment within the ATLAS computing, trigger and data flow projects. Emphasis is also put on the architectural implications for the software by the use of multi-core processors in the computing farms and the experiences gained with multi-threading and multi-process technologies.

  17. Commissioning of the CMS High-Level Trigger with Cosmic Rays

    CERN Document Server

    Chatrchyan, S; Sirunyan, A M; Adam, W; Arnold, B; Bergauer, H; Bergauer, T; Dragicevic, M; Eichberger, M; Erö, J; Friedl, M; Frühwirth, R; Ghete, V M; Hammer, J; Hänsel, S; Hoch, M; Hörmann, N; Hrubec, J; Jeitler, M; Kasieczka, G; Kastner, K; Krammer, M; Liko, D; Magrans de Abril, I; Mikulec, I; Mittermayr, F; Neuherz, B; Oberegger, M; Padrta, M; Pernicka, M; Rohringer, H; Schmid, S; Schöfbeck, R; Schreiner, T; Stark, R; Steininger, H; Strauss, J; Taurok, A; Teischinger, F; Themel, T; Uhl, D; Wagner, P; Waltenberger, W; Walzel, G; Widl, E; Wulz, C E; Chekhovsky, V; Dvornikov, O; Emeliantchik, I; Litomin, A; Makarenko, V; Marfin, I; Mossolov, V; Shumeiko, N; Solin, A; Stefanovitch, R; Suarez Gonzalez, J; Tikhonov, A; Fedorov, A; Karneyeu, A; Korzhik, M; Panov, V; Zuyeuski, R; Kuchinsky, P; Beaumont, W; Benucci, L; Cardaci, M; De Wolf, E A; Delmeire, E; Druzhkin, D; Hashemi, M; Janssen, X; Maes, T; Mucibello, L; Ochesanu, S; Rougny, R; Selvaggi, M; Van Haevermaet, H; Van Mechelen, P; Van Remortel, N; Adler, V; Beauceron, S; Blyweert, S; D'Hondt, J; De Weirdt, S; Devroede, O; Heyninck, J; Kalogeropoulos, A; Maes, J; Maes, M; Mozer, M U; Tavernier, S; Van Doninck, W; Van Mulders, P; Villella, I; Bouhali, O; Chabert, E C; Charaf, O; Clerbaux, B; De Lentdecker, G; Dero, V; Elgammal, S; Gay, A P R; Hammad, G H; Marage, P E; Rugovac, S; Vander Velde, C; Vanlaer, P; Wickens, J; Grunewald, M; Klein, B; Marinov, A; Ryckbosch, D; Thyssen, F; Tytgat, M; Vanelderen, L; Verwilligen, P; Basegmez, S; Bruno, G; Caudron, J; Delaere, C; Demin, P; Favart, D; Giammanco, A; Grégoire, G; Lemaitre, V; Militaru, O; Ovyn, S; Piotrzkowski, K; Quertenmont, L; Schul, N; Beliy, N; Daubie, E; Alves, G A; Pol, M E; Souza, M H G; Carvalho, W; De Jesus Damiao, D; De Oliveira Martins, C; Fonseca De Souza, S; Mundim, L; Oguri, V; Santoro, A; Silva Do Amaral, S M; Sznajder, A; Fernandez Perez Tomei, T R; Ferreira Dias, M A; Gregores, E M; Novaes, S F; Abadjiev, K; Anguelov, T; Damgov, J; Darmenov, 
N; Dimitrov, L; Genchev, V; Iaydjiev, P; Piperov, S; Stoykova, S; Sultanov, G; Trayanov, R; Vankov, I; Dimitrov, A; Dyulendarova, M; Kozhuharov, V; Litov, L; Marinova, E; Mateev, M; Pavlov, B; Petkov, P; Toteva, Z; Chen, G M; Chen, H S; Guan, W; Jiang, C H; Liang, D; Liu, B; Meng, X; Tao, J; Wang, J; Wang, Z; Xue, Z; Zhang, Z; Ban, Y; Cai, J; Ge, Y; Guo, S; Hu, Z; Mao, Y; Qian, S J; Teng, H; Zhu, B; Avila, C; Baquero Ruiz, M; Carrillo Montoya, C A; Gomez, A; Gomez Moreno, B; Ocampo Rios, A A; Osorio Oliveros, A F; Reyes Romero, D; Sanabria, J C; Godinovic, N; Lelas, K; Plestina, R; Polic, D; Puljak, I; Antunovic, Z; Dzelalija, M; Brigljevic, V; Duric, S; Kadija, K; Morovic, S; Fereos, R; Galanti, M; Mousa, J; Papadakis, A; Ptochos, F; Razis, P A; Tsiakkouri, D; Zinonos, Z; Hektor, A; Kadastik, M; Kannike, K; Müntel, M; Raidal, M; Rebane, L; Anttila, E; Czellar, S; Härkönen, J; Heikkinen, A; Karimäki, V; Kinnunen, R; Klem, J; Kortelainen, M J; Lampén, T; Lassila-Perini, K; Lehti, S; Lindén, T; Luukka, P; Mäenpää, T; Nysten, J; Tuominen, E; Tuominiemi, J; Ungaro, D; Wendland, L; Banzuzi, K; Korpela, A; Tuuva, T; Nedelec, P; Sillou, D; Besancon, M; Chipaux, R; Dejardin, M; Denegri, D; Descamps, J; Fabbro, B; Faure, J L; Ferri, F; Ganjour, S; Gentit, F X; Givernaud, A; Gras, P; Hamel de Monchenault, G; Jarry, P; Lemaire, M C; Locci, E; Malcles, J; Marionneau, M; Millischer, L; Rander, J; Rosowsky, A; Rousseau, D; Titov, M; Verrecchia, P; Baffioni, S; Bianchini, L; Bluj, M; Busson, P; Charlot, C; Dobrzynski, L; Granier de Cassagnac, R; Haguenauer, M; Miné, P; Paganini, P; Sirois, Y; Thiebaux, C; Zabi, A; Agram, J L; Besson, A; Bloch, D; Bodin, D; Brom, J M; Conte, E; Drouhin, F; Fontaine, J C; Gelé, D; Goerlach, U; Gross, L; Juillot, P; Le Bihan, A C; Patois, Y; Speck, J; Van Hove, P; Baty, C; Bedjidian, M; Blaha, J; Boudoul, G; Brun, H; Chanon, N; Chierici, R; Contardo, D; Depasse, P; Dupasquier, T; El Mamouni, H; Fassi, F; Fay, J; Gascon, S; Ille, B; Kurca, T; Le 
Grand, T; Lethuillier, M; Lumb, N; Mirabito, L; Perries, S; Vander Donckt, M; Verdier, P; Djaoshvili, N; Roinishvili, N; Roinishvili, V; Amaglobeli, N; Adolphi, R; Anagnostou, G; Brauer, R; Braunschweig, W; Edelhoff, M; Esser, H; Feld, L; Karpinski, W; Khomich, A; Klein, K; Mohr, N; Ostaptchouk, A; Pandoulas, D; Pierschel, G; Raupach, F; Schael, S; Schultz von Dratzig, A; Schwering, G; Sprenger, D; Thomas, M; Weber, M; Wittmer, B; Wlochal, M; Actis, O; Altenhöfer, G; Bender, W; Biallass, P; Erdmann, M; Fetchenhauer, G; Frangenheim, J; Hebbeker, T; Hilgers, G; Hinzmann, A; Hoepfner, K; Hof, C; Kirsch, M; Klimkovich, T; Kreuzer, P; Lanske, D; Merschmeyer, M; Meyer, A; Philipps, B; Pieta, H; Reithler, H; Schmitz, S A; Sonnenschein, L; Sowa, M; Steggemann, J; Szczesny, H; Teyssier, D; Zeidler, C; Bontenackels, M; Davids, M; Duda, M; Flügge, G; Geenen, H; Giffels, M; Haj Ahmad, W; Hermanns, T; Heydhausen, D; Kalinin, S; Kress, T; Linn, A; Nowack, A; Perchalla, L; Poettgens, M; Pooth, O; Sauerland, P; Stahl, A; Tornier, D; Zoeller, M H; Aldaya Martin, M; Behrens, U; Borras, K; Campbell, A; Castro, E; Dammann, D; Eckerlin, G; Flossdorf, A; Flucke, G; Geiser, A; Hatton, D; Hauk, J; Jung, H; Kasemann, M; Katkov, I; Kleinwort, C; Kluge, H; Knutsson, A; Kuznetsova, E; Lange, W; Lohmann, W; Mankel, R; Marienfeld, M; Meyer, A B; Miglioranzi, S; Mnich, J; Ohlerich, M; Olzem, J; Parenti, A; Rosemann, C; Schmidt, R; Schoerner-Sadenius, T; Volyanskyy, D; Wissing, C; Zeuner, W D; Autermann, C; Bechtel, F; Draeger, J; Eckstein, D; Gebbert, U; Kaschube, K; Kaussen, G; Klanner, R; Mura, B; Naumann-Emme, S; Nowak, F; Pein, U; Sander, C; Schleper, P; Schum, T; Stadie, H; Steinbrück, G; Thomsen, J; Wolf, R; Bauer, J; Blüm, P; Buege, V; Cakir, A; Chwalek, T; De Boer, W; Dierlamm, A; Dirkes, G; Feindt, M; Felzmann, U; Frey, M; Furgeri, A; Gruschke, J; Hackstein, C; Hartmann, F; Heier, S; Heinrich, M; Held, H; Hirschbuehl, D; Hoffmann, K H; Honc, S; Jung, C; Kuhr, T; Liamsuwan, T; Martschei, 
D; Mueller, S; Müller, Th; Neuland, M B; Niegel, M; Oberst, O; Oehler, A; Ott, J; Peiffer, T; Piparo, D; Quast, G; Rabbertz, K; Ratnikov, F; Ratnikova, N; Renz, M; Saout, C; Sartisohn, G; Scheurer, A; Schieferdecker, P; Schilling, F P; Schott, G; Simonis, H J; Stober, F M; Sturm, P; Troendle, D; Trunov, A; Wagner, W; Wagner-Kuhr, J; Zeise, M; Zhukov, V; Ziebarth, E B; Daskalakis, G; Geralis, T; Karafasoulis, K; Kyriakis, A; Loukas, D; Markou, A; Markou, C; Mavrommatis, C; Petrakou, E; Zachariadou, A; Gouskos, L; Katsas, P; Panagiotou, A; Evangelou, I; Kokkas, P; Manthos, N; Papadopoulos, I; Patras, V; Triantis, F A; Bencze, G; Boldizsar, L; Debreczeni, G; Hajdu, C; Hernath, S; Hidas, P; Horvath, D; Krajczar, K; Laszlo, A; Patay, G; Sikler, F; Toth, N; Vesztergombi, G; Beni, N; Christian, G; Imrek, J; Molnar, J; Novak, D; Palinkas, J; Szekely, G; Szillasi, Z; Tokesi, K; Veszpremi, V; Kapusi, A; Marian, G; Raics, P; Szabo, Z; Trocsanyi, Z L; Ujvari, B; Zilizi, G; Bansal, S; Bawa, H S; Beri, S B; Bhatnagar, V; Jindal, M; Kaur, M; Kaur, R; Kohli, J M; Mehta, M Z; Nishu, N; Saini, L K; Sharma, A; Singh, A; Singh, J B; Singh, S P; Ahuja, S; Arora, S; Bhattacharya, S; Chauhan, S; Choudhary, B C; Gupta, P; Jain, S; Jha, M; Kumar, A; Ranjan, K; Shivpuri, R K; Srivastava, A K; Choudhury, R K; Dutta, D; Kailas, S; Kataria, S K; Mohanty, A K; Pant, L M; Shukla, P; Topkar, A; Aziz, T; Guchait, M; Gurtu, A; Maity, M; Majumder, D; Majumder, G; Mazumdar, K; Nayak, A; Saha, A; Sudhakar, K; Banerjee, S; Dugad, S; Mondal, N K; Arfaei, H; Bakhshiansohi, H; Fahim, A; Jafari, A; Mohammadi Najafabadi, M; Moshaii, A; Paktinat Mehdiabadi, S; Rouhani, S; Safarzadeh, B; Zeinali, M; Felcini, M; Abbrescia, M; Barbone, L; Chiumarulo, F; Clemente, A; Colaleo, A; Creanza, D; Cuscela, G; De Filippis, N; De Palma, M; De Robertis, G; Donvito, G; Fedele, F; Fiore, L; Franco, M; Iaselli, G; Lacalamita, N; Loddo, F; Lusito, L; Maggi, G; Maggi, M; Manna, N; Marangelli, B; My, S; Natali, S; Nuzzo, S; 
Papagni, G; Piccolomo, S; Pierro, G A; Pinto, C; Pompili, A; Pugliese, G; Rajan, R; Ranieri, A; Romano, F; Roselli, G; Selvaggi, G; Shinde, Y; Silvestris, L; Tupputi, S; Zito, G; Abbiendi, G; Bacchi, W; Benvenuti, A C; Boldini, M; Bonacorsi, D; Braibant-Giacomelli, S; Cafaro, V D; Caiazza, S S; Capiluppi, P; Castro, A; Cavallo, F R; Codispoti, G; Cuffiani, M; D'Antone, I; Dallavalle, G M; Fabbri, F; Fanfani, A; Fasanella, D; Giacomelli, P; Giordano, V; Giunta, M; Grandi, C; Guerzoni, M; Marcellini, S; Masetti, G; Montanari, A; Navarria, F L; Odorici, F; Pellegrini, G; Perrotta, A; Rossi, A M; Rovelli, T; Siroli, G; Torromeo, G; Travaglini, R; Albergo, S; Costa, S; Potenza, R; Tricomi, A; Tuve, C; Barbagli, G; Broccolo, G; Ciulli, V; Civinini, C; D'Alessandro, R; Focardi, E; Frosali, S; Gallo, E; Genta, C; Landi, G; Lenzi, P; Meschini, M; Paoletti, S; Sguazzoni, G; Tropiano, A; Benussi, L; Bertani, M; Bianco, S; Colafranceschi, S; Colonna, D; Fabbri, F; Giardoni, M; Passamonti, L; Piccolo, D; Pierluigi, D; Ponzio, B; Russo, A; Fabbricatore, P; Musenich, R; Benaglia, A; Calloni, M; Cerati, G B; D'Angelo, P; De Guio, F; Farina, F M; Ghezzi, A; Govoni, P; Malberti, M; Malvezzi, S; Martelli, A; Menasce, D; Miccio, V; Moroni, L; Negri, P; Paganoni, M; Pedrini, D; Pullia, A; Ragazzi, S; Redaelli, N; Sala, S; Salerno, R; Tabarelli de Fatis, T; Tancini, V; Taroni, S; Buontempo, S; Cavallo, N; Cimmino, A; De Gruttola, M; Fabozzi, F; Iorio, A O M; Lista, L; Lomidze, D; Noli, P; Paolucci, P; Sciacca, C; Azzi, P; Bacchetta, N; Barcellan, L; Bellan, P; Bellato, M; Benettoni, M; Biasotto, M; Bisello, D; Borsato, E; Branca, A; Carlin, R; Castellani, L; Checchia, P; Conti, E; Dal Corso, F; De Mattia, M; Dorigo, T; Dosselli, U; Fanzago, F; Gasparini, F; Gasparini, U; Giubilato, P; Gonella, F; Gresele, A; Gulmini, M; Kaminskiy, A; Lacaprara, S; Lazzizzera, I; Margoni, M; Maron, G; Mattiazzo, S; Mazzucato, M; Meneghelli, M; Meneguzzo, A T; Michelotto, M; Montecassiano, F; Nespolo, M; 
Passaseo, M; Pegoraro, M; Perrozzi, L; Pozzobon, N; Ronchese, P; Simonetto, F; Toniolo, N; Torassa, E; Tosi, M; Triossi, A; Vanini, S; Ventura, S; Zotto, P; Zumerle, G; Baesso, P; Berzano, U; Bricola, S; Necchi, M M; Pagano, D; Ratti, S P; Riccardi, C; Torre, P; Vicini, A; Vitulo, P; Viviani, C; Aisa, D; Aisa, S; Babucci, E; Biasini, M; Bilei, G M; Caponeri, B; Checcucci, B; Dinu, N; Fanò, L; Farnesini, L; Lariccia, P; Lucaroni, A; Mantovani, G; Nappi, A; Piluso, A; Postolache, V; Santocchia, A; Servoli, L; Tonoiu, D; Vedaee, A; Volpe, R; Azzurri, P; Bagliesi, G; Bernardini, J; Berretta, L; Boccali, T; Bocci, A; Borrello, L; Bosi, F; Calzolari, F; Castaldi, R; Dell'Orso, R; Fiori, F; Foà, L; Gennai, S; Giassi, A; Kraan, A; Ligabue, F; Lomtadze, T; Mariani, F; Martini, L; Massa, M; Messineo, A; Moggi, A; Palla, F; Palmonari, F; Petragnani, G; Petrucciani, G; Raffaelli, F; Sarkar, S; Segneri, G; Serban, A T; Spagnolo, P; Tenchini, R; Tolaini, S; Tonelli, G; Venturi, A; Verdini, P G; Baccaro, S; Barone, L; Bartoloni, A; Cavallari, F; Dafinei, I; Del Re, D; Di Marco, E; Diemoz, M; Franci, D; Longo, E; Organtini, G; Palma, A; Pandolfi, F; Paramatti, R; Pellegrino, F; Rahatlou, S; Rovelli, C; Alampi, G; Amapane, N; Arcidiacono, R; Argiro, S; Arneodo, M; Biino, C; Borgia, M A; Botta, C; Cartiglia, N; Castello, R; Cerminara, G; Costa, M; Dattola, D; Dellacasa, G; Demaria, N; Dughera, G; Dumitrache, F; Graziano, A; Mariotti, C; Marone, M; Maselli, S; Migliore, E; Mila, G; Monaco, V; Musich, M; Nervo, M; Obertino, M M; Oggero, S; Panero, R; Pastrone, N; Pelliccioni, M; Romero, A; Ruspa, M; Sacchi, R; Solano, A; Staiano, A; Trapani, P P; Trocino, D; Vilela Pereira, A; Visca, L; Zampieri, A; Ambroglini, F; Belforte, S; Cossutti, F; Della Ricca, G; Gobbo, B; Penzo, A; Chang, S; Chung, J; Kim, D H; Kim, G N; Kong, D J; Park, H; Son, D C; Bahk, S Y; Song, S; Jung, S Y; Hong, B; Kim, H; Kim, J H; Lee, K S; Moon, D H; Park, S K; Rhee, H B; Sim, K S; Kim, J; Choi, M; Hahn, G; Park, 
I C; Choi, S; Choi, Y; Goh, J; Jeong, H; Kim, T J; Lee, J; Lee, S; Janulis, M; Martisiute, D; Petrov, P; Sabonis, T; Castilla Valdez, H; Sánchez Hernández, A; Carrillo Moreno, S; Morelos Pineda, A; Allfrey, P; Gray, R N C; Krofcheck, D; Bernardino Rodrigues, N; Butler, P H; Signal, T; Williams, J C; Ahmad, M; Ahmed, I; Ahmed, W; Asghar, M I; Awan, M I M; Hoorani, H R; Hussain, I; Khan, W A; Khurshid, T; Muhammad, S; Qazi, S; Shahzad, H; Cwiok, M; Dabrowski, R; Dominik, W; Doroba, K; Konecki, M; Krolikowski, J; Pozniak, K; Romaniuk, Ryszard; Zabolotny, W; Zych, P; Frueboes, T; Gokieli, R; Goscilo, L; Górski, M; Kazana, M; Nawrocki, K; Szleper, M; Wrochna, G; Zalewski, P; Almeida, N; Antunes Pedro, L; Bargassa, P; David, A; Faccioli, P; Ferreira Parracho, P G; Freitas Ferreira, M; Gallinaro, M; Guerra Jordao, M; Martins, P; Mini, G; Musella, P; Pela, J; Raposo, L; Ribeiro, P Q; Sampaio, S; Seixas, J; Silva, J; Silva, P; Soares, D; Sousa, M; Varela, J; Wöhri, H K; Altsybeev, I; Belotelov, I; Bunin, P; Ershov, Y; Filozova, I; Finger, M; Finger, M., Jr.; Golunov, A; Golutvin, I; Gorbounov, N; Kalagin, V; Kamenev, A; Karjavin, V; Konoplyanikov, V; Korenkov, V; Kozlov, G; Kurenkov, A; Lanev, A; Makankin, A; Mitsyn, V V; Moisenz, P; Nikonov, E; Oleynik, D; Palichik, V; Perelygin, V; Petrosyan, A; Semenov, R; Shmatov, S; Smirnov, V; Smolin, D; Tikhonenko, E; Vasil'ev, S; Vishnevskiy, A; Volodko, A; Zarubin, A; Zhiltsov, V; Bondar, N; Chtchipounov, L; Denisov, A; Gavrikov, Y; Gavrilov, G; Golovtsov, V; Ivanov, Y; Kim, V; Kozlov, V; Levchenko, P; Obrant, G; Orishchin, E; Petrunin, A; Shcheglov, Y; Shchetkovskiy, A; Sknar, V; Smirnov, I; Sulimov, V; Tarakanov, V; Uvarov, L; Vavilov, S; Velichko, G; Volkov, S; Vorobyev, A; Andreev, Yu; Anisimov, A; Antipov, P; Dermenev, A; Gninenko, S; Golubev, N; Kirsanov, M; Krasnikov, N; Matveev, V; Pashenkov, A; Postoev, V E; Solovey, A; Toropin, A; Troitsky, S; Baud, A; Epshteyn, V; Gavrilov, V; Ilina, N; Kaftanov, V; Kolosov, V; Kossov, 
M; Krokhotin, A; Kuleshov, S; Oulianov, A; Safronov, G; Semenov, S; Shreyber, I; Stolin, V; Vlasov, E; Zhokin, A; Boos, E; Dubinin, M; Dudko, L; Ershov, A; Gribushin, A; Klyukhin, V; Kodolova, O; Lokhtin, I; Petrushanko, S; Sarycheva, L; Savrin, V; Snigirev, A; Vardanyan, I; Dremin, I; Kirakosyan, M; Konovalova, N; Rusakov, S V; Vinogradov, A; Akimenko, S; Artamonov, A; Azhgirey, I; Bitioukov, S; Burtovoy, V; Grishin, V; Kachanov, V; Konstantinov, D; Krychkine, V; Levine, A; Lobov, I; Lukanin, V; Mel'nik, Y; Petrov, V; Ryutin, R; Slabospitsky, S; Sobol, A; Sytine, A; Tourtchanovitch, L; Troshin, S; Tyurin, N; Uzunian, A; Volkov, A; Adzic, P; Djordjevic, M; Jovanovic, D; Krpic, D; Maletic, D; Puzovic, J; Smiljkovic, N; Aguilar-Benitez, M; Alberdi, J; Alcaraz Maestre, J; Arce, P; Barcala, J M; Battilana, C; Burgos Lazaro, C; Caballero Bejar, J; Calvo, E; Cardenas Montes, M; Cepeda, M; Cerrada, M; Chamizo Llatas, M; Clemente, F; Colino, N; Daniel, M; De La Cruz, B; Delgado Peris, A; Diez Pardos, C; Fernandez Bedoya, C; Fernández Ramos, J P; Ferrando, A; Flix, J; Fouz, M C; Garcia-Abia, P; Garcia-Bonilla, A C; Gonzalez Lopez, O; Goy Lopez, S; Hernandez, J M; Josa, M I; Marin, J; Merino, G; Molina, J; Molinero, A; Navarrete, J J; Oller, J C; Puerta Pelayo, J; Romero, L; Santaolalla, J; Villanueva Munoz, C; Willmott, C; Yuste, C; Albajar, C; Blanco Otano, M; de Trocóniz, J F; Garcia Raboso, A; Lopez Berengueres, J O; Cuevas, J; Fernandez Menendez, J; Gonzalez Caballero, I; Lloret Iglesias, L; Naves Sordo, H; Vizan Garcia, J M; Cabrillo, I J; Calderon, A; Chuang, S H; Diaz Merino, I; Diez Gonzalez, C; Duarte Campderros, J; Fernandez, M; Gomez, G; Gonzalez Sanchez, J; Gonzalez Suarez, R; Jorda, C; Lobelle Pardo, P; Lopez Virto, A; Marco, J; Marco, R; Martinez Rivero, C; Martinez Ruiz del Arbol, P; Matorras, F; Rodrigo, T; Ruiz Jimeno, A; Scodellaro, L; Sobron Sanudo, M; Vila, I; Vilar Cortabitarte, R; Abbaneo, D; Albert, E; Alidra, M; Ashby, S; Auffray, E; Baechler, J; 
Baillon, P; Ball, A H; Bally, S L; Barney, D; Beaudette, F; Bellan, R; Benedetti, D; Benelli, G; Bernet, C; Bloch, P; Bolognesi, S; Bona, M; Bos, J; Bourgeois, N; Bourrel, T; Breuker, H; Bunkowski, K; Campi, D; Camporesi, T; Cano, E; Cattai, A; Chatelain, J P; Chauvey, M; Christiansen, T; Coarasa Perez, J A; Conde Garcia, A; Covarelli, R; Curé, B; De Roeck, A; Delachenal, V; Deyrail, D; Di Vincenzo, S; Dos Santos, S; Dupont, T; Edera, L M; Elliott-Peisert, A; Eppard, M; Favre, M; Frank, N; Funk, W; Gaddi, A; Gastal, M; Gateau, M; Gerwig, H; Gigi, D; Gill, K; Giordano, D; Girod, J P; Glege, F; Gomez-Reino Garrido, R; Goudard, R; Gowdy, S; Guida, R; Guiducci, L; Gutleber, J; Hansen, M; Hartl, C; Harvey, J; Hegner, B; Hoffmann, H F; Holzner, A; Honma, A; Huhtinen, M; Innocente, V; Janot, P; Le Godec, G; Lecoq, P; Leonidopoulos, C; Loos, R; Lourenço, C; Lyonnet, A; Macpherson, A; Magini, N; Maillefaud, J D; Maire, G; Mäki, T; Malgeri, L; Mannelli, M; Masetti, L; Meijers, F; Meridiani, P; Mersi, S; Meschi, E; Meynet Cordonnier, A; Moser, R; Mulders, M; Mulon, J; Noy, M; Oh, A; Olesen, G; Onnela, A; Orimoto, T; Orsini, L; Perez, E; Perinic, G; Pernot, J F; Petagna, P; Petiot, P; Petrilli, A; Pfeiffer, A; Pierini, M; Pimiä, M; Pintus, R; Pirollet, B; Postema, H; Racz, A; Ravat, S; Rew, S B; Rodrigues Antunes, J; Rolandi, G; Rovere, M; Ryjov, V; Sakulin, H; Samyn, D; Sauce, H; Schäfer, C; Schlatter, W D; Schröder, M; Schwick, C; Sciaba, A; Segoni, I; Sharma, A; Siegrist, N; Siegrist, P; Sinanis, N; Sobrier, T; Sphicas, P; Spiga, D; Spiropulu, M; Stöckli, F; Traczyk, P; Tropea, P; Troska, J; Tsirou, A; Veillet, L; Veres, G I; Voutilainen, M; Wertelaers, P; Zanetti, M; Bertl, W; Deiters, K; Erdmann, W; Gabathuler, K; Horisberger, R; Ingram, Q; Kaestli, H C; König, S; Kotlinski, D; Langenegger, U; Meier, F; Renker, D; Rohe, T; Sibille, J; Starodumov, A; Betev, B; Caminada, L; Chen, Z; Cittolin, S; Da Silva Di Calafiori, D R; Dambach, S; Dissertori, G; Dittmar, M; Eggel, C; 
Eugster, J; Faber, G; Freudenreich, K; Grab, C; Hervé, A; Hintz, W; Lecomte, P; Luckey, P D; Lustermann, W; Marchica, C; Milenovic, P; Moortgat, F; Nardulli, A; Nessi-Tedaldi, F; Pape, L; Pauss, F; Punz, T; Rizzi, A; Ronga, F J; Sala, L; Sanchez, A K; Sawley, M C; Sordini, V; Stieger, B; Tauscher, L; Thea, A; Theofilatos, K; Treille, D; Trüb, P; Weber, M; Wehrli, L; Weng, J; Zelepoukine, S; Amsler, C; Chiochia, V; De Visscher, S; Regenfus, C; Robmann, P; Rommerskirchen, T; Schmidt, A; Tsirigkas, D; Wilke, L; Chang, Y H; Chen, E A; Chen, W T; Go, A; Kuo, C M; Li, S W; Lin, W; Bartalini, P; Chang, P; Chao, Y; Chen, K F; Hou, W S; Hsiung, Y; Lei, Y J; Lin, S W; Lu, R S; Schümann, J; Shiu, J G; Tzeng, Y M; Ueno, K; Velikzhanin, Y; Wang, C C; Wang, M; Adiguzel, A; Ayhan, A; Azman Gokce, A; Bakirci, M N; Cerci, S; Dumanoglu, I; Eskut, E; Girgis, S; Gurpinar, E; Hos, I; Karaman, T; Kayis Topaksu, A; Kurt, P; Önengüt, G; Önengüt Gökbulut, G; Ozdemir, K; Ozturk, S; Polatöz, A; Sogut, K; Tali, B; Topakli, H; Uzun, D; Vergili, L N; Vergili, M; Akin, I V; Aliev, T; Bilmis, S; Deniz, M; Gamsizkan, H; Guler, A M; Öcalan, K; Serin, M; Sever, R; Surat, U E; Zeyrek, M; Deliomeroglu, M; Demir, D; Gülmez, E; Halu, A; Isildak, B; Kaya, M; Kaya, O; Ozkorucuklu, S; Sonmez, N; Levchuk, L; Lukyanenko, S; Soroka, D; Zub, S; Bostock, F; Brooke, J J; Cheng, T L; Cussans, D; Frazier, R; Goldstein, J; Grant, N; Hansen, M; Heath, G P; Heath, H F; Hill, C; Huckvale, B; Jackson, J; Mackay, C K; Metson, S; Newbold, D M; Nirunpong, K; Smith, V J; Velthuis, J; Walton, R; Bell, K W; Brew, C; Brown, R M; Camanzi, B; Cockerill, D J A; Coughlan, J A; Geddes, N I; Harder, K; Harper, S; Kennedy, B W; Murray, P; Shepherd-Themistocleous, C H; Tomalin, I R; Williams, J H; Womersley, W J; Worm, S D; Bainbridge, R; Ball, G; Ballin, J; Beuselinck, R; Buchmuller, O; Colling, D; Cripps, N; Davies, G; Della Negra, M; Foudas, C; Fulcher, J; Futyan, D; Hall, G; Hays, J; Iles, G; Karapostoli, G; MacEvoy, B C; Magnan, 
A M; Marrouche, J; Nash, J; Nikitenko, A; Papageorgiou, A; Pesaresi, M; Petridis, K; Pioppi, M; Raymond, D M; Rompotis, N; Rose, A; Ryan, M J; Seez, C; Sharp, P; Sidiropoulos, G; Stettler, M; Stoye, M; Takahashi, M; Tapper, A; Timlin, C; Tourneur, S; Vazquez Acosta, M; Virdee, T; Wakefield, S; Wardrope, D; Whyntie, T; Wingham, M; Cole, J E; Goitom, I; Hobson, P R; Khan, A; Kyberd, P; Leslie, D; Munro, C; Reid, I D; Siamitros, C; Taylor, R; Teodorescu, L; Yaselli, I; Bose, T; Carleton, M; Hazen, E; Heering, A H; Heister, A; John, J St; Lawson, P; Lazic, D; Osborne, D; Rohlf, J; Sulak, L; Wu, S; Andrea, J; Avetisyan, A; Bhattacharya, S; Chou, J P; Cutts, D; Esen, S; Kukartsev, G; Landsberg, G; Narain, M; Nguyen, D; Speer, T; Tsang, K V; Breedon, R; Calderon De La Barca Sanchez, M; Case, M; Cebra, D; Chertok, M; Conway, J; Cox, P T; Dolen, J; Erbacher, R; Friis, E; Ko, W; Kopecky, A; Lander, R; Lister, A; Liu, H; Maruyama, S; Miceli, T; Nikolic, M; Pellett, D; Robles, J; Searle, M; Smith, J; Squires, M; Stilley, J; Tripathi, M; Vasquez Sierra, R; Veelken, C; Andreev, V; Arisaka, K; Cline, D; Cousins, R; Erhan, S; Hauser, J; Ignatenko, M; Jarvis, C; Mumford, J; Plager, C; Rakness, G; Schlein, P; Tucker, J; Valuev, V; Wallny, R; Yang, X; Babb, J; Bose, M; Chandra, A; Clare, R; Ellison, J A; Gary, J W; Hanson, G; Jeng, G Y; Kao, S C; Liu, F; Liu, H; Luthra, A; Nguyen, H; Pasztor, G; Satpathy, A; Shen, B C; Stringer, R; Sturdy, J; Sytnik, V; Wilken, R; Wimpenny, S; Branson, J G; Dusinberre, E; Evans, D; Golf, F; Kelley, R; Lebourgeois, M; Letts, J; Lipeles, E; Mangano, B; Muelmenstaedt, J; Norman, M; Padhi, S; Petrucci, A; Pi, H; Pieri, M; Ranieri, R; Sani, M; Sharma, V; Simon, S; Würthwein, F; Yagil, A; Campagnari, C; D'Alfonso, M; Danielson, T; Garberson, J; Incandela, J; Justus, C; Kalavase, P; Koay, S A; Kovalskyi, D; Krutelyov, V; Lamb, J; Lowette, S; Pavlunin, V; Rebassoo, F; Ribnik, J; Richman, J; Rossin, R; Stuart, D; To, W; Vlimant, J R; Witherell, M; Apresyan, 
A; Bornheim, A; Bunn, J; Chiorboli, M; Gataullin, M; Kcira, D; Litvine, V; Ma, Y; Newman, H B; Rogan, C; Timciuc, V; Veverka, J; Wilkinson, R; Yang, Y; Zhang, L; Zhu, K; Zhu, R Y; Akgun, B; Carroll, R; Ferguson, T; Jang, D W; Jun, S Y; Paulini, M; Russ, J; Terentyev, N; Vogel, H; Vorobiev, I; Cumalat, J P; Dinardo, M E; Drell, B R; Ford, W T; Heyburn, B; Luiggi Lopez, E; Nauenberg, U; Stenson, K; Ulmer, K; Wagner, S R; Zang, S L; Agostino, L; Alexander, J; Blekman, F; Cassel, D; Chatterjee, A; Das, S; Gibbons, L K; Heltsley, B; Hopkins, W; Khukhunaishvili, A; Kreis, B; Kuznetsov, V; Patterson, J R; Puigh, D; Ryd, A; Shi, X; Stroiney, S; Sun, W; Teo, W D; Thom, J; Vaughan, J; Weng, Y; Wittich, P; Beetz, C P; Cirino, G; Sanzeni, C; Winn, D; Abdullin, S; Afaq, M A; Albrow, M; Ananthan, B; Apollinari, G; Atac, M; Badgett, W; Bagby, L; Bakken, J A; Baldin, B; Banerjee, S; Banicz, K; Bauerdick, L A T; Beretvas, A; Berryhill, J; Bhat, P C; Biery, K; Binkley, M; Bloch, I; Borcherding, F; Brett, A M; Burkett, K; Butler, J N; Chetluru, V; Cheung, H W K; Chlebana, F; Churin, I; Cihangir, S; Crawford, M; Dagenhart, W; Demarteau, M; Derylo, G; Dykstra, D; Eartly, D P; Elias, J E; Elvira, V D; Evans, D; Feng, L; Fischler, M; Fisk, I; Foulkes, S; Freeman, J; Gartung, P; Gottschalk, E; Grassi, T; Green, D; Guo, Y; Gutsche, O; Hahn, A; Hanlon, J; Harris, R M; Holzman, B; Howell, J; Hufnagel, D; James, E; Jensen, H; Johnson, M; Jones, C D; Joshi, U; Juska, E; Kaiser, J; Klima, B; Kossiakov, S; Kousouris, K; Kwan, S; Lei, C M; Limon, P; Lopez Perez, J A; Los, S; Lueking, L; Lukhanin, G; Lusin, S; Lykken, J; Maeshima, K; Marraffino, J M; Mason, D; McBride, P; Miao, T; Mishra, K; Moccia, S; Mommsen, R; Mrenna, S; Muhammad, A S; Newman-Holmes, C; Noeding, C; O'Dell, V; Prokofyev, O; Rivera, R; Rivetta, C H; Ronzhin, A; Rossman, P; Ryu, S; Sekhri, V; Sexton-Kennedy, E; Sfiligoi, I; Sharma, S; Shaw, T M; Shpakov, D; Skup, E; Smith, R P; Soha, A; Spalding, W J; Spiegel, L; Suzuki, I; Tan, 
P; Tanenbaum, W; Tkaczyk, S; Trentadue, R; Uplegger, L; Vaandering, E W; Vidal, R; Whitmore, J; Wicklund, E; Wu, W; Yarba, J; Yumiceva, F; Yun, J C; Acosta, D; Avery, P; Barashko, V; Bourilkov, D; Chen, M; Di Giovanni, G P; Dobur, D; Drozdetskiy, A; Field, R D; Fu, Y; Furic, I K; Gartner, J; Holmes, D; Kim, B; Klimenko, S; Konigsberg, J; Korytov, A; Kotov, K; Kropivnitskaya, A; Kypreos, T; Madorsky, A; Matchev, K; Mitselmakher, G; Pakhotin, Y; Piedra Gomez, J; Prescott, C; Rapsevicius, V; Remington, R; Schmitt, M; Scurlock, B; Wang, D; Yelton, J; Ceron, C; Gaultney, V; Kramer, L; Lebolo, L M; Linn, S; Markowitz, P; Martinez, G; Rodriguez, J L; Adams, T; Askew, A; Baer, H; Bertoldi, M; Chen, J; Dharmaratna, W G D; Gleyzer, S V; Haas, J; Hagopian, S; Hagopian, V; Jenkins, M; Johnson, K F; Prettner, E; Prosper, H; Sekmen, S; Baarmand, M M; Guragain, S; Hohlmann, M; Kalakhety, H; Mermerkaya, H; Ralich, R; Vodopiyanov, I; Abelev, B; Adams, M R; Anghel, I M; Apanasevich, L; Bazterra, V E; Betts, R R; Callner, J; Castro, M A; Cavanaugh, R; Dragoiu, C; Garcia-Solis, E J; Gerber, C E; Hofman, D J; Khalatian, S; Mironov, C; Shabalina, E; Smoron, A; Varelas, N; Akgun, U; Albayrak, E A; Ayan, A S; Bilki, B; Briggs, R; Cankocak, K; Chung, K; Clarida, W; Debbins, P; Duru, F; Ingram, F D; Lae, C K; McCliment, E; Merlo, J P; Mestvirishvili, A; Miller, M J; Moeller, A; Nachtman, J; Newsom, C R; Norbeck, E; Olson, J; Onel, Y; Ozok, F; Parsons, J; Schmidt, I; Sen, S; Wetzel, J; Yetkin, T; Yi, K; Barnett, B A; Blumenfeld, B; Bonato, A; Chien, C Y; Fehling, D; Giurgiu, G; Gritsan, A V; Guo, Z J; Maksimovic, P; Rappoccio, S; Swartz, M; Tran, N V; Zhang, Y; Baringer, P; Bean, A; Grachov, O; Murray, M; Radicci, V; Sanders, S; Wood, J S; Zhukova, V; Bandurin, D; Bolton, T; Kaadze, K; Liu, A; Maravin, Y; Onoprienko, D; Svintradze, I; Wan, Z; Gronberg, J; Hollar, J; Lange, D; Wright, D; Baden, D; Bard, R; Boutemeur, M; Eno, S C; Ferencek, D; Hadley, N J; Kellogg, R G; Kirn, M; Kunori, S; 
Rossato, K; Rumerio, P; Santanastasio, F; Skuja, A; Temple, J; Tonjes, M B; Tonwar, S C; Toole, T; Twedt, E; Alver, B; Bauer, G; Bendavid, J; Busza, W; Butz, E; Cali, I A; Chan, M; D'Enterria, D; Everaerts, P; Gomez Ceballos, G; Hahn, K A; Harris, P; Jaditz, S; Kim, Y; Klute, M; Lee, Y J; Li, W; Loizides, C; Ma, T; Miller, M; Nahn, S; Paus, C; Roland, C; Roland, G; Rudolph, M; Stephans, G; Sumorok, K; Sung, K; Vaurynovich, S; Wenger, E A; Wyslouch, B; Xie, S; Yilmaz, Y; Yoon, A S; Bailleux, D; Cooper, S I; Cushman, P; Dahmes, B; De Benedetti, A; Dolgopolov, A; Dudero, P R; Egeland, R; Franzoni, G; Haupt, J; Inyakin, A; Klapoetke, K; Kubota, Y; Mans, J; Mirman, N; Petyt, D; Rekovic, V; Rusack, R; Schroeder, M; Singovsky, A; Zhang, J; Cremaldi, L M; Godang, R; Kroeger, R; Perera, L; Rahmat, R; Sanders, D A; Sonnek, P; Summers, D; Bloom, K; Bockelman, B; Bose, S; Butt, J; Claes, D R; Dominguez, A; Eads, M; Keller, J; Kelly, T; Kravchenko, I; Lazo-Flores, J; Lundstedt, C; Malbouisson, H; Malik, S; Snow, G R; Baur, U; Iashvili, I; Kharchilava, A; Kumar, A; Smith, K; Strang, M; Alverson, G; Barberis, E; Boeriu, O; Eulisse, G; Govi, G; McCauley, T; Musienko, Y; Muzaffar, S; Osborne, I; Paul, T; Reucroft, S; Swain, J; Taylor, L; Tuura, L; Anastassov, A; Gobbi, B; Kubik, A; Ofierzynski, R A; Pozdnyakov, A; Schmitt, M; Stoynev, S; Velasco, M; Won, S; Antonelli, L; Berry, D; Hildreth, M; Jessop, C; Karmgard, D J; Kolberg, T; Lannon, K; Lynch, S; Marinelli, N; Morse, D M; Ruchti, R; Slaunwhite, J; Warchol, J; Wayne, M; Bylsma, B; Durkin, L S; Gilmore, J; Gu, J; Killewald, P; Ling, T Y; Williams, G; Adam, N; Berry, E; Elmer, P; Garmash, A; Gerbaudo, D; Halyo, V; Hunt, A; Jones, J; Laird, E; Marlow, D; Medvedeva, T; Mooney, M; Olsen, J; Piroué, P; Stickland, D; Tully, C; Werner, J S; Wildish, T; Xie, Z; Zuranski, A; Acosta, J G; Bonnett Del Alamo, M; Huang, X T; Lopez, A; Mendez, H; Oliveros, S; Ramirez Vargas, J E; Santacruz, N; Zatzerklyany, A; Alagoz, E; Antillon, E; Barnes, 
V E; Bolla, G; Bortoletto, D; Everett, A; Garfinkel, A F; Gecse, Z; Gutay, L; Ippolito, N; Jones, M; Koybasi, O; Laasanen, A T; Leonardo, N; Liu, C; Maroussov, V; Merkel, P; Miller, D H; Neumeister, N; Sedov, A; Shipsey, I; Yoo, H D; Zheng, Y; Jindal, P; Parashar, N; Cuplov, V; Ecklund, K M; Geurts, F J M; Liu, J H; Maronde, D; Matveev, M; Padley, B P; Redjimi, R; Roberts, J; Sabbatini, L; Tumanov, A; Betchart, B; Bodek, A; Budd, H; Chung, Y S; de Barbaro, P; Demina, R; Flacher, H; Gotra, Y; Harel, A; Korjenevski, S; Miner, D C; Orbaker, D; Petrillo, G; Vishnevskiy, D; Zielinski, M; Bhatti, A; Demortier, L; Goulianos, K; Hatakeyama, K; Lungu, G; Mesropian, C; Yan, M; Atramentov, O; Bartz, E; Gershtein, Y; Halkiadakis, E; Hits, D; Lath, A; Rose, K; Schnetzer, S; Somalwar, S; Stone, R; Thomas, S; Watts, T L; Cerizza, G; Hollingsworth, M; Spanier, S; Yang, Z C; York, A; Asaadi, J; Aurisano, A; Eusebi, R; Golyash, A; Gurrola, A; Kamon, T; Nguyen, C N; Pivarski, J; Safonov, A; Sengupta, S; Toback, D; Weinberger, M; Akchurin, N; Berntzon, L; Gumus, K; Jeong, C; Kim, H; Lee, S W; Popescu, S; Roh, Y; Sill, A; Volobouev, I; Washington, E; Wigmans, R; Yazgan, E; Engh, D; Florez, C; Johns, W; Pathak, S; Sheldon, P; Andelin, D; Arenton, M W; Balazs, M; Boutle, S; Buehler, M; Conetti, S; Cox, B; Hirosky, R; Ledovskoy, A; Neu, C; Phillips II, D; Ronquest, M; Yohay, R; Gollapinni, S; Gunthoti, K; Harr, R; Karchin, P E; Mattson, M; Sakharov, A; Anderson, M; Bachtis, M; Bellinger, J N; Carlsmith, D; Crotty, I; Dasu, S; Dutta, S; Efron, J; Feyzi, F; Flood, K; Gray, L; Grogg, K S; Grothe, M; Hall-Wilton, R; Jaworski, M; Klabbers, P; Klukas, J; Lanaro, A; Lazaridis, C; Leonard, J; Loveless, R; Magrans de Abril, M; Mohapatra, A; Ott, G; Polese, G; Reeder, D; Savin, A; Smith, W H; Sourkov, A; Swanson, J; Weinberg, M; Wenman, D; Wensveen, M; White, A

    2010-01-01

    The CMS High-Level Trigger (HLT) is responsible for ensuring that data samples with potentially interesting events are recorded with high efficiency and good quality. This paper gives an overview of the HLT and focuses on its commissioning using cosmic rays. The selection of triggers that were deployed is presented and the online grouping of triggered events into streams and primary datasets is discussed. Tools for online and offline data quality monitoring for the HLT are described, and the operational performance of the muon HLT algorithms is reviewed. The average time taken for the HLT selection and its dependence on detector and operating conditions are presented. The HLT performed reliably and helped provide a large dataset. This dataset has proven to be invaluable for understanding the performance of the trigger and the CMS experiment as a whole.
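    The grouping of triggered events into primary datasets mentioned above can be illustrated with a toy mapping from trigger paths to datasets: an event enters every dataset fed by at least one of its accepted paths, so datasets may overlap. The path and dataset names below are hypothetical stand-ins; the real assignment is part of the CMS HLT menu configuration and is far larger.

```python
# Hypothetical trigger-path -> primary-dataset map (illustrative only).
DATASET_OF_PATH = {
    "HLT_Mu5": "Muons",
    "HLT_TrackerCosmics": "Cosmics",
    "HLT_MinBias": "MinimumBias",
}

def group_into_datasets(events):
    """Group triggered events into primary datasets. `events` is a
    sequence of (event_id, fired_paths) pairs; an event is written
    to every dataset fed by one of its fired paths."""
    datasets = {}
    for event_id, fired_paths in events:
        for path in fired_paths:
            dataset = DATASET_OF_PATH.get(path)
            if dataset is not None:
                datasets.setdefault(dataset, []).append(event_id)
    return datasets
```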

  18. Commissioning of the CMS High-Level Trigger with cosmic rays

    International Nuclear Information System (INIS)

    2010-01-01

    The CMS High-Level Trigger (HLT) is responsible for ensuring that data samples with potentially interesting events are recorded with high efficiency and good quality. This paper gives an overview of the HLT and focuses on its commissioning using cosmic rays. The selection of triggers that were deployed is presented and the online grouping of triggered events into streams and primary datasets is discussed. Tools for online and offline data quality monitoring for the HLT are described, and the operational performance of the muon HLT algorithms is reviewed. The average time taken for the HLT selection and its dependence on detector and operating conditions are presented. The HLT performed reliably and helped provide a large dataset. This dataset has proven to be invaluable for understanding the performance of the trigger and the CMS experiment as a whole.

  19. Global tracker for the ALICE high level trigger

    International Nuclear Information System (INIS)

    Vik, Thomas

    2006-01-01

    This thesis deals with two main topics. The first is the implementation and testing of a Kalman filter algorithm in the HLT (High Level Trigger) reconstruction code, which performs the global tracking in the HLT, i.e. the merging of tracklets and hits from the different sub-detectors of the central barrel. The second is a trigger mode of the HLT that uses the global tracking of particles through the TRD (Transition Radiation Detector), the TPC (Time Projection Chamber) and the ITS (Inner Tracking System): the dielectron trigger. Global tracking: the Kalman filter algorithm has been introduced into the HLT tracking scheme. (Author)
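    The tracklet-merging role of the Kalman filter can be illustrated with a toy one-dimensional update: each new measurement (a hit or tracklet parameter from another sub-detector) is fused with the running estimate, weighted by inverse variance. This is a minimal sketch of the general technique only, not the actual HLT tracker code, and the 1-D state is a stand-in for the full track-parameter vector.

```python
def kalman_update(state, variance, measurement, meas_variance):
    """One scalar Kalman filter update: fuse the current estimate
    with a new measurement, each weighted by its uncertainty."""
    gain = variance / (variance + meas_variance)
    new_state = state + gain * (measurement - state)
    new_variance = (1.0 - gain) * variance
    return new_state, new_variance

def merge_tracklets(measurements, initial_state=0.0, initial_variance=1e6):
    """Sequentially fold tracklet measurements, given as
    (value, variance) pairs from different sub-detectors,
    into one global estimate. The large initial variance makes
    the first measurement dominate, as in track seeding."""
    state, variance = initial_state, initial_variance
    for value, var in measurements:
        state, variance = kalman_update(state, variance, value, var)
    return state, variance
```

    With equal measurement variances the merged state converges toward the mean of the measurements, while the merged variance shrinks with each update.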

  20. Transmission line transformer for reliable and low-jitter triggering of a railgap switch.

    Science.gov (United States)

    Verma, Rishi; Mishra, Ekansh; Sagar, Karuna; Meena, Manraj; Shyam, Anurag

    2014-09-01

    The performance of a railgap switch critically relies upon multichannel breakdown between the extended electrodes (rails) in order to ensure distributed current transfer along the electrode length and to minimize the switch inductance. The initiation of several simultaneous arc channels along the switch length depends on the gap triggering technique and on the rate at which the electric field changes within the gap. This paper presents the design, construction, and output characteristics of a coaxial-cable-based three-stage transmission line transformer (TLT) that is capable of initiating multichannel breakdown in a high voltage, low inductance railgap switch. In each stage three identical lengths of URM67 coaxial cable are used in parallel, wound in separate cassettes to enhance the isolation of the transformer output from the input. The cascaded output impedance of the TLT is ~50 Ω. Along with multichannel formation over the complete length of the electrode rails, a significant reduction in jitter (≤2 ns) and conduction delay (≤60 ns) has been observed with the large amplitude (~80 kV), high dV/dt (~6 kV/ns) pulse produced by the indigenously developed TLT-based trigger generator. The superior performance of the TLT over a conventional pulse transformer for railgap triggering has been demonstrated experimentally.
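    The quoted ~50 Ω cascaded impedance follows directly from the stage geometry, assuming (as is typical for URM67 coax) a ~50 Ω characteristic impedance per cable: the parallel cables in each stage divide the impedance, and stacking the stages in series at the output multiplies it back. A small arithmetic sketch:

```python
# TLT impedance scaling, assuming URM67 coax has a ~50-ohm
# characteristic impedance (typical catalogue value, not from the paper).
Z_cable = 50.0       # ohms, characteristic impedance of one cable
n_stages = 3         # stages stacked in series at the output
m_parallel = 3       # identical cables in parallel per stage

Z_in = Z_cable / m_parallel               # input impedance per stage (~16.7 ohm)
Z_out = n_stages * Z_cable / m_parallel   # cascaded output impedance
voltage_gain = n_stages                   # ideal (lossless) voltage multiplication

print(f"Z_out = {Z_out:.0f} ohm, ideal voltage gain x{voltage_gain}")
```

    With equal stage and cable counts the output impedance simply returns to the single-cable value, which is why a 3×3 arrangement of 50 Ω cable yields the ~50 Ω quoted above.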

  1. Exact reliability quantification of highly reliable systems with maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Bris, Radim, E-mail: radim.bris@vsb.c [VSB-Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Department of Applied Mathematics, 17. listopadu 15, 70833 Ostrava-Poruba (Czech Republic)

    2010-12-15

    When a system is composed of highly reliable elements, exact reliability quantification may be problematic because computer accuracy is limited. Inaccuracy can arise in different ways: for example, an error may be made when subtracting two numbers that are very close to each other, or when summing many numbers of very different magnitude. The basic objective of this paper is to find a procedure that eliminates the errors made by a PC when calculations close to the error limit are executed. The highly reliable system is represented by a directed acyclic graph composed of terminal nodes (the highly reliable input elements), internal nodes representing subsystems, and edges that bind these nodes together. Three admissible unavailability models of terminal nodes are introduced, covering both corrective and preventive maintenance. The algorithm for exact unavailability calculation of terminal nodes is implemented in MATLAB, a high-performance language for technical computing. The system unavailability quantification procedure applied to the graph structure, which considers both independent and dependent (i.e. repeatedly occurring) terminal nodes, is based on a combinatorial principle. This principle requires the summation of many very different non-negative numbers, which may itself be a source of inaccuracy. For this reason another algorithm, for the exact summation of such numbers, is designed in the paper. The summation procedure exploits a special number system with base 2^32. The computational efficiency of the new methodology is compared with advanced simulation software, and various calculations on systems from the literature are performed to demonstrate its merits.
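    The inaccuracy the paper targets is easy to reproduce: naive floating-point accumulation silently drops terms that are much smaller than the running sum. The sketch below contrasts naive summation with two error-free alternatives, exact rational arithmetic and `math.fsum`, which stand in here for the paper's base-2^32 scheme rather than reproducing it.

```python
from fractions import Fraction
import math

# One large term plus many tiny ones: the classic failure mode when
# summing non-negative numbers of very different magnitude.
values = [1.0] + [1e-18] * 1000

naive = 0.0
for v in values:
    naive += v                      # each 1e-18 vanishes against 1.0

# Exact rational accumulation (stand-in for the paper's base-2^32 scheme).
exact = float(sum(Fraction(v) for v in values))

fsum = math.fsum(values)            # correctly rounded float summation

print(naive, exact, fsum)           # naive loses the tiny terms entirely
```

    The naive loop returns exactly 1.0, while both exact methods retain the ~1e-15 contribution of the small terms; on millions of terms of mixed magnitude the discrepancy can dominate an unavailability estimate.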

  2. Dedicated Trigger for Highly Ionising Particles at ATLAS

    CERN Document Server

    Katre, Akshay; The ATLAS collaboration

    2015-01-01

    In 2012, a novel strategy was designed to detect signatures of Highly Ionising Particles (HIPs) such as magnetic monopoles, dyons or Q-balls with the ATLAS trigger system. With proton-proton collisions at a centre of mass energy of 8 TeV, the trigger was designed with unique properties as a tracker for HIPs. It uses only the Transition Radiation Tracker (TRT) system, applying an algorithm distinct from the standard tracking ones. The unique high threshold readout capability of the TRT is exploited in the detector region where HIPs are sought: in particular, the number and the fraction of TRT high threshold hits are used to distinguish HIPs from background processes. Unlike trigger algorithms previously used for such searches, the trigger requires a significantly lower energy deposition in the electromagnetic calorimeters as a seed, and is thus capable of probing a large range of HIP masses and charges. We will give a description of the algorithms for this newly developed trigger for HIP searches...

  3. The ATLAS trigger high-level trigger commissioning and operation during early data taking

    CERN Document Server

    Goncalo, R

    2008-01-01

    The ATLAS experiment is one of the two general-purpose experiments due to start operation soon at the Large Hadron Collider (LHC). The LHC will collide protons at a centre of mass energy of 14 TeV, with a bunch-crossing rate of 40 MHz. The ATLAS three-level trigger will reduce this input rate to match the foreseen offline storage capability of 100-200 Hz. After the Level 1 trigger, which is implemented in custom hardware, the High-Level Trigger (HLT) further reduces the rate from up to 100 kHz to the offline storage rate while retaining the most interesting physics. The HLT is implemented in software running in commercially available computer farms and consists of Level 2 and the Event Filter. To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection. Data produced during LHC commissioning will be vital for calibrating and aligning sub-detectors, as well as for testing the ATLAS trigger and setting up t...

  4. The ATLAS High Level Trigger Steering Framework and the Trigger Configuration System.

    CERN Document Server

    Perez Cavalcanti, Tiago; The ATLAS collaboration

    2011-01-01

    The ATLAS detector system installed in the Large Hadron Collider (LHC) at CERN is designed to study proton-proton and nucleus-nucleus collisions with a maximum centre of mass energy of 14 TeV at a bunch collision rate of 40 MHz. In March 2010 the four LHC experiments saw the first proton-proton collisions at 7 TeV. Within the same year a collision rate of nearly 10 MHz is expected. At ATLAS, events of potential physics interest are selected by a three-level trigger system, with a final recording rate of about 200 Hz. The first level (L1) is implemented in custom hardware; the two levels of the high level trigger (HLT) are software triggers, running on large farms of standard computers and network devices. Within the ATLAS physics program more than 500 trigger signatures are defined. The HLT tests each signature on each L1-accepted event; the test outcome is recorded for later analysis. The HLT-Steering is responsible for this. It foremost ensures the independent test of each signature, guaranteeing u...

  5. High-level trigger system for the LHC ALICE experiment

    CERN Document Server

    Bramm, R; Lien, J A; Lindenstruth, V; Loizides, C; Röhrich, D; Skaali, B; Steinbeck, T M; Stock, Reinhard; Ullaland, K; Vestbø, A S; Wiebalck, A

    2003-01-01

    The central detectors of the ALICE experiment at LHC will produce a data size of up to 75 MB/event at an event rate of less than ≈200 Hz, resulting in a data rate of ~15 GB/s. Online processing of the data is necessary in order to select interesting (sub)events ("High Level Trigger"), or to compress data efficiently by modeling techniques. Processing this data requires a massive parallel computing system (High Level Trigger System). The system will consist of a farm of clustered SMP nodes based on off-the-shelf PCs connected with a high bandwidth low latency network.

  6. Contribution to high voltage matrix switches reliability

    International Nuclear Information System (INIS)

    Lausenaz, Yvan

    2000-01-01

    Nowadays, the requirements on power electronic equipment are demanding in terms of performance, quality and reliability, while costs have to be reduced in order to satisfy market rules. To provide low cost, reliability and performance, many standard mass-produced components are developed. The construction of specific products can then be approached in two different ways: on the one hand, one can produce dedicated components, with attendant delays, cost overruns and possible quality and reliability problems; on the other hand, one can use standard components in adapted topologies. The CEA of Pierrelatte has adopted the latter approach for the design of its high voltage pulsed power converters. The technique consists in using standard components and associating them in series and in parallel. The matrix constitutes a high voltage macro-switch in which the electrical parameters are distributed among the synchronized components. This study deals with the reliability of these structures and brings out the high-reliability aspect of MOSFET matrix associations. Thanks to several homemade test facilities, we obtained a large amount of data concerning the components we use. Understanding of the defect propagation mechanisms in matrix structures has allowed us to establish the necessity of a robust drive system, adapted clamping voltage protection, and careful geometrical construction. All these reliability considerations in matrix associations have notably allowed the construction of a new matrix structure combining all the solutions that ensure reliability. Reliable and robust, this product has already reached the industrial stage. (author) [fr

  7. Development of a highly reliable CRT processor

    International Nuclear Information System (INIS)

    Shimizu, Tomoya; Saiki, Akira; Hirai, Kenji; Jota, Masayoshi; Fujii, Mikiya

    1996-01-01

    Although CRT processors have been employed by the main control board to reduce the operator's workload during monitoring, the control systems are still operated by hardware switches. For further advancement, direct controller operation through a display device is expected. A CRT processor providing direct controller operation must be as reliable as the hardware switches are. The authors are developing a new type of highly reliable CRT processor that enables direct controller operations. In this paper, we discuss the design principles behind a highly reliable CRT processor. The principles are defined by studies of software reliability and of the functional reliability of the monitoring and operation systems. The functional configuration of an advanced CRT processor is also addressed. (author)

  8. Dedicated Trigger for Highly Ionising Particles at ATLAS

    CERN Document Server

    Katre, Akshay; The ATLAS collaboration

    2015-01-01

    In 2012, a novel strategy was designed to detect signatures of Highly Ionising Particles (HIPs) such as magnetic monopoles, dyons or Q-balls with ATLAS. A dedicated trigger was developed and deployed for proton-proton collisions at a centre of mass energy of 8 TeV. It uses the Transition Radiation Tracker (TRT) system, applying an algorithm distinct from standard tracking ones. The high threshold (HT) readout capability of the TRT is used to distinguish HIPs from other background processes. The trigger requires significantly lower energy depositions in the electromagnetic calorimeters and is thereby capable of probing a larger range of HIP masses and charges. A description of the algorithm for this newly developed trigger is presented, along with a comparative study of its performance during the 2012 data-taking period with respect to previous efforts.
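    The HT-based discrimination described above can be sketched as a simple selection on the number and fraction of high-threshold hits along a TRT road. The hit counts and cut values below are hypothetical placeholders for illustration, not ATLAS trigger settings.

```python
# Illustrative-only sketch of an HT-hit discriminant; the thresholds
# (min_ht_hits, min_ht_fraction) are hypothetical, not ATLAS values.
def passes_hip_trigger(n_ht_hits, n_total_hits,
                       min_ht_hits=20, min_ht_fraction=0.5):
    """Accept a TRT road if enough of its hits fired the high-threshold readout."""
    if n_total_hits == 0:
        return False
    ht_fraction = n_ht_hits / n_total_hits
    return n_ht_hits >= min_ht_hits and ht_fraction >= min_ht_fraction

print(passes_hip_trigger(28, 35))   # HIP-like road: many HT hits -> accepted
print(passes_hip_trigger(5, 35))    # background-like road -> rejected
```

    Requiring both an absolute count and a fraction guards against sparse roads where a few noise hits could fake a high HT fraction.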

  9. The ATLAS high level trigger region of interest builder

    International Nuclear Information System (INIS)

    Blair, R.; Dawson, J.; Drake, G.; Haberichter, W.; Schlereth, J.; Zhang, J.; Ermoline, Y.; Pope, B.; Aboline, M.; High Energy Physics; Michigan State Univ.

    2008-01-01

    This article describes the design, testing and production of the ATLAS Region of Interest Builder (RoIB). This device acts as an interface between the Level 1 trigger and the high level trigger (HLT) farm for the ATLAS LHC detector. It distributes all of the Level 1 data for a subset of events to a small number of (16 or less) individual commodity processors. These processors in turn provide this information to the HLT. This allows the HLT to use the Level 1 information to narrow data requests to areas of the detector where Level 1 has identified interesting objects

  10. Supervision of the ATLAS High Level Trigger System

    CERN Document Server

    Wheeler, S.; Meessen, C.; Qian, Z.; Touchard, F.; Negri, France A.; Zobernig, H.; CHEP 2003 Computing in High Energy Physics; Negri, France A.

    2003-01-01

    The ATLAS High Level Trigger (HLT) system provides software-based event selection after the initial LVL1 hardware trigger. It is composed of two stages, the LVL2 trigger and the Event Filter. The HLT is implemented as software tasks running on large processor farms. An essential part of the HLT is the supervision system, which is responsible for configuring, coordinating, controlling and monitoring the many hundreds of processes running in the HLT. A prototype implementation of the supervision system, using tools from the ATLAS Online Software system is presented. Results from scalability tests are also presented where the supervision system was shown to be capable of controlling over 1000 HLT processes running on 230 nodes.

  11. New high-energy phenomena in aircraft triggered lightning

    NARCIS (Netherlands)

    van Deursen, A.P.J.; Kochkin, P.; de Boer, A.; Bardet, M.; Boissin, J.F.

    2016-01-01

    High-energy phenomena associated with lightning were proposed in the twenties, observed for the first time in the sixties, and further investigated more recently by e.g. rocket-triggered lightning. Similarly, x-rays have been detected in meter-long discharges in air at standard atmospheric

  12. The ATLAS Data Acquisition and High Level Trigger system

    International Nuclear Information System (INIS)

    2016-01-01

    This paper describes the data acquisition and high level trigger system of the ATLAS experiment at the Large Hadron Collider at CERN, as deployed during Run 1. Data flow as well as control, configuration and monitoring aspects are addressed. An overview of the functionality of the system and of its performance is presented and design choices are discussed.

  13. Data analysis at the CMS level-1 trigger: migrating complex selection algorithms from offline analysis and high-level trigger to the trigger electronics

    CERN Document Server

    Wulz, Claudia

    2017-01-01

    With ever increasing luminosity at the LHC, optimum online data selection is becoming more and more important. While in the case of some experiments (LHCb and ALICE) this task is being completely transferred to computer farms, the others, ATLAS and CMS, will not be able to do this in the medium-term future for technological, detector-related reasons. Therefore, these experiments pursue the complementary approach of migrating more and more of the offline and high-level trigger intelligence into the trigger electronics. The presentation illustrates how the level-1 trigger of the CMS experiment, and in particular its concluding stage, the so-called "Global Trigger", takes up this challenge.

  14. The trigger supervisor: Managing triggering conditions in a high energy physics experiment

    International Nuclear Information System (INIS)

    Wadsworth, B.; Lanza, R.; LeVine, M.J.; Scheetz, R.A.; Videbaek, F.

    1987-01-01

    A trigger supervisor, implemented in VME-bus hardware, is described, which enables the host computer to dynamically control and monitor the trigger configuration for acquiring data from multiple detector partitions in a complex experiment

  15. Multi-threading in the ATLAS High-Level Trigger

    CERN Document Server

    Barton, Adam Edward; The ATLAS collaboration

    2018-01-01

    Over the next decade of LHC data-taking the instantaneous luminosity will reach up to 7.5 times the design value, with over 200 interactions per bunch-crossing, and will pose unprecedented challenges for the ATLAS trigger system. With the evolution of the CPU market to many-core systems, both the ATLAS offline reconstruction and High-Level Trigger (HLT) software will have to transition from a multi-process to a multithreaded processing paradigm in order not to exhaust the available physical memory of a typical compute node. The new multithreaded ATLAS software framework, AthenaMT, has been designed from the ground up to support both the offline and online use-cases with the aim to further harmonize the offline and trigger algorithms. The latter is crucial both in terms of maintenance effort and to guarantee the high trigger efficiency and rejection factors needed for the next two decades of data-taking. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum while...

  16. ATLAS Trigger and Data Acquisition Upgrades for High Luminosity LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00439268; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment at CERN is planning a second phase of upgrades to prepare for the "High Luminosity LHC", a 4th major run due to start in 2026. In order to deliver an order of magnitude more data than previous runs, 14 TeV protons will collide with an instantaneous luminosity of 7.5 × 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this extreme scenario is essential to realise the physics programme, it is a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. Initial upgrade designs for the trigger and data acquisition system are shown, including the real time low latency hardware trigger, hardware-based tracking, the high throughput data acquisition system and the commodity hardware and software-based data handling and event filtering. The motivation, overall architecture and expected ...

  17. ATLAS Trigger and Data Acquisition Upgrades for High Luminosity LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00421104; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment at CERN is planning a second phase of upgrades to prepare for the "High Luminosity LHC", a 4th major run due to start in 2026. In order to deliver an order of magnitude more data than previous runs, 14 TeV protons will collide with an instantaneous luminosity of $7.5 \times 10^{34} cm^{-2}s^{-1}$, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this extreme scenario is essential to realise the physics programme, it is a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. Initial upgrade designs for the trigger and data acquisition system are shown, including the real time low latency hardware trigger, hardware-based tracking, the high throughput data acquisition system and the commodity hardware and software-based data handling and event filtering. The motivation, overall architecture an...

  18. ATLAS Trigger and Data Acquisition Upgrades for High Luminosity LHC

    CERN Document Server

    George, Simon; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment at CERN is planning a second phase of upgrades to prepare for the "High Luminosity LHC", a 4th major run due to start in 2026. In order to deliver an order of magnitude more data than previous runs, 14 TeV protons will collide with an instantaneous luminosity of 7.5 × 10^{34} cm^{−2}s^{−1}, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this extreme scenario is essential to realise the physics programme, it is a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. Initial upgrade designs for the trigger and data acquisition system are shown, including the real time low latency hardware trigger, hardware-based tracking, the high throughput data acquisition system and the commodity hardware and software-based data handling and event filtering. The motivation, overall architecture and ...

  19. ATLAS Trigger and Data Acquisition Upgrades for High Luminosity LHC

    CERN Document Server

    Balunas, William Keaton; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment at CERN is planning a second phase of upgrades to prepare for the "High Luminosity LHC", a 4th major run due to start in 2026. In order to deliver an order of magnitude more data than previous runs, 14 TeV protons will collide with an instantaneous luminosity of $7.5 × 10^{34}$ cm$^{−2}$s$^{−1}$, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this extreme scenario is essential to realise the physics programme, it is a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. Initial upgrade designs for the trigger and data acquisition system are shown, including the real time low latency hardware trigger, hardware-based tracking, the high throughput data acquisition system and the commodity hardware and software-based data handling and event filtering. The motivation, overall architectur...

  20. Delivering high performance BWR fuel reliably

    International Nuclear Information System (INIS)

    Schardt, J.F.

    1998-01-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel, which can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  1. FPGA based compute nodes for high level triggering in PANDA

    International Nuclear Information System (INIS)

    Kuehn, W; Gilardi, C; Kirschner, D; Lang, J; Lange, S; Liu, M; Perez, T; Yang, S; Schmitt, L; Jin, D; Li, L; Liu, Z; Lu, Y; Wang, Q; Wei, S; Xu, H; Zhao, D; Korcyl, K; Otwinowski, J T; Salabura, P

    2008-01-01

    PANDA is a new universal detector for antiproton physics at the HESR facility at FAIR/GSI. The PANDA data acquisition system has to handle interaction rates of the order of 10^7/s and data rates of several 100 Gb/s. FPGA based compute nodes with multi-Gb/s bandwidth capability using the ATCA architecture are designed to handle tasks such as event building, feature extraction and high level trigger processing. Data connectivity is provided via optical links as well as multiple Gb Ethernet ports. The boards will support trigger algorithms such as pattern recognition for RICH detectors, EM shower analysis, fast tracking algorithms and global event characterization. Besides VHDL, high level C-like hardware description languages will be considered to implement the firmware

  2. High voltage switch triggered by a laser-photocathode subsystem

    Science.gov (United States)

    Chen, Ping; Lundquist, Martin L.; Yu, David U. L.

    2013-01-08

    A spark gap switch for controlling the output of a high voltage pulse from a high voltage source, for example, a capacitor bank or a pulse forming network, to an external load such as a high gradient electron gun, laser, pulsed power accelerator or wide band radar. The combination of a UV laser and a high vacuum quartz cell, in which a photocathode and an anode are installed, is utilized as triggering devices to switch the spark gap from a non-conducting state to a conducting state with low delay and low jitter.

  3. Using the CMS high level trigger as a cloud resource

    International Nuclear Information System (INIS)

    Colling, David; Huffman, Adam; Bauer, Daniela; McCrae, Alison; Cinquilli, Mattia; Gowdy, Stephen; Coarasa, Jose Antonio; Ozga, Wojciech; Chaze, Olivier; Lahiff, Andrew; Grandi, Claudio; Tiradani, Anthony; Sgaravatto, Massimo

    2014-01-01

    The CMS High Level Trigger is a compute farm of more than 10,000 cores. During data taking this resource is heavily used and is an integral part of the experiment's triggering system. However, outside of data taking periods this resource is largely unused. We describe why CMS wants to use the HLT as a cloud resource (outside of data taking periods) and how this has been achieved. In doing this we have turned a single-use cluster into an agile resource for CMS production computing. While we are able to use the HLT as a production cloud resource, there is still considerable further work that CMS needs to carry out before this resource can be used with the desired agility. This report, therefore, represents a snapshot of this activity at the time of CHEP 2013.

  4. The ALICE High Level Trigger: status and plans

    CERN Document Server

    Krzewicki, Mikolaj; Gorbunov, Sergey; Breitner, Timo; Lehrbach, Johannes; Lindenstruth, Volker; Berzano, Dario

    2015-01-01

    The ALICE High Level Trigger (HLT) is an online reconstruction, triggering and data compression system used in the ALICE experiment at CERN. Unique among the LHC experiments, it extensively uses modern coprocessor technologies like general purpose graphic processing units (GPGPU) and field programmable gate arrays (FPGA) in the data flow. Realtime data compression is performed using a cluster finder algorithm implemented on FPGA boards. These data, instead of raw clusters, are used in the subsequent processing and storage, resulting in a compression factor of around 4. Track finding is performed using a cellular automaton and a Kalman filter algorithm on GPGPU hardware, where both CUDA and OpenCL technologies can be used interchangeably. The ALICE upgrade requires further development of online concepts to include detector calibration and stronger data compression. The current HLT farm will be used as a test bed for online calibration and both synchronous and asynchronous processing frameworks already before t...

  5. Delivering high performance BWR fuel reliably

    Energy Technology Data Exchange (ETDEWEB)

    Schardt, J.F. [GE Nuclear Energy, Wilmington, NC (United States)

    1998-07-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel, which can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  6. Validity and Reliability of Clinical Examination in the Diagnosis of Myofascial Pain Syndrome and Myofascial Trigger Points in Upper Quarter Muscles.

    Science.gov (United States)

    Mayoral Del Moral, Orlando; Torres Lacomba, María; Russell, I Jon; Sánchez Méndez, Óscar; Sánchez Sánchez, Beatriz

    2017-12-15

    To determine whether two independent examiners can agree on a diagnosis of myofascial pain syndrome (MPS). To evaluate interexaminer reliability in identifying myofascial trigger points in upper quarter muscles. To evaluate the reliability of clinical diagnostic criteria for the diagnosis of MPS. To evaluate the validity of clinical diagnostic criteria for the diagnosis of MPS. Validity and reliability study. Provincial Hospital. Toledo, Spain. Twenty myofascial pain syndrome patients and 20 healthy, normal control subjects, enrolled by a trained and experienced examiner. Ten bilateral muscles from the upper quarter were evaluated by two experienced examiners. The second examiner was blinded to the diagnosis group. The MPS diagnosis required at least one muscle to have an active myofascial trigger point. Three to four days separated the two examinations. The primary outcome measure was the frequency with which the two examiners agreed on the classification of the subjects as patients or as healthy controls. The kappa statistic (K) was used to determine the level of agreement between both examinations, interpreted as very good (0.81-1.00), good (0.61-0.80), moderate (0.41-0.60), fair (0.21-0.40), or poor (≤0.20). Interexaminer reliability for identifying subjects with MPS was very good (K = 1.0). Interexaminer reliability for identifying muscles leading to a diagnosis of MPS was also very good (K = 0.81). Sensitivity and specificity showed high values for most examination tests in all muscles, which confirms the validity of clinical diagnostic criteria in the diagnosis of MPS. Interrater reliability between two expert examiners identifying subjects with MPS involving upper quarter muscles exhibited substantial agreement. These results suggest that clinical criteria can be valid and reliable in the diagnosis of this condition. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
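    The kappa statistic used above measures agreement beyond chance between the two examiners. A minimal sketch for the two-category (patient/control) case follows, with illustrative counts rather than the study's data.

```python
# Cohen's kappa for two examiners classifying subjects into two
# categories (e.g. MPS patient vs. healthy control). Counts are
# illustrative, not taken from the study.
def cohens_kappa(a_yes_b_yes, a_yes_b_no, a_no_b_yes, a_no_b_no):
    n = a_yes_b_yes + a_yes_b_no + a_no_b_yes + a_no_b_no
    p_observed = (a_yes_b_yes + a_no_b_no) / n
    # Chance agreement from each examiner's marginal "yes" rate.
    p_a_yes = (a_yes_b_yes + a_yes_b_no) / n
    p_b_yes = (a_yes_b_yes + a_no_b_yes) / n
    p_chance = p_a_yes * p_b_yes + (1 - p_a_yes) * (1 - p_b_yes)
    return (p_observed - p_chance) / (1 - p_chance)

# Perfect agreement on 20 patients and 20 controls gives kappa = 1.0,
# matching the "very good" band (0.81-1.00) quoted above.
print(cohens_kappa(20, 0, 0, 20))
```

    Kappa of 0 means agreement no better than chance; the interpretation bands in the abstract (poor through very good) partition the 0-1 range.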

  7. High level trigger system for the ALICE experiment

    International Nuclear Information System (INIS)

    Frankenfeld, U.; Roehrich, D.; Ullaland, K.; Vestabo, A.; Helstrup, H.; Lien, J.; Lindenstruth, V.; Schulz, M.; Steinbeck, T.; Wiebalck, A.; Skaali, B.

    2001-01-01

    The ALICE experiment at the Large Hadron Collider (LHC) at CERN will detect up to 20,000 particles in a single Pb-Pb event resulting in a data rate of ∼75 MByte/event. The event rate is limited by the bandwidth of the data storage system. Higher rates are possible by selecting interesting events and subevents (High Level trigger) or compressing the data efficiently with modeling techniques. Both require a fast parallel pattern recognition. One possible solution to process the detector data at such rates is a farm of clustered SMP nodes, based on off-the-shelf PCs, and connected by a high bandwidth, low latency network
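    The quoted input rate is simple arithmetic on the event size and event rate; a one-line check, treating 200 Hz as the nominal maximum:

```python
# Arithmetic behind the quoted ALICE numbers: ~75 MB/event at up to
# ~200 Hz gives the ~15 GB/s input the High Level Trigger must reduce.
event_size_mb = 75
event_rate_hz = 200
data_rate_gb_s = event_size_mb * event_rate_hz / 1000  # MB/s -> GB/s
print(data_rate_gb_s)  # 15.0
```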

  8. The CMS High Level Trigger System: Experience and Future Development

    CERN Document Server

    Bauer, Gerry; Bowen, Matthew; Branson, James G; Bukowiec, Sebastian; Cittolin, Sergio; Coarasa, J A; Deldicque, Christian; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Flossdorf, Alexander; Gigi, Dominique; Glege, Frank; Gomez-Reino, R; Hartl, Christian; Hegeman, Jeroen; Holzner, André; Y L Hwong; Masetti, Lorenzo; Meijers, Frans; Meschi, Emilio; Mommsen, R K; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph; Petrucci, Andrea; Pieri, Marco; Polese, Giovanni; Racz, Attila; Raginel, Olivier; Sakulin, Hannes; Sani, Matteo; Schwick, Christoph; Shpakov, Dennis; Simon, M; Spataru, A C; Sumorok, Konstanty

    2012-01-01

    The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT, its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.

  9. The Software Architecture of the LHCb High Level Trigger

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The LHCb experiment is a spectrometer dedicated to the study of heavy flavor at the LHC. The rate of proton-proton collisions at the LHC is 15 MHz, but disk space limitations mean that only 3 kHz can be written to tape for offline processing. For this reason the LHCb data acquisition system -- trigger -- plays a key role in selecting signal events and rejecting background. In contrast to previous experiments at hadron colliders, such as CDF or D0, the bulk of the LHCb trigger is implemented in software and deployed on a farm of 20k parallel processing nodes. This system, called the High Level Trigger (HLT), is responsible for reducing the rate from the maximum at which the detector can be read out, 1.1 MHz, to the 3 kHz which can be processed offline, and has 20 ms in which to process and accept/reject each event. In order to minimize systematic uncertainties, the HLT was designed from the outset to reuse the offline reconstruction and selection code, and is based around multiple independent and redunda...
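
The farm size quoted above follows directly from the input rate and per-event time budget: events arriving at 1.1 MHz that each occupy a processing slot for 20 ms imply the number of events in flight (Little's law). A minimal sketch, using only the figures in the abstract:

```python
# Little's-law sizing of a software-trigger farm: the number of events
# in flight equals input rate x time per event, which sets the minimum
# number of concurrent processing slots.
import math

def required_slots(input_rate_hz: float, time_per_event_ms: float) -> int:
    """Minimum concurrent processing slots for the given rate and latency."""
    return math.ceil(input_rate_hz * time_per_event_ms / 1000.0)

slots = required_slots(1.1e6, 20.0)
print(slots)  # 22000, consistent with the ~20k-node farm quoted above
```

This is only a lower bound: real farms add headroom for rate spikes and for the tails of the processing-time distribution.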

  10. A readout buffer prototype for ATLAS high-level triggers

    CERN Document Server

    Calvet, D; Huet, M; Le Dû, P; Mandjavidze, I D; Mur, M

    2001-01-01

    Readout buffers are critical components in the dataflow chain of the ATLAS trigger/data-acquisition system. At up to 75 kHz, after each Level-1 trigger accept signal, these devices receive and store digitized data from groups of front-end electronic channels. Several readout buffers are grouped to form a readout buffer complex that acts as a data server for the high-level trigger selection algorithms and for the final data-collection system. This paper describes a functional prototype of a readout buffer based on a custom-made PCI mezzanine card that is designed to accept input data at up to 160 MB/s, to store up to 8 MB of data, and to distribute data chunks at the desired request rate. We describe the hardware of the card, which is based on an Intel i960 processor and complex programmable logic devices. We present the integration of several of these cards in a readout buffer complex. We measure various performance figures and discuss to what extent these can fulfil ATLAS needs. (5 refs).
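
The 8 MB capacity and 160 MB/s input figures above fix how long the buffer can absorb data if draining stalls. A small sketch of that headroom calculation, assuming a simple constant-rate fill/drain model (not the prototype's actual flow control):

```python
# Headroom of a readout buffer under a constant-rate fill/drain model.
# Capacity and fill rate are taken from the abstract; the drain rates
# tried below are illustrative assumptions.

def buffer_time_to_full(capacity_mb: float, fill_mb_s: float,
                        drain_mb_s: float) -> float:
    """Seconds until the buffer overflows; infinite if draining keeps up."""
    net = fill_mb_s - drain_mb_s
    return float("inf") if net <= 0 else capacity_mb / net

# Worst case: full 160 MB/s input with the HLT requesting nothing.
print(f"{buffer_time_to_full(8.0, 160.0, 0.0) * 1000:.0f} ms")  # 50 ms
```

Fifty milliseconds of headroom at full input rate is why the complex must sustain the high-level trigger's request rate: any longer stall backpressures the Level-1 accept stream.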

  11. Column Grid Array Rework for High Reliability

    Science.gov (United States)

    Mehta, Atul C.; Bodie, Charles C.

    2008-01-01

    Due to requirements for reduced size and weight, the use of grid array packages in space applications has become commonplace. To meet the requirements of high reliability and a high number of I/Os, ceramic column grid array (CCGA) packages were selected for major electronic components used in the next Mars Rover mission (specifically, high-density Field Programmable Gate Arrays). The probability of removal and replacement of these devices on the actual flight printed wiring board assemblies is deemed to be very high because of last-minute discoveries in final test which will dictate changes in the firmware. The questions and challenges presented to the manufacturing organizations engaged in the production of high-reliability electronic assemblies are: Is the reliability of the PWBA adversely affected by rework (removal and replacement) of the CGA package? And how many times can the same board be reworked without destroying a pad or degrading the lifetime of the assembly? To answer these questions, the most complex printed wiring board assembly used by the project was chosen as the test vehicle; the PWB was modified to provide a daisy chain pattern, and a number of bare PWBs were acquired to this modified design. Non-functional 624-pin CGA packages with internal daisy chains matching the pattern on the PWB were procured. The combination of the modified PWB and the daisy-chained packages enables continuity measurements of every soldered contact during subsequent testing and thermal cycling. Several test vehicle boards were assembled, reworked, and then thermal cycled to assess the reliability of the solder joints and board material, including pads and traces near the CGA. The details of the rework process and the results of thermal cycling are presented in this paper.

  12. High pressure, high current, low inductance, high reliability sealed terminals

    Science.gov (United States)

    Hsu, John S [Oak Ridge, TN; McKeever, John W [Oak Ridge, TN

    2010-03-23

    The invention is a terminal assembly having a casing with at least one delivery tapered-cone conductor and at least one return tapered-cone conductor routed therethrough. The delivery and return tapered-cone conductors are electrically isolated from each other and positioned in the annuluses of ordered concentric cones at an off-normal angle. The tapered-cone conductors can serve as AC phase conductors and DC link conductors. The center core has at least one service conduit of gate signal leads, diagnostic signal wires, and refrigerant tubing routed therethrough. A seal material is in direct contact with the casing inner surface, the tapered-cone conductors, and the service conduits, thereby hermetically filling the interstitial space in the casing interior core and center core. The assembly provides simultaneous high-current, high-pressure, low-inductance, and high-reliability service.

  13. Resource utilization by the ATLAS High Level Trigger during 2010 and 2011 LHC running

    CERN Document Server

    Ospanov, R

    2012-01-01

    In 2010 and 2011, the ATLAS experiment successfully recorded data from LHC collisions with high efficiency and excellent data quality. ATLAS employs a three-level trigger system to select events of interest for physics analyses and detector commissioning. The trigger system consists of a custom-designed hardware trigger at level-1 and software algorithms at the two higher levels. The trigger selection is defined by a trigger menu, which consists of more than 300 individual trigger signatures, such as electrons, muons, particle jets, etc. Each execution of a trigger signature incurs computing and data storage costs. The composition of the deployed trigger menu depends on the instantaneous LHC luminosity, the experiment's goals for the recorded data, and the limits imposed by the available computing power, network bandwidth, and storage space. This paper describes a trigger monitoring framework for assigning computing costs to individual trigger signatures and to trigger menus as a whole. These costs can be extrapolat...
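
A deliberately simple cost model in the spirit of the framework described above charges each signature its execution rate times its mean CPU time, sums over the menu, and scales with luminosity. The signature names, rates, times, and the linear scaling are all invented for illustration; the real framework measures these quantities per signature.

```python
# Toy trigger-menu cost model: per-signature CPU demand and a naive
# linear extrapolation with luminosity. All numbers are hypothetical.

signatures = {              # name: (execution rate [Hz], mean CPU time [ms])
    "e24_medium": (900.0, 45.0),
    "mu20":       (600.0, 30.0),
    "j150":       (1200.0, 15.0),
}

def menu_cpu_cost(sigs: dict) -> float:
    """Total CPU demand in fully occupied cores (rate x time)."""
    return sum(rate * t_ms / 1000.0 for rate, t_ms in sigs.values())

def extrapolate(cost: float, lumi_now: float, lumi_target: float) -> float:
    """Naive linear scaling; real signature rates can grow faster."""
    return cost * lumi_target / lumi_now

cores = menu_cpu_cost(signatures)
print(f"{cores:.1f} cores now, {extrapolate(cores, 1.0, 2.0):.1f} at 2x lumi")
```

In practice many signature rates grow faster than linearly with luminosity (pile-up effects), which is precisely why measured per-signature costs, rather than a single scale factor, are needed.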

  14. FPGA Co-processor for the ALICE High Level Trigger

    CERN Document Server

    Grastveit, G.; Lindenstruth, V.; Loizides, C.; Roehrich, D.; Skaali, B.; Steinbeck, T.; Stock, R.; Tilsner, H.; Ullaland, K.; Vestbo, A.; Vik, T.

    2003-01-01

    The High Level Trigger (HLT) of the ALICE experiment requires massive parallel computing. One of the main tasks of the HLT system is two-dimensional cluster finding on raw data of the Time Projection Chamber (TPC), which is the main data source of ALICE. To reduce the number of computing nodes needed in the HLT farm, FPGAs, which are an intrinsic part of the system, will be utilized for this task. VHDL code implementing the Fast Cluster Finder algorithm has been written, a testbed for functional verification of the code has been developed, and the code has been synthesized.
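
To make the task concrete, here is a minimal software analogue of 2D cluster finding on a pad/time grid, assuming clusters are 4-connected groups of ADC values above threshold. The real FPGA Fast Cluster Finder is a pipelined streaming design; this flood-fill sketch only illustrates what "cluster finding" computes.

```python
# Toy 2D cluster finder: flood-fill over 4-connected cells above
# threshold. Illustrates the task, not the FPGA implementation.

def find_clusters(grid, threshold):
    """Return a list of clusters, each a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > threshold and not seen[r][c]:
                stack, cluster = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    cluster.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                clusters.append(cluster)
    return clusters

grid = [[0, 5, 6, 0],
        [0, 7, 0, 0],
        [0, 0, 0, 9]]
print(len(find_clusters(grid, threshold=1)))  # 2 clusters
```

An FPGA implementation avoids the random-access flood fill entirely, merging neighbors as samples stream in, which is what makes it so much cheaper than general-purpose CPU nodes for this task.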

  15. Highly reliable electro-hydraulic control system

    International Nuclear Information System (INIS)

    Mande, Morima; Hiyama, Hiroshi; Takahashi, Makoto

    1984-01-01

    The unscheduled shutdown of nuclear power stations disturbs the power system and strongly affects power generation cost by lowering the capacity factor; therefore, high reliability is required of the control systems of nuclear power stations. Toshiba Corp. has worked to improve the reliability of power station control systems, and this report describes the electro-hydraulic control system for the turbines of nuclear power stations. The main functions of the electro-hydraulic control system are the control of main steam pressure with steam regulation valves and turbine bypass valves, the control of turbine speed and load, the prevention of turbine overspeed, the protection of the turbines, and so on. The system is composed of pressure sensors and a speed sensor; the control board containing the electronic circuits for control computation and the protective sequence; the oil cylinders, servo valves, and opening detectors of the control valves; a high-pressure oil hydraulic machine and piping; the operating panel; and so on. The main features are the adoption of a triplicated intermediate-value selection method, redundant protection sensors with 2-out-of-3 trip logic, redundant power sources, and improvements to the reliability of the electronic circuit hardware and the oil hydraulic system. (Kako, I.)
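
The two redundancy schemes named above are easy to state precisely: intermediate-value selection takes the median of three sensor channels so one faulty channel is ignored, and 2-out-of-3 trip logic requires agreement of at least two channels before tripping. A minimal sketch with illustrative readings and trip level:

```python
# Intermediate-value selection and 2-out-of-3 trip voting, the two
# redundancy patterns described in the abstract. All values are
# illustrative, not plant settings.

def mid_value(a: float, b: float, c: float) -> float:
    """Intermediate-value selection: the median ignores one outlier."""
    return sorted((a, b, c))[1]

def two_out_of_three_trip(readings, trip_level: float) -> bool:
    """Trip only if at least two of three channels exceed the level."""
    return sum(r > trip_level for r in readings) >= 2

print(mid_value(6.89, 6.91, 9.99))                   # 6.91: outlier rejected
print(two_out_of_three_trip([7.2, 6.9, 7.3], 7.0))   # True
```

Both patterns trade hardware (triplicated sensors) for immunity to any single-channel failure, which is the point of the "multiplying" of sensors the abstract describes.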

  16. Energy/Reliability Trade-offs in Fault-Tolerant Event-Triggered Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Gan, Junhe; Gruian, Flavius; Pop, Paul

    2011-01-01

    task, such that transient faults are tolerated, the timing constraints of the application are satisfied, and the energy consumed is minimized. Tasks are scheduled using fixed-priority preemptive scheduling, while replication is used for recovery from multiple transient faults. Addressing energy...... and reliability simultaneously is especially challenging, since lowering the voltage to reduce the energy consumption has been shown to increase the transient fault rate. We presented a Tabu Search-based approach which uses an energy/reliability trade-off model to find reliable and schedulable implementations...

  17. Studies of ATM for ATLAS high-level triggers

    CERN Document Server

    Bystrický, J; Huet, M; Le Dû, P; Mandjavidze, I D

    2001-01-01

    This paper presents some of the conclusions of our studies on asynchronous transfer mode (ATM) and fast Ethernet in the ATLAS level-2 trigger pilot project. We describe the general concept and principles of our data-collection and event-building scheme, which could be transposed to various experiments in high-energy and nuclear physics. To validate the approach in view of ATLAS high-level triggers, we assembled a testbed composed of up to 48 computers linked by a 7.5-Gbit/s ATM switch. This modular switch is used as a single entity or is split into several smaller interconnected switches, which allows study of how to construct a large network from smaller units. Alternatively, the ATM network can be replaced by fast Ethernet. We detail the operation of the system and present a series of performance measurements made with event-building traffic patterns. We extrapolate these results to show how today's commercial networking components could be used to build a 1000-port network adequate for ATLAS needs. Lastly, we li...

  18. Very low pressure high power impulse triggered magnetron sputtering

    Science.gov (United States)

    Anders, Andre; Andersson, Joakim

    2013-10-29

    A method and apparatus are described for very-low-pressure, high-power magnetron sputtering of a coating onto a substrate. By the method of this invention, both the substrate and the coating target material are placed into an evacuable chamber, and the chamber is pumped to vacuum. Thereafter, a series of high-voltage impulses is applied to the target. Nearly simultaneously with each pulse, in one embodiment, a small cathodic arc source of the same material as the target is pulsed, triggering a plasma plume proximate to the surface of the target and thereby initiating the magnetron sputtering process. In another embodiment, the plasma plume is generated using a pulsed laser aimed to strike an ablation target material positioned near the magnetron target surface.

  19. The ATLAS High Level Trigger Infrastructure, Performance and Future Developments

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage event filter running on a farm of commodity PC hardware. Currently the system consists of about 850 multi-core processing nodes, which will be extended incrementally, following the increasing luminosity of the LHC, to about 2000 nodes, depending on the evolution of processor technology. Due to the complexity and similarity of the algorithms, a large fraction of the software is shared between the online and offline event reconstruction. The HLT infrastructure serves as the interface between the two domains and provides common services for the trigger algorithms. The consequences of this design choice will be discussed and experiences from the operation of the ATLAS HLT during cosmic ray data taking and first beam in 2008 will be presented. Since the event processing time at the HL...

  20. Operational experience with the ALICE High Level Trigger

    Science.gov (United States)

    Szostak, Artur

    2012-12-01

    The ALICE HLT is a dedicated real-time system for online event reconstruction and triggering. Its main goal is to reduce the raw data volume read from the detectors by an order of magnitude, to fit within the available data acquisition bandwidth. This is accomplished by a combination of data compression and triggering. When HLT is enabled, data is recorded only for events selected by HLT. The combination of both approaches allows for flexible data reduction strategies. Event reconstruction places a high computational load on HLT. Thus, a large dedicated computing cluster is required, comprising 248 machines, all interconnected with InfiniBand. Running a large system like HLT in production mode proves to be a challenge. During the 2010 pp and Pb-Pb data-taking period, many problems were experienced that led to a sub-optimal operational efficiency. Lessons were learned and certain crucial changes were made to the architecture and software in preparation for the 2011 Pb-Pb run, in which HLT had a vital role performing data compression for ALICE's largest detector, the TPC. An overview of the status of the HLT and experience from the 2010/2011 production runs are presented. Emphasis is given to the overall performance, showing an improved efficiency and stability in 2011 compared to 2010, attributed to the significant improvements made to the system. Further opportunities for improvement are identified and discussed.

  1. Operational experience with the ALICE High Level Trigger

    International Nuclear Information System (INIS)

    Szostak, Artur

    2012-01-01

    The ALICE HLT is a dedicated real-time system for online event reconstruction and triggering. Its main goal is to reduce the raw data volume read from the detectors by an order of magnitude, to fit within the available data acquisition bandwidth. This is accomplished by a combination of data compression and triggering. When HLT is enabled, data is recorded only for events selected by HLT. The combination of both approaches allows for flexible data reduction strategies. Event reconstruction places a high computational load on HLT. Thus, a large dedicated computing cluster is required, comprising 248 machines, all interconnected with InfiniBand. Running a large system like HLT in production mode proves to be a challenge. During the 2010 pp and Pb-Pb data-taking period, many problems were experienced that led to a sub-optimal operational efficiency. Lessons were learned and certain crucial changes were made to the architecture and software in preparation for the 2011 Pb-Pb run, in which HLT had a vital role performing data compression for ALICE's largest detector, the TPC. An overview of the status of the HLT and experience from the 2010/2011 production runs are presented. Emphasis is given to the overall performance, showing an improved efficiency and stability in 2011 compared to 2010, attributed to the significant improvements made to the system. Further opportunities for improvement are identified and discussed.

  2. Test-Retest Reliability of an Experienced Global Trigger Tool Review Team

    DEFF Research Database (Denmark)

    Bjørn, Brian; Anhøj, Jacob; Østergaard, Mette

    2018-01-01

    and review 2 and between period 1 and period 2. The increase was solely in category E, minor temporary harm. CONCLUSIONS: The very experienced GTT team could not reproduce harm rates found in earlier reviews. We conclude that GTT in its present form is not a reliable measure of harm rate over time....

  3. A novel high reliability CMOS SRAM cell

    Energy Technology Data Exchange (ETDEWEB)

    Xie Chengmin; Wang Zhongfang; Wu Longsheng; Liu Youbao, E-mail: hglnew@sina.com [Computer Research and Design Department, Xi' an Microelectronic Technique Institutes, Xi' an 710054 (China)

    2011-07-15

    A novel 8T single-event-upset (SEU) hardened and high static noise margin (SNM) SRAM cell is proposed. By adding one transistor in parallel with each access transistor, the drive capability of the pull-up PMOS is made greater than that of the conventional cell, and the read access transistors are made weaker than those of the conventional cell, so that the hold SNM, read SNM, and critical charge all increase greatly. The simulation results show that the critical charge is almost three times larger than that of the conventional 6T cell when the pull-up transistors are appropriately sized. The hold and read SNM of the new cell increase by 72% and 141.7%, respectively, compared to the 6T design, but the cell has a 54% area overhead and a read performance penalty. These features make the novel cell suitable for high-reliability applications such as aerospace and military systems. (semiconductor integrated circuits)

  4. A novel high reliability CMOS SRAM cell

    International Nuclear Information System (INIS)

    Xie Chengmin; Wang Zhongfang; Wu Longsheng; Liu Youbao

    2011-01-01

    A novel 8T single-event-upset (SEU) hardened and high static noise margin (SNM) SRAM cell is proposed. By adding one transistor in parallel with each access transistor, the drive capability of the pull-up PMOS is made greater than that of the conventional cell, and the read access transistors are made weaker than those of the conventional cell, so that the hold SNM, read SNM, and critical charge all increase greatly. The simulation results show that the critical charge is almost three times larger than that of the conventional 6T cell when the pull-up transistors are appropriately sized. The hold and read SNM of the new cell increase by 72% and 141.7%, respectively, compared to the 6T design, but the cell has a 54% area overhead and a read performance penalty. These features make the novel cell suitable for high-reliability applications such as aerospace and military systems. (semiconductor integrated circuits)

  5. UV laser triggering of high-voltage gas switches

    International Nuclear Information System (INIS)

    Woodworth, J.R.; Frost, C.A.; Green, T.A.

    1982-01-01

    Two different techniques are discussed for UV laser triggering of high-voltage gas switches, using a KrF laser (248 nm) to create an ionized channel through the dielectric gas in a spark gap. One technique uses the UV laser to induce breakdown in SF6. For this technique, we present data that demonstrate a 1-sigma jitter of ±150 ps for a 0.5-MV switch at 80% of its self-breakdown voltage, using a low-divergence KrF laser. The other scheme uses additives to the normal dielectric gas, such as tripropylamine, which are selected to undergo resonant two-step ionization in the UV laser field

  6. Multi-threading in the ATLAS High-Level Trigger

    CERN Document Server

    Barton, Adam Edward; The ATLAS collaboration

    2017-01-01

    Over the next decade of LHC data-taking, the instantaneous luminosity will reach up to 7.5 times the design value, with over 200 interactions per bunch-crossing, posing unprecedented challenges for the ATLAS trigger system. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum while retaining the key aspects of trigger functionality, including regional reconstruction and early event rejection. We report on the first experience of migrating trigger algorithms to this new framework and present the next steps towards a full implementation of the ATLAS trigger within AthenaMT.

  7. High reliability megawatt transformer/rectifier

    Science.gov (United States)

    Zwass, Samuel; Ashe, Harry; Peters, John W.

    1991-01-01

    The goal of the two-phase program is to develop the technology for, and to design and fabricate, ultralightweight, high-reliability DC-to-DC converters for space power applications. The converters will operate from a 5000 V DC source and deliver 1 MW of power at 100 kV DC. The power weight density goal is 0.1 kg/kW. The cycle-to-cycle voltage stability goal was ±1 percent RMS. The converter is to operate at an ambient temperature of -40 C with 16-minute power pulses and one-hour off times. The uniqueness of the Phase 1 design resides in the DC switching array, which operates the converter at 20 kHz using Hollotron plasma switches, along with a specially designed low-loss, low-leakage-inductance, lightweight high-voltage transformer. This approach considerably reduced the number of components in the converter, thereby increasing system reliability. To achieve an optimum transformer for this application, the design uses four 25 kV secondary windings to produce the 100 kV DC output, thus reducing the transformer leakage inductance and the AC voltage stresses. A specially designed insulation system improves the high-voltage dielectric withstand capability and reduces the insulation path thickness, thereby reducing component weight. Trade-off studies and tests conducted on scaled-down model circuits, using representative coil insulation paths, have verified the calculated transformer waveshape parameters and the insulation system safety. In Phase 1 of the program, a converter design approach was developed and a preliminary transformer design was completed. A fault control circuit was designed, and a thermal profile of the converter was also developed.

  8. Electronics and triggering challenges for the CMS High Granularity Calorimeter

    Science.gov (United States)

    Lobanov, A.

    2018-02-01

    The High Granularity Calorimeter (HGCAL), presently being designed by the CMS collaboration to replace the CMS endcap calorimeters for the High Luminosity phase of LHC, will feature six million channels distributed over 52 longitudinal layers. The requirements for the front-end electronics are extremely challenging, including high dynamic range (0.2 fC-10 pC), low noise (~2000 e- to be able to calibrate on single minimum ionising particles throughout the detector lifetime) and low power consumption (~20 mW/channel), as well as the need to select and transmit trigger information with a high granularity. Exploiting the intrinsic precision-timing capabilities of silicon sensors also requires careful design of the front-end electronics as well as the whole system, particularly clock distribution. The harsh radiation environment and requirement to keep the whole detector as dense as possible will require novel solutions to the on-detector electronics layout. Processing the data from the HGCAL imposes equally large challenges on the off-detector electronics, both for the hardware and incorporated algorithms. We present an overview of the complete electronics architecture, as well as the performance of prototype components and algorithms.

  9. Electronics and triggering challenges for the CMS High Granularity Calorimeter

    CERN Document Server

    Lobanov, Artur

    2017-01-01

    The High Granularity Calorimeter (HGCAL), presently being designed by the CMS collaboration to replace the CMS endcap calorimeters for the High Luminosity phase of LHC, will feature six million channels distributed over 52 longitudinal layers. The requirements for the front-end electronics are extremely challenging, including high dynamic range (0-10 pC), low noise (~2000e- to be able to calibrate on single minimum ionising particles throughout the detector lifetime) and low power consumption (~10mW/channel), as well as the need to select and transmit trigger information with a high granularity. Exploiting the intrinsic precision-timing capabilities of silicon sensors also requires careful design of the front-end electronics as well as the whole system, particularly clock distribution. The harsh radiation environment and requirement to keep the whole detector as dense as possible will require novel solutions to the on-detector electronics layout. Processing all the data from the HGCAL imposes equally large ch...

  10. Mechanical reliability of bulk high Tc superconductors

    International Nuclear Information System (INIS)

    Freiman, S.W.

    1990-01-01

    Most prospective applications for high-Tc superconductors in bulk form, e.g. magnets and motors, will require appreciable mechanical strength. Work at NIST [National Institute of Standards and Technology] has begun to address issues related to mechanical reliability. For example, recent studies on Ba-Y-Cu-O have shown that the intrinsic crack growth resistance, K_IC, of crystals of this material is even smaller than was first reported, less than that of window glass, and is sensitive to moisture. Processing conditions, particularly the sintering and annealing atmosphere, have been shown to have a major influence on microstructure and internal stresses in the material. Large internal stresses result from the tetragonal-to-orthorhombic phase transformation as well as from the thermal expansion anisotropy of the grains of the ceramic. Because stress relief is absent, microcracks form, which have a profound influence on strength.

  11. Learning Organizations in High Reliability Industries

    International Nuclear Information System (INIS)

    Schwalbe, D.; Wächter, C.

    2016-01-01

    Full text: Humans make mistakes. Sometimes we learn from them. In a high reliability organization we have to learn before an error leads to an incident (or even an accident). Therefore the “human factor” is most important, as most of the time the human is the last line of defense. The “human factor” is more than communication or leadership skills. In the end, it is the personal attitude. This attitude has to be safety minded, and it has to be self-reflected upon continuously. Moreover, feedback from others is urgently needed to improve one’s personal skills daily and to learn from one’s own experience as well as from that of others. (author)

  12. Low vs. high haemoglobin trigger for transfusion in vascular surgery

    DEFF Research Database (Denmark)

    Møller, A; Nielsen, H B; Wetterslev, J

    2017-01-01

    of the infrarenal aorta or infrainguinal arterial bypass surgery undergo a web-based randomisation to one of two groups: perioperative RBC transfusion triggered by hb ...-up of serious adverse events in the Danish National Patient Register within 90 days is pending. DISCUSSION: This trial is expected to determine whether a RBC transfusion triggered by hb

  13. A real-time high level trigger system for CALIFA

    Energy Technology Data Exchange (ETDEWEB)

    Gernhaeuser, Roman; Heiss, Benjamin; Klenze, Philipp; Remmels, Patrick; Winkel, Max [Physik Department, Technische Universitaet Muenchen (Germany)

    2016-07-01

    The CALIFA calorimeter with its about 2600 scintillator crystals is a key component of the R3B setup. For many experiments CALIFA will have to perform complex trigger decisions depending on the total energy deposition, γ multiplicities, or geometrical patterns, with minimal latency. This selection is an essential tool for the accurate preselection of relevant events and provides a significant data reduction. The challenge is to aggregate local trigger information from up to 200 readout modules. The trigger tree transport protocol (T3P) will use dedicated FPGA boards and bus systems to collect trigger information and perform hierarchical summations to ensure a trigger decision within 1 μs. The basic concept and implementation of T3P are presented, together with first tests on a prototype system.
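
Hierarchical summation keeps latency logarithmic in the number of modules: with k-to-1 aggregation per stage, 200 leaves need only a few stages. A rough estimate, where the 200-module count comes from the abstract but the fan-in and per-stage latency are assumed figures, not measured T3P numbers:

```python
# Rough latency estimate for a k-ary trigger-summation tree. The
# 200-module count is from the abstract; fan-in and per-stage latency
# are illustrative assumptions.
import math

def tree_latency_ns(n_leaves: int, fan_in: int, stage_ns: float) -> float:
    """Stages needed to reduce n_leaves k-to-1, times per-stage latency."""
    levels = math.ceil(math.log(n_leaves, fan_in))
    return levels * stage_ns

# 200 modules, 8-to-1 aggregation, assumed 150 ns per stage:
print(tree_latency_ns(200, 8, 150.0), "ns")  # 450.0 ns, inside the 1 us budget
```

The same calculation shows why a flat bus would fail: polling 200 modules serially at even 10 ns each already consumes 2 μs, twice the budget.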

  14. High reliability fuel in the US

    International Nuclear Information System (INIS)

    Neuhold, R.J.; Leggett, R.D.; Walters, L.C.; Matthews, R.B.

    1986-05-01

    The fuels development program of the United States for liquid metal reactors (LMRs) is described. The experience base, status, and future potential are discussed for the three systems - oxide, metal, and carbide - that have proved to have high reliability. Information is presented showing the burnup capability of the oxide fuel system in a large core, e.g. FFTF, to be 150 MWd/kgM with today's technology, with the potential for a capability as high as 300 MWd/kgM. Data provided for the metal fuel system show 8 at. % being routinely achieved with the EBR-II driver fuel, with good potential for extending this to 15 at. %, since special test pins have already exceeded this burnup level. The data included for the carbide fuel system are from pin and assembly irradiations in EBR-II and FFTF, respectively. Burnup to 12 at. % appears readily achievable, with burnups to 20 at. % having been demonstrated in a few pins. Efforts continue on all three systems, with the bulk of the activity on metal and oxide

  15. Systems reliability in high risk situations

    International Nuclear Information System (INIS)

    Hunns, D.M.

    1974-12-01

    A summary is given of five papers and the discussion of a seminar promoted by the newly-formed National Centre of Systems Reliability. The topics covered include hazard analysis, reliability assessment, and risk assessment in both nuclear and non-nuclear industries. (U.K.)

  16. Optically triggered high voltage switch network and method for switching a high voltage

    Science.gov (United States)

    El-Sharkawi, Mohamed A.; Andexler, George; Silberkleit, Lee I.

    1993-01-19

    An optically triggered solid state switch and method for switching a high voltage electrical current. A plurality of solid state switches (350) are connected in series for controlling electrical current flow between a compensation capacitor (112) and ground in a reactive power compensator (50, 50') that monitors the voltage and current flowing through each of three distribution lines (52a, 52b and 52c), which are supplying three-phase power to one or more inductive loads. An optical transmitter (100) controlled by the reactive power compensation system produces light pulses that are conveyed over optical fibers (102) to a switch driver (110') that includes a plurality of series connected optical trigger circuits (288). Each of the optical trigger circuits controls a pair of the solid state switches and includes a plurality of series connected resistors (294, 326, 330, and 334) that equalize or balance the potential across the plurality of trigger circuits. The trigger circuits are connected to one of the distribution lines through a trigger capacitor (340). In each switch driver, the light signals activate a phototransistor (300) so that an electrical current flows from one of the energy reservoir capacitors through a pulse transformer (306) in the trigger circuit, producing gate signals that turn on the pair of serially connected solid state switches (350).

  17. Optically triggered high voltage switch network and method for switching a high voltage

    Energy Technology Data Exchange (ETDEWEB)

    El-Sharkawi, Mohamed A. (Renton, WA); Andexler, George (Everett, WA); Silberkleit, Lee I. (Mountlake Terrace, WA)

    1993-01-19

    An optically triggered solid state switch and method for switching a high voltage electrical current. A plurality of solid state switches (350) are connected in series for controlling electrical current flow between a compensation capacitor (112) and ground in a reactive power compensator (50, 50') that monitors the voltage and current flowing through each of three distribution lines (52a, 52b and 52c), which are supplying three-phase power to one or more inductive loads. An optical transmitter (100) controlled by the reactive power compensation system produces light pulses that are conveyed over optical fibers (102) to a switch driver (110') that includes a plurality of series connected optical trigger circuits (288). Each of the optical trigger circuits controls a pair of the solid state switches and includes a plurality of series connected resistors (294, 326, 330, and 334) that equalize or balance the potential across the plurality of trigger circuits. The trigger circuits are connected to one of the distribution lines through a trigger capacitor (340). In each switch driver, the light signals activate a phototransistor (300) so that an electrical current flows from one of the energy reservoir capacitors through a pulse transformer (306) in the trigger circuit, producing gate signals that turn on the pair of serially connected solid state switches (350).

  18. A high reliability oxygen deficiency monitoring system

    International Nuclear Information System (INIS)

    Parry, R.; Claborn, G.; Haas, A.; Landis, R.; Page, W.; Smith, J.

    1993-01-01

    The escalating use of cryogens at national laboratories in general and accelerators in particular, along with the increased emphasis placed on personnel safety, mandates the development and installation of oxygen monitoring systems to ensure personnel safety in the event of a cryogenic leak. Numerous vendors offer oxygen deficiency monitoring systems but fail to provide important features and/or flexibility. This paper describes a unique oxygen monitoring system developed for the Magnet Test Laboratory (MTL) at the Superconducting Super Collider Laboratory (SSCL). Features include: high reliability, oxygen cell redundancy, sensor longevity, simple calibration, multiple trip points, offending sensor audio and visual indication, global alarms for building evacuation, local and remote analog readout, event and analog data logging, EMAIL event notification, phone line voice status system, and multi-drop communications network capability for reduced cable runs. Of particular importance is the distributed topology of the system which allows it to operate in a stand-alone configuration or to communicate with a host computer. This flexibility makes it ideal for small applications such as a small room containing a cryogenic dewar, as well as larger systems which monitor many offices and labs in several buildings.

  19. A high reliability oxygen deficiency monitoring system

    International Nuclear Information System (INIS)

    Parry, R.; Claborn, G.; Haas, A.; Landis, R.; Page, W.; Smith, J.

    1993-05-01

    The escalating use of cryogens at national laboratories in general and accelerators in particular, along with the increased emphasis placed on personnel safety, mandates the development and installation of oxygen monitoring systems to ensure personnel safety in the event of a cryogenic leak. Numerous vendors offer oxygen deficiency monitoring systems but fail to provide important features and/or flexibility. This paper describes a unique oxygen monitoring system developed for the Magnet Test Laboratory (MTL) at the Superconducting Super Collider Laboratory (SSCL). Features include: high reliability, oxygen cell redundancy, sensor longevity, simple calibration, multiple trip points, offending sensor audio and visual indication, global alarms for building evacuation, local and remote analog readout, event and analog data logging, EMAIL event notification, phone line voice status system, and multi-drop communications network capability for reduced cable runs. Of particular importance is the distributed topology of the system which allows it to operate in a stand-alone configuration or to communicate with a host computer. This flexibility makes it ideal for small applications such as a small room containing a cryogenic dewar, as well as larger systems which monitor many offices and labs in several buildings.

  20. High power klystrons for efficient reliable high power amplifiers

    Science.gov (United States)

    Levin, M.

    1980-11-01

    This report covers the design of reliable high efficiency, high power klystrons which may be used in both existing and proposed troposcatter radio systems. High Power (10 kW) klystron designs were generated in C-band (4.4 GHz to 5.0 GHz), S-band (2.5 GHz to 2.7 GHz), and L-band or UHF frequencies (755 MHz to 985 MHz). The tubes were designed for power supply compatibility and use with a vapor/liquid phase heat exchanger. Four (4) S-band tubes were developed in the course of this program along with two (2) matching focusing solenoids and two (2) heat exchangers. These tubes use five (5) tuners with counters which are attached to the focusing solenoids. A reliability mathematical model of the tube and heat exchanger system was also generated.
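The abstract mentions a reliability mathematical model for the tube and heat exchanger system. A minimal sketch of such a series-reliability model, assuming constant (exponential) failure rates, is shown below; the rate values are purely illustrative and not taken from the report:

```python
import math

def series_reliability(failure_rates_per_hour, hours):
    """Survival probability of components in series, each with a constant failure rate."""
    lam = sum(failure_rates_per_hour)   # combined failures/hour for the whole chain
    return math.exp(-lam * hours)

def mtbf(failure_rates_per_hour):
    """Mean time between failures for the series system, in hours."""
    return 1.0 / sum(failure_rates_per_hour)

# Hypothetical rates for a klystron and its heat exchanger (illustrative only)
rates = [1e-5, 2e-6]                    # failures per hour
print(series_reliability(rates, 1000))  # survival probability over 1000 h
print(mtbf(rates))                      # hours
```

In a series model the weakest component dominates: the system MTBF is always shorter than any individual component's MTBF.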

  1. A high-speed DAQ framework for future high-level trigger and event building clusters

    International Nuclear Information System (INIS)

    Caselle, M.; Perez, L.E. Ardila; Balzer, M.; Dritschler, T.; Kopmann, A.; Mohr, H.; Rota, L.; Vogelgesang, M.; Weber, M.

    2017-01-01

    Modern data acquisition and trigger systems require a throughput of several GB/s and latencies of the order of microseconds. To satisfy such requirements, a heterogeneous readout system based on FPGA readout cards and GPU-based computing nodes coupled by InfiniBand has been developed. The incoming data from the back-end electronics are delivered directly into the internal memory of GPUs through a dedicated peer-to-peer PCIe communication. High performance DMA engines have been developed for direct communication between FPGAs and GPUs using 'DirectGMA' (AMD) and 'GPUDirect' (NVIDIA) technologies. The proposed infrastructure is a candidate for future generations of event building clusters, high-level trigger filter farms and low-level trigger systems. In this paper the heterogeneous FPGA-GPU architecture will be presented and its performance discussed.

  2. Sub-nanosecond jitter, repetitive impulse generators for high reliability applications

    International Nuclear Information System (INIS)

    Krausse, G.J.; Sarjeant, W.J.

    1981-01-01

    Low jitter, high reliability impulse generator development has recently become of ever-increasing importance for nuclear physics and weapons applications. The research and development of very low jitter (< 30 ps), multikilovolt generators for high reliability, minimum maintenance trigger applications utilizing a new class of high-pressure tetrode thyratrons now commercially available are described. The overall system design philosophy is described, followed by a detailed analysis of the subsystem component elements. A multi-variable experimental analysis of this new tetrode thyratron was undertaken, in a low-inductance configuration, as a function of externally available parameters. For specific thyratron trigger conditions, rise times of 18 ns into 6.0-Ω loads were achieved at jitters as low as 24 ps. Using this database, an integrated trigger generator system with a solid-state front-end is described in some detail. The generator was developed to serve as the Master Trigger Generator for a large neutrino detector installation at the Los Alamos Meson Physics Facility.
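The quoted jitter figure is a one-sigma spread of repeated trigger-to-output delay measurements. A minimal sketch of that calculation, using hypothetical delay samples (the values below are invented for illustration):

```python
import statistics

def one_sigma_jitter(delays_ps):
    """One-sigma jitter: sample standard deviation of trigger-to-output delays."""
    return statistics.stdev(delays_ps)

# Hypothetical trigger-to-output delays in picoseconds
delays = [1000.0, 1024.0, 988.0, 1012.0, 996.0]
print(round(one_sigma_jitter(delays), 1))   # spread of the delay distribution, ps
```

Note that the mean delay (the fixed propagation time) is irrelevant to jitter; only the shot-to-shot spread matters.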

  3. Assessment of microelectronics packaging for high temperature, high reliability applications

    Energy Technology Data Exchange (ETDEWEB)

    Uribe, F.

    1997-04-01

    This report details characterization and development activities in electronic packaging for high temperature applications. This project was conducted through a Department of Energy sponsored Cooperative Research and Development Agreement between Sandia National Laboratories and General Motors. Even though the target application of this collaborative effort is an automotive electronic throttle control system which would be located in the engine compartment, results of this work are directly applicable to Sandia's national security mission. The component count associated with the throttle control dictates the use of high density packaging not offered by conventional surface mount. An enabling packaging technology was selected and thermal models defined which characterized the thermal and mechanical response of the throttle control module. These models were used to optimize thick film multichip module design, characterize the thermal signatures of the electronic components inside the module, and to determine the temperature field and resulting thermal stresses under conditions that may be encountered during the operational life of the throttle control module. Because the need to use unpackaged devices limits the level of testing that can be performed either at the wafer level or as individual dice, an approach to assure a high level of reliability of the unpackaged components was formulated. Component assembly and interconnect technologies were also evaluated and characterized for high temperature applications. Electrical, mechanical and chemical characterizations of enabling die and component attach technologies were performed. Additionally, studies were conducted to assess the performance and reliability of gold and aluminum wire bonding to thick film conductor inks. Kinetic models were developed and validated to estimate wire bond reliability.

  4. High Reliability Cryogenic Piezoelectric Valve Actuator, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Cryogenic fluid valves are subject to harsh exposure and actuators to drive these valves require robust performance and high reliability. DSM's piezoelectric...

  5. Reliability and Failure in NASA Missions: Blunders, Normal Accidents, High Reliability, Bad Luck

    Science.gov (United States)

    Jones, Harry W.

    2015-01-01

    NASA emphasizes crew safety and system reliability but several unfortunate failures have occurred. The Apollo 1 fire was mistakenly unanticipated. After that tragedy, the Apollo program gave much more attention to safety. The Challenger accident revealed that NASA had neglected safety and that management underestimated the high risk of the shuttle. Probabilistic Risk Assessment was adopted to provide more accurate failure probabilities for the shuttle and other missions. NASA's "faster, better, cheaper" initiative and government procurement reform led to deliberately dismantling traditional reliability engineering. The Columbia tragedy and Mars mission failures followed. Failures can be attributed to blunders, normal accidents, or bad luck. Achieving high reliability is difficult but possible.

  6. A video event trigger for high frame rate, high resolution video technology

    Science.gov (United States)

    Williams, Glenn L.

    1991-12-01

    When video replaces film the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
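The pretrigger/post-trigger storage technique described above can be sketched with a circular buffer: old frames are silently overwritten until the trigger fires, after which a fixed number of post-trigger frames is also retained. This is a minimal illustrative sketch, not the hardware design from the paper:

```python
from collections import deque

class EventRecorder:
    """Keep the last `pre` frames; once triggered, keep `post` further frames."""
    def __init__(self, pre, post):
        self.pre_buffer = deque(maxlen=pre)   # circular pretrigger store
        self.post = post
        self.captured = None

    def feed(self, frame, triggered):
        if self.captured is not None:
            # Post-trigger phase: append until the post-trigger budget runs out
            if self.post > 0:
                self.captured.append(frame)
                self.post -= 1
        elif triggered:
            # Trigger fires: freeze pretrigger history plus the triggering frame
            self.captured = list(self.pre_buffer) + [frame]
        else:
            # Static scene: overwrite oldest frame, store nothing permanently
            self.pre_buffer.append(frame)

rec = EventRecorder(pre=3, post=2)
for i in range(10):
    rec.feed(i, triggered=(i == 6))
print(rec.captured)  # [3, 4, 5, 6, 7, 8]: three pretrigger frames, the event, two post-trigger
```

Only six of the ten frames are archived; the redundant "static scene" frames before the pretrigger window are discarded, which is exactly the storage saving the abstract describes.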

  7. Diagnostic Systems and Resources utilization of the ATLAS High Level Trigger

    CERN Document Server

    Sidoti, A; The ATLAS collaboration; Ospanov, R

    2010-01-01

    Since the LHC started colliding protons in December 2009, the ATLAS trigger has operated very successfully with a collision rate which has increased by several orders of magnitude. The trigger monitoring and data quality infrastructure was essential to this success. We describe the software tools used to monitor the trigger system performance and assess the overall quality of the trigger selection during collision running. ATLAS has broad physics goals which require a large number of different active triggers due to complex event topologies, requiring quite sophisticated software structures and concepts. The trigger of the ATLAS experiment is built as a three level system. The first level is realized in hardware while the high level triggers (HLT) are software based and run on large PC farms. The trigger reduces the design bunch crossing rate of 40 MHz to an average event rate of about 200 Hz for storage. Since the ATLAS detector is a general purpose detector, the trigger must be sensitive to a large numb...

  8. Quark fragmentation and trigger side momentum distributions in high-Psub(T) processes

    International Nuclear Information System (INIS)

    Antolin, J.; Azcoiti, V.; Bravo, J.R.; Alonso, J.L.; Cruz, A.; Ringland, G.A.

    1979-11-01

    It has been widely argued that the experimental evidence concerning the momentum accompanying high-p_T triggers is a grave problem for models which take the trigger hadron to be a quark fragment. It is claimed that the trigger hadron takes much too large a fraction (z_c) of the jet momentum for the trigger side jet to be a quark. The jet momentum is not directly measured, but deduced from the derivative of the momentum (p_x) accompanying the trigger with respect to the trigger transverse momentum p_T^t. This argument is shown to be unsafe. Using both an approximate analytic approach to illustrate the physics and subsequently a full numerical computation, it is proved that the deduction of the fractional momentum accompanying the trigger, 1/z_c - 1, from dp_x/dp_T^t is not correct. Further it is shown that models which do take the trigger to be a quark fragment are essentially in agreement with the data on trigger side momentum distributions. A surprising prediction of the present analysis is that p_x should be approximately constant for p_T^t >= 6 GeV/c. (author)
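The core of the argument is that the slope dp_x/dp_T^t only equals 1/z_c - 1 if z_c is independent of p_T^t. A toy numerical illustration of the discrepancy, assuming a hypothetical p_T-dependent trigger fraction (the functional form below is invented, not taken from the paper):

```python
# Toy illustration: if the trigger fraction z_c varies with p_T^t, the slope
# dp_x/dp_T^t no longer equals 1/z_c - 1 evaluated at that point.
def z_c(pt):
    """Hypothetical trigger fraction rising with p_T^t (illustrative only)."""
    return 0.5 + 0.03 * pt

def p_x(pt):
    """Accompanying momentum implied by the fraction: (1/z_c - 1) * p_T^t."""
    return (1.0 / z_c(pt) - 1.0) * pt

pt, h = 4.0, 1e-6
slope = (p_x(pt + h) - p_x(pt - h)) / (2 * h)   # actual derivative dp_x/dp_T^t
naive = 1.0 / z_c(pt) - 1.0                     # value the flawed deduction assumes
print(round(slope, 3), round(naive, 3))         # the two differ substantially
```

Because z_c grows with p_T^t, the derivative picks up an extra (negative) term from dz_c/dp_T^t, so reading 1/z_c - 1 off the slope systematically misestimates the fraction, which is the paper's point.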

  9. A High Reliability Frequency Stabilized Semiconductor Laser Source, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Ultrastable, narrow linewidth, high reliability MOPA sources are needed for high performance LIDARs in NASA for, wind speed measurement, surface topography and earth...

  10. High Reliability Oscillators for Terahertz Systems, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — To develop reliable THz sources with high power and high DC-RF efficiency, Virginia Diodes, Inc. will develop a thorough understanding of the complex interactions...

  11. Seeking high reliability in primary care: Leadership, tools, and organization.

    Science.gov (United States)

    Weaver, Robert R

    2015-01-01

    Leaders in health care increasingly recognize that improving health care quality and safety requires developing an organizational culture that fosters high reliability and continuous process improvement. For various reasons, a reliability-seeking culture is lacking in most health care settings. Developing a reliability-seeking culture requires leaders' sustained commitment to reliability principles using key mechanisms to embed those principles widely in the organization. The aim of this study was to examine how key mechanisms used by a primary care practice (PCP) might foster a reliability-seeking, system-oriented organizational culture. A case study approach was used to investigate the PCP's reliability culture. The study examined four cultural artifacts used to embed reliability-seeking principles across the organization: leadership statements, decision support tools, and two organizational processes. To decipher their effects on reliability, the study relied on observations of work patterns and the tools' use, interactions during morning huddles and process improvement meetings, interviews with clinical and office staff, and a "collective mindfulness" questionnaire. The five reliability principles framed the data analysis. Leadership statements articulated principles that oriented the PCP toward a reliability-seeking culture of care. Reliability principles became embedded in the everyday discourse and actions through the use of "problem knowledge coupler" decision support tools and daily "huddles." Practitioners and staff were encouraged to report unexpected events or close calls that arose and which often initiated a formal "process change" used to adjust routines and prevent adverse events from recurring. Activities that foster reliable patient care became part of the taken-for-granted routine at the PCP. The analysis illustrates the role leadership, tools, and organizational processes play in developing and embedding a reliability-seeking culture across an

  12. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  13. High-reliability health care: getting there from here.

    Science.gov (United States)

    Chassin, Mark R; Loeb, Jerod M

    2013-09-01

    Despite serious and widespread efforts to improve the quality of health care, many patients still suffer preventable harm every day. Hospitals find improvement difficult to sustain, and they suffer "project fatigue" because so many problems need attention. No hospitals or health systems have achieved consistent excellence throughout their institutions. High-reliability science is the study of organizations in industries like commercial aviation and nuclear power that operate under hazardous conditions while maintaining safety levels that are far better than those of health care. Adapting and applying the lessons of this science to health care offer the promise of enabling hospitals to reach levels of quality and safety that are comparable to those of the best high-reliability organizations. We combined the Joint Commission's knowledge of health care organizations with knowledge from the published literature and from experts in high-reliability industries and leading safety scholars outside health care. We developed a conceptual and practical framework for assessing hospitals' readiness for and progress toward high reliability. By iterative testing with hospital leaders, we refined the framework and, for each of its fourteen components, defined stages of maturity through which we believe hospitals must pass to reach high reliability. We discovered that the ways that high-reliability organizations generate and maintain high levels of safety cannot be directly applied to today's hospitals. We defined a series of incremental changes that hospitals should undertake to progress toward high reliability. These changes involve the leadership's commitment to achieving zero patient harm, a fully functional culture of safety throughout the organization, and the widespread deployment of highly effective process improvement tools. Hospitals can make substantial progress toward high reliability by undertaking several specific organizational change initiatives. Further research

  14. High-Reliability Health Care: Getting There from Here

    Science.gov (United States)

    Chassin, Mark R; Loeb, Jerod M

    2013-01-01

    Context Despite serious and widespread efforts to improve the quality of health care, many patients still suffer preventable harm every day. Hospitals find improvement difficult to sustain, and they suffer “project fatigue” because so many problems need attention. No hospitals or health systems have achieved consistent excellence throughout their institutions. High-reliability science is the study of organizations in industries like commercial aviation and nuclear power that operate under hazardous conditions while maintaining safety levels that are far better than those of health care. Adapting and applying the lessons of this science to health care offer the promise of enabling hospitals to reach levels of quality and safety that are comparable to those of the best high-reliability organizations. Methods We combined the Joint Commission's knowledge of health care organizations with knowledge from the published literature and from experts in high-reliability industries and leading safety scholars outside health care. We developed a conceptual and practical framework for assessing hospitals’ readiness for and progress toward high reliability. By iterative testing with hospital leaders, we refined the framework and, for each of its fourteen components, defined stages of maturity through which we believe hospitals must pass to reach high reliability. Findings We discovered that the ways that high-reliability organizations generate and maintain high levels of safety cannot be directly applied to today's hospitals. We defined a series of incremental changes that hospitals should undertake to progress toward high reliability. These changes involve the leadership's commitment to achieving zero patient harm, a fully functional culture of safety throughout the organization, and the widespread deployment of highly effective process improvement tools. Conclusions Hospitals can make substantial progress toward high reliability by undertaking several specific

  15. Frameworks to monitor and predict resource usage in the ATLAS High Level Trigger

    CERN Document Server

    Martin, Tim; The ATLAS collaboration

    2016-01-01

    The ATLAS High Level Trigger Farm consists of around 30,000 CPU cores which filter events at up to 100 kHz input rate. A costing framework is built into the high level trigger; this enables detailed monitoring of the system and allows for data-driven predictions to be made utilising specialist datasets. This talk will present an overview of how ATLAS collects in-situ monitoring data on both CPU usage and dataflow over the data-acquisition network during the trigger execution, and how these data are processed to yield both low level monitoring of individual selection-algorithms and high level data on the overall performance of the farm. For development and prediction purposes, ATLAS uses a special `Enhanced Bias' event selection. This mechanism will be explained along with how it is used to profile expected resource usage and output event-rate of new physics selections, before they are executed on the actual high level trigger farm.

  16. Frameworks to monitor and predict rates and resource usage in the ATLAS High Level Trigger

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219969; The ATLAS collaboration

    2017-01-01

    The ATLAS High Level Trigger Farm consists of around 40,000 CPU cores which filter events at an input rate of up to 100 kHz. A costing framework is built into the high level trigger thus enabling detailed monitoring of the system and allowing for data-driven predictions to be made utilising specialist datasets. An overview is presented of how ATLAS collects in-situ monitoring data on CPU usage during the trigger execution, and how these data are processed to yield both low level monitoring of individual selection-algorithms and high level data on the overall performance of the farm. For development and prediction purposes, ATLAS uses a special ‘Enhanced Bias’ event selection. This mechanism is explained along with how it is used to profile expected resource usage and output event rate of new physics selections, before they are executed on the actual high level trigger farm.
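Rate prediction from an enhanced-bias sample works by attaching to each recorded event a weight that undoes the prescales applied at recording time; the predicted rate of a new selection is then the weighted count of passing events divided by the livetime. This is a generic sketch of that idea with invented numbers, not the actual ATLAS framework:

```python
def predicted_rate(events, passes, livetime_s):
    """Predicted output rate of a candidate selection from weighted events.

    events: list of (weight, event_quantity) pairs; each weight undoes the
    prescales applied when the enhanced-bias sample was recorded.
    passes: predicate evaluated on the stored event quantity.
    """
    return sum(w for w, ev in events if passes(ev)) / livetime_s

# Hypothetical sample: (weight, reconstructed quantity in GeV), 60 s livetime
sample = [(50.0, 80.0), (200.0, 30.0), (10.0, 150.0), (120.0, 55.0)]
rate = predicted_rate(sample, lambda q: q > 50.0, livetime_s=60.0)
print(rate)  # (50 + 10 + 120) / 60 = 3.0 Hz
```

Because each event carries a different weight, a handful of heavily prescaled events can dominate the prediction, which is why the special enhanced-bias selection is needed rather than an ordinary physics stream.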

  17. Tracking and flavour tagging selection in the ATLAS High Level Trigger

    CERN Document Server

    Calvetti, Milene; The ATLAS collaboration

    2017-01-01

    In high-energy physics experiments, track based selection in the online environment is crucial for the detection of physics processes of interest for further study. This is of particular importance at the Large Hadron Collider (LHC), where the increasingly harsh collision environment is challenging participating experiments to improve the performance of their online selection. Principal among these challenges is the increasing number of interactions per bunch crossing, known as pileup. In the ATLAS experiment the challenge has been addressed with multiple strategies. Firstly, individual trigger groups focusing on specific physics objects have implemented novel algorithms which make use of the detailed tracking and vertexing performed within the trigger to improve rejection without losing efficiency. Secondly, since 2015 all trigger areas have also benefited from a new high performance inner detector software tracking system implemented in the High Level Trigger. Finally, performance will be further enhanced i...

  18. A Track Reconstructing Low-latency Trigger Processor for High-energy Physics

    CERN Document Server

    AUTHOR|(CDS)2067518

    2009-01-01

    The detection and analysis of the large number of particles emerging from high-energy collisions between atomic nuclei is a major challenge in experimental heavy-ion physics. Efficient trigger systems help to focus the analysis on relevant events. A primary objective of the Transition Radiation Detector of the ALICE experiment at the LHC is to trigger on high-momentum electrons. In this thesis, a trigger processor is presented that employs massive parallelism to perform the required online event reconstruction within 2 µs to contribute to the Level-1 trigger decision. Its three-stage hierarchical architecture comprises 109 nodes based on FPGA technology. Ninety processing nodes receive data from the detector front-end at an aggregate net bandwidth of 2.16 Tbps via 1080 optical links. Using specifically developed components and interconnections, the system combines high bandwidth with minimum latency. The employed tracking algorithm three-dimensionally reassembles the track segments found in the detector's dr...
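The bandwidth figures quoted above can be cross-checked with simple bookkeeping; this sketch only rearranges the numbers given in the abstract:

```python
# Aggregate-bandwidth bookkeeping for the figures quoted in the abstract
links = 1080
aggregate_bps = 2.16e12            # 2.16 Tbps net input from the detector front-end
per_link_bps = aggregate_bps / links
print(per_link_bps / 1e9)          # 2.0 Gbps per optical link

nodes = 90                         # processing nodes receiving front-end data
print(links // nodes)              # 12 optical links per processing node
```

The per-link figure of 2 Gbps is consistent with the serial-link technology of the period, and the 12 links per node show why low-latency aggregation inside each FPGA node matters.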

  19. Ultra Secure High Reliability Wireless Radiation Monitor

    International Nuclear Information System (INIS)

    Cordaro, J.; Shull, D.; Farrar, M.; Reeves, G.

    2011-01-01

    Radiation monitoring in nuclear facilities is essential to safe operation of the equipment as well as protecting personnel. Specifically, typical air monitoring of radioactive gases or particulates involves complex systems of valves, pumps, piping and electronics. The challenge is to measure a representative sample in areas that are radioactively contaminated. Running cables and piping to these locations is very expensive due to the containment requirements. Penetration into and out of an airborne or containment area is complex and costly. The process rooms are built with thick rebar-enforced concrete walls with glove box containment chambers inside. Figure 1 shows high temperature, radiation resistant cabling entering the top of a typical glove box. In some cases, the entire processing area must be contained in a 'hot cell' where the only access into the chamber is via manipulators. An example is shown in Figure 2. A short range wireless network provides an ideal communication link for transmitting the data from the radiation sensor to a 'clean area', or area absent of any radiation fields or radioactive contamination. Radiation monitoring systems that protect personnel and equipment must meet stringent codes and standards due to the consequences of failure. At first glance a wired system would seem more desirable. Concerns with wireless communication include latency, jamming, spoofing, man-in-the-middle attacks, and hacking. The Department of Energy's Savannah River National Laboratory (SRNL) has developed a prototype wireless radiation air monitoring system that addresses many of the concerns with wireless and allows quick deployment in radiation and contamination areas. It is stand alone and only requires a standard 120 VAC, 60 Hz power source. It is designed to be mounted or portable. The wireless link uses a National Security Agency (NSA) Suite B compliant wireless network from Fortress Technologies that is considered robust enough to be used for classified data

  20. ULTRA SECURE HIGH RELIABILITY WIRELESS RADIATION MONITOR

    Energy Technology Data Exchange (ETDEWEB)

    Cordaro, J.; Shull, D.; Farrar, M.; Reeves, G.

    2011-08-03

    Radiation monitoring in nuclear facilities is essential to safe operation of the equipment as well as protecting personnel. Specifically, typical air monitoring of radioactive gases or particulates involves complex systems of valves, pumps, piping and electronics. The challenge is to measure a representative sample in areas that are radioactively contaminated. Running cables and piping to these locations is very expensive due to the containment requirements. Penetration into and out of an airborne or containment area is complex and costly. The process rooms are built with thick rebar-enforced concrete walls with glove box containment chambers inside. Figure 1 shows high temperature, radiation resistant cabling entering the top of a typical glove box. In some cases, the entire processing area must be contained in a 'hot cell' where the only access into the chamber is via manipulators. An example is shown in Figure 2. A short range wireless network provides an ideal communication link for transmitting the data from the radiation sensor to a 'clean area', or area absent of any radiation fields or radioactive contamination. Radiation monitoring systems that protect personnel and equipment must meet stringent codes and standards due to the consequences of failure. At first glance a wired system would seem more desirable. Concerns with wireless communication include latency, jamming, spoofing, man-in-the-middle attacks, and hacking. The Department of Energy's Savannah River National Laboratory (SRNL) has developed a prototype wireless radiation air monitoring system that addresses many of the concerns with wireless and allows quick deployment in radiation and contamination areas. It is stand alone and only requires a standard 120 VAC, 60 Hz power source. It is designed to be mounted or portable. The wireless link uses a National Security Agency (NSA) Suite B compliant wireless network from Fortress Technologies that is considered robust enough to be

  1. Development of high velocity gas gun with a new trigger system-numerical analysis

    Science.gov (United States)

    Husin, Z.; Homma, H.

    2018-02-01

    In the development of high performance armor vests, we need to carry out well controlled experiments using bullet speeds of more than 900 m/sec. After reviewing trigger systems used for high velocity gas guns, this research intends to develop a new trigger system which can realize precise and reproducible impact tests at impact velocities of more than 900 m/sec. The new trigger system developed here is called a projectile trap. A projectile trap is placed between a reservoir and a barrel, and serves two functions: sealing disk and trigger. Polyamide-imide is selected for the trap material, and the dimensions of the projectile trap are determined by numerical analysis for several levels of launching pressure to change the projectile velocity. The numerical analysis results show that the projectile trap designed here can operate reasonably and that the stresses caused during the launching operation are less than the material strength. This means a projectile trap can be reused for the next shooting.

  2. Multi-threaded algorithms for GPGPU in the ATLAS High Level Trigger

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00212700; The ATLAS collaboration

    2017-01-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPU. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼ 250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significa...
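The quoted figures imply the farm size directly: by Little's law, the number of events being processed concurrently (and hence the minimum number of cores, at one event per core) is the arrival rate times the mean processing time. This back-of-envelope sketch just rearranges the numbers from the abstract:

```python
# Little's law sketch: events in flight = arrival rate x mean service time
input_rate_hz = 100e3     # Level-1 acceptance rate into the High Level Trigger
mean_time_s = 0.250       # average per-event processing time quoted above
cores_needed = input_rate_hz * mean_time_s
print(int(cores_needed))  # 25000 cores, ignoring I/O stalls and scheduling overhead
```

The result of ~25,000 cores matches the farm sizes of around 30,000 to 40,000 cores quoted in the costing-framework records above, with the surplus covering overheads; it also shows why offloading reconstruction to GPGPUs is attractive.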

  3. Simulation of the High Performance Time to Digital Converter for the ATLAS Muon Spectrometer trigger upgrade

    International Nuclear Information System (INIS)

    Meng, X.T.; Levin, D.S.; Chapman, J.W.; Zhou, B.

    2016-01-01

    The ATLAS Muon Spectrometer endcap thin-Resistive Plate Chamber trigger project complements the New Small Wheel endcap Phase-1 upgrade for higher luminosity LHC operation. These new trigger chambers, located in a high rate region of ATLAS, will improve overall trigger acceptance and reduce the fake muon trigger incidence. These chambers must generate a low level muon trigger to be delivered to a remote high level processor within a stringent latency requirement of 43 bunch crossings (1075 ns). To help meet this requirement the High Performance Time to Digital Converter (HPTDC), a multi-channel ASIC designed by CERN Microelectronics group, has been proposed for the digitization of the fast front end detector signals. This paper investigates the HPTDC performance in the context of the overall muon trigger latency, employing detailed behavioral Verilog simulations in which the latency in triggerless mode is measured for a range of configurations and under realistic hit rate conditions. The simulation results show that various HPTDC operational configurations, including leading edge and pair measurement modes, can provide high efficiency (>98%) to capture and digitize hits within a time interval satisfying the Phase-1 latency tolerance.
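
    The latency question studied here, namely what fraction of hits a TDC can capture and digitize within the 1075 ns budget, can be illustrated with a toy queueing model. The sketch below is a rough Python illustration, not the paper's Verilog simulation; the hit rates, the fixed readout time, and the single-server queue are all assumptions made for the example.

```python
import random

def simulate_capture_fraction(hit_rate_hz, readout_ns, budget_ns=1075,
                              n_hits=100_000, seed=1):
    """Fraction of hits whose digitization finishes within budget_ns.

    Hits arrive as a Poisson stream; each waits for earlier hits to
    drain (single-server queue), then takes a fixed readout time.
    All parameters are illustrative, not HPTDC register settings.
    """
    rng = random.Random(seed)
    mean_gap_ns = 1e9 / hit_rate_hz
    t = 0.0            # arrival time of the current hit (ns)
    server_free = 0.0  # time at which the digitizer becomes idle (ns)
    captured = 0
    for _ in range(n_hits):
        t += rng.expovariate(1.0 / mean_gap_ns)
        finish = max(t, server_free) + readout_ns
        server_free = finish
        if finish - t <= budget_ns:
            captured += 1
    return captured / n_hits
```

    At modest hit rates essentially every hit fits the budget; pushing the rate toward saturation of the digitizer drives the captured fraction down, which is the trade-off the paper's configuration scan explores.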

  4. High reliability low jitter 80 kV pulse generator

    Directory of Open Access Journals (Sweden)

    M. E. Savage

    2009-08-01

    Full Text Available Switching can be considered to be the essence of pulsed power. Time-accurate switch/trigger systems with low inductance are useful in many applications. This article describes a unique switch geometry coupled with a low-inductance capacitive energy store. The system provides a fast-rising high voltage pulse into a low impedance load. It can be challenging to generate high voltage (more than 50 kilovolts) into impedances less than 10 Ω, from a low voltage control signal with a fast rise time and high temporal accuracy. The required power amplification is large, and is usually accomplished with multiple stages. The multiple stages can adversely affect the temporal accuracy and the reliability of the system. In the present application, a highly reliable and low jitter trigger generator was required for the Z pulsed-power facility [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, and J. R. Woodworth, 2007 IEEE Pulsed Power Conference, Albuquerque, NM (IEEE, Piscataway, NJ, 2007), p. 979]. The large investment in each Z experiment demands low prefire probability and low jitter simultaneously. The system described here is based on a 100 kV DC-charged high-pressure spark gap, triggered with an ultraviolet laser. The system uses a single optical path for simultaneously triggering two parallel switches, allowing lower inductance and electrode erosion with a simple optical system. Performance of the system includes 6 ns output rise time into 5.6 Ω, 550 ps one-sigma jitter measured from the 5 V trigger to the high voltage output, and misfire probability less than 10^{-4}. The design of the system and some key measurements will be shown in the paper. We will discuss the
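
    Two figures of merit quoted above, the one-sigma jitter and a misfire probability below 10^{-4}, can be estimated from shot data. The helpers below are a minimal sketch with hypothetical names, not code from the paper; the zero-misfire bound is the standard "rule of three", which implies roughly 30,000 misfire-free shots are needed to defend p < 10^-4 at 95% confidence.

```python
import statistics

def one_sigma_jitter(delays_ns):
    """Sample standard deviation of trigger-to-output delays (ns)."""
    return statistics.stdev(delays_ns)

def misfire_upper_bound(n_trials, n_misfires=0):
    """Approximate 95% upper bound on the misfire probability.

    With zero observed misfires this is the classic 'rule of three',
    3/n; otherwise a crude point-estimate-plus-3-sigma bound is used.
    """
    if n_misfires == 0:
        return 3.0 / n_trials
    p = n_misfires / n_trials
    return p + 3.0 * (p * (1.0 - p) / n_trials) ** 0.5
```

    For example, misfire_upper_bound(30_000) evaluates to 1e-4: about thirty thousand clean shots are needed before a sub-10^-4 misfire claim becomes statistically defensible.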

  5. A new kind high-reliability digital reactivity meter

    International Nuclear Information System (INIS)

    Shen Feng; Jiang Zongbing

    2001-01-01

    The paper introduces a new high-reliability Digital Reactivity Meter (DRM) developed by the DRM development group in the design department of the Nuclear Power Institute of China. The meter has two independent measurement channels, which can be configured either in a master-slave structure or to work independently. This structure ensures that the meter can continue its online measurement task under a single failure. It offers a solution to the conflict between a nuclear power station's stringent demands on DRM reliability and the instability of commercial computer software platforms. The instrument achieves both sophistication and reliability while covering many kinds of complex data-processing and display functions
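
    The master-slave channel structure described above can be sketched as a simple failover wrapper. This is an illustrative toy, not the meter's actual software; the channel interface (a callable returning a value and a status flag) is an assumption for the example.

```python
class DualChannelMeter:
    """Toy master-slave pairing of two measurement channels.

    Each channel is a callable returning (value, ok). If the master
    reports a fault, the slave takes over, so the online measurement
    survives any single-channel failure.
    """

    def __init__(self, master, slave):
        self.master = master
        self.slave = slave

    def read(self):
        for channel in (self.master, self.slave):
            value, ok = channel()
            if ok:
                return value
        raise RuntimeError("both measurement channels failed")
```

    Running the two channels independently instead amounts to reading both and cross-checking, which is the other configuration the abstract mentions.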

  6. Analysis and realization of a high resolution trigger for DM2 experiment

    International Nuclear Information System (INIS)

    Bertrand, J.L.

    1984-01-01

    The construction of a high resolution trigger has been carried out from theoretical design to realization. Here the term trigger denotes an almost real-time system for track filtering in particle detection. Curved tracks are detected (in a magnetic field) in a detector with revolution symmetry. The concept of a ''hybrid'' trigger, with features intermediate between those of the so-called ''CELLO R0'' and ''MARK II'' types, is introduced; it allows useful flexibility in optimizing the different features. Besides a specific structure, some hardware and software tools have been designed for development and tests. The ''TRIGGER LENT'' is presently in operation in the DM2 experiment [fr

  7. L1Track: A fast Level 1 track trigger for the ATLAS high luminosity upgrade

    International Nuclear Information System (INIS)

    Cerri, Alessandro

    2016-01-01

    With the planned high-luminosity upgrade of the LHC (HL-LHC), the ATLAS detector will see its collision rate increase by approximately a factor of 5 with respect to the current LHC operation. The earliest hardware-based ATLAS trigger stage (“Level 1”) will have to provide a higher rejection factor in a more difficult environment: a new improved Level 1 trigger architecture is under study, which includes the possibility of extracting with low latency and high accuracy tracking information in time for the decision taking process. In this context, the feasibility of potential approaches aimed at providing low-latency high-quality tracking at Level 1 is discussed. - Highlights: • HL-LHC requires highly performing event selection. • ATLAS is studying the implementation of tracking at the very first trigger level. • Low latency and high quality seem to be achievable with dedicated hardware and adequate detector readout architecture.

  8. High frame rate retrospectively triggered Cine MRI for assessment of murine diastolic function.

    Science.gov (United States)

    Coolen, Bram F; Abdurrachim, Desiree; Motaal, Abdallah G; Nicolay, Klaas; Prompers, Jeanine J; Strijkers, Gustav J

    2013-03-01

    To assess left ventricular (LV) diastolic function in mice with Cine MRI, a high frame rate (>60 frames per cardiac cycle) is required. For conventional electrocardiography-triggered Cine MRI, the frame rate is inversely proportional to the pulse repetition time (TR). However, TR cannot be lowered at will to increase the frame rate because of gradient hardware, spatial resolution, and signal-to-noise limitations. To overcome these limitations associated with electrocardiography-triggered Cine MRI, in this paper, we introduce a retrospectively triggered Cine MRI protocol capable of producing high-resolution high frame rate Cine MRI of the mouse heart for addressing left ventricular diastolic function. Simulations were performed to investigate the influence of MRI sequence parameters and the k-space filling trajectory in relation to the desired number of frames per cardiac cycle. An optimized protocol was applied in vivo and compared with electrocardiography-triggered Cine for which a high frame rate could only be achieved by several interleaved acquisitions. Retrospective high frame rate Cine MRI proved superior to the interleaved electrocardiography-triggered protocols. High spatial-resolution Cine movies with frame rates up to 80 frames per cardiac cycle were obtained in 25 min. Analysis of left ventricular filling rate curves allowed accurate determination of early and late filling rates and revealed subtle impairments in left ventricular diastolic function of diabetic mice in comparison with nondiabetic mice. Copyright © 2012 Wiley Periodicals, Inc.
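
    The core of retrospective triggering is assigning each acquired k-space line to a cardiac phase after the fact, from its position within the enclosing R-R interval. The sketch below is a minimal illustration under assumed inputs (timestamps and R-peak times in arbitrary units); a real reconstruction also deals with navigator signals, respiratory gating, and k-space regridding, none of which is shown.

```python
import bisect

def bin_to_cardiac_phase(acq_times, r_peaks, n_frames):
    """Map each acquisition time to one of n_frames cardiac phase bins.

    The phase is the fractional position of the timestamp inside the
    enclosing R-R interval; acquisitions outside a complete interval
    are marked None (and would be discarded).
    """
    frames = []
    for t in acq_times:
        i = bisect.bisect_right(r_peaks, t) - 1
        if i < 0 or i + 1 >= len(r_peaks):
            frames.append(None)
            continue
        phase = (t - r_peaks[i]) / (r_peaks[i + 1] - r_peaks[i])
        frames.append(min(int(phase * n_frames), n_frames - 1))
    return frames
```

    Because binning happens after acquisition, the frame count is limited only by how finely the R-R interval is divided and how many lines land in each bin, not by TR, which is the abstract's central point.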

  9. The development of high-voltage repetitive low-jitter corona stabilized triggered switch

    Science.gov (United States)

    Geng, Jiuyuan; Yang, Jianhua; Cheng, Xinbing; Yang, Xiao; Chen, Rong

    2018-04-01

    The high-power switch plays an important part in a pulse power system. With the trend of pulse power technology toward modularization, miniaturization, and accurate control, higher requirements on the electrical triggering and jitter of the switch have been put forward. A high-power low-jitter corona-stabilized triggered switch (CSTS) is designed in this paper. This kind of CSTS is based on the corona stabilization mechanism, and it can be used as the main switch of an intense electron-beam accelerator (IEBA). Its main features are the use of an annular trigger electrode instead of a traditional needle-like trigger electrode, main and side trigger rings to fix the discharging channels, and an SF6/N2 gas mixture as the operating gas. In this paper, the strength of the local field enhancement was varied through the trigger electrode protrusion length Dp, and the resulting differences in self-breakdown voltage and its stability, delay time jitter, trigger requirements, and operating range of the switch were compared. The effect of different SF6/N2 mixture ratios on switch performance was then explored. The experimental results show that with 15% SF6 at a pressure of 0.2 MPa, the hold-off voltage of the switch is 551 kV, the operating range is 46.4%-93.5% of the self-breakdown voltage, the jitter is 0.57 ns, and the minimum trigger voltage requirement is 55.8% of the peak. At present, the CSTS has been successfully applied to an IEBA for long-term operation.

  10. Concepts and design of the CMS high granularity calorimeter Level-1 trigger

    CERN Document Server

    Sauvan, Jean-Baptiste

    2016-01-01

    The CMS experiment has chosen a novel high granularity calorimeter for the forward region as part of its planned upgrade for the high luminosity LHC. The calorimeter will have a fine segmentation in both the transverse and longitudinal directions and will be the first such calorimeter specifically optimised for particle flow reconstruction to operate at a colliding beam experiment. The high granularity results in around six million readout channels in total and so presents a significant challenge in terms of data manipulation and processing for the trigger; the trigger data volumes will be an order of magnitude above those currently handled at CMS. In addition, the high luminosity will result in an average of 140 to 200 interactions per bunch crossing, giving a huge background rate in the forward region that needs to be efficiently reduced by the trigger algorithms. Efficient data reduction and reconstruction algorithms making use of the fine segmentation of the detector have been simulated and evaluated. The...

  11. Multi-Threaded Algorithms for GPGPU in the ATLAS High Level Trigger

    Science.gov (United States)

    Conde Muíño, P.; ATLAS Collaboration

    2017-10-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPU. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼ 250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant challenge that will increase significantly with future LHC upgrades. During the LHC data taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further to 7.5 times the design value in 2026 following LHC and ATLAS upgrades. Corresponding improvements in the speed of the reconstruction code will be needed to provide the required trigger selection power within affordable computing resources. Key factors determining the potential benefit of including GPGPU as part of the HLT processor farm are: the relative speed of the CPU and GPGPU algorithm implementations; the relative execution times of the GPGPU algorithms and serial code remaining on the CPU; the number of GPGPU required, and the relative financial cost of the selected GPGPU. We give a brief overview of the algorithms implemented and present new measurements that compare the performance of various configurations exploiting GPGPU cards.
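
    The quoted rates fix the arithmetic of the farm: the rejection factor is the ratio of input to output rate, and the affordable mean per-event time follows from the number of cores. A hedged sketch of that arithmetic; the 25,000-core figure in the usage below is a hypothetical farm size chosen to reproduce the ~250 ms budget, not a number from the text.

```python
def hlt_budget(input_rate_hz, output_rate_hz, n_cores):
    """Rejection factor and affordable mean per-event time (ms) for a
    trigger farm, given its input/output rates and core count."""
    rejection = input_rate_hz / output_rate_hz
    budget_ms = n_cores / input_rate_hz * 1000.0
    return rejection, budget_ms
```

    With the quoted 100 kHz input and 1.5 kHz output the rejection factor is about 67, and a hypothetical 25,000-core farm yields exactly the ~250 ms mean budget; faster (e.g. GPGPU-assisted) algorithms relax either the core count or the budget.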

  12. High-reliability computing for the smarter planet

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Graham, Paul; Manuzzato, Andrea; Dehon, Andre

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is necessary
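
    The claim that even extremely rare reliability problems escalate at very large scales is simple arithmetic over independent units. A minimal illustration follows; the per-unit failure probability and fleet size are made-up numbers, not figures from the paper.

```python
def system_failure_prob(p_unit, n_units):
    """Probability that at least one of n independent units fails:
    1 - (1 - p)**n."""
    return 1.0 - (1.0 - p_unit) ** n_units
```

    A one-in-a-million per-unit failure probability, deployed across a million units, already gives roughly a 63% chance that at least one unit fails, which is why planetary-scale automation demands very high per-unit reliability.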

  13. High-reliability computing for the smarter planet

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Graham, Paul [Los Alamos National Laboratory; Manuzzato, Andrea [UNIV OF PADOVA; Dehon, Andre [UNIV OF PENN; Carter, Nicholas [INTEL CORPORATION

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is

  14. Tracking and flavour tagging selection in the ATLAS High Level Trigger

    CERN Document Server

    Calvetti, Milene; The ATLAS collaboration

    2017-01-01

    In high-energy physics experiments, track based selection in the online environment is crucial for the efficient real time selection of the rare physics process of interest. This is of particular importance at the Large Hadron Collider (LHC), where the increasingly harsh collision environment is challenging the experiments to improve the performance of their online selection. Principal among these challenges is the increasing number of interactions per bunch crossing, known as pileup. In the ATLAS experiment the challenge has been addressed with multiple strategies. Firstly, specific trigger objects have been improved by building algorithms using detailed tracking and vertexing in specific detector regions to improve background rejection without losing signal efficiency. Secondly, since 2015 all trigger areas have benefited from a new high performance Inner Detector (ID) software tracking system implemented in the High Level Trigger. Finally, performance will be further enhanced in future by the installation...

  15. High School Dropout in Proximal Context: The Triggering Role of Stressful Life Events

    Science.gov (United States)

    Dupéré, Véronique; Dion, Eric; Leventhal, Tama; Archambault, Isabelle; Crosnoe, Robert; Janosz, Michel

    2018-01-01

    Adolescents who drop out of high school experience enduring negative consequences across many domains. Yet, the circumstances triggering their departure are poorly understood. This study examined the precipitating role of recent psychosocial stressors by comparing three groups of Canadian high school students (52% boys; M[subscript…

  16. A track reconstructing low-latency trigger processor for high-energy physics

    International Nuclear Information System (INIS)

    Cuveland, Jan de

    2009-01-01

    The detection and analysis of the large number of particles emerging from high-energy collisions between atomic nuclei is a major challenge in experimental heavy-ion physics. Efficient trigger systems help to focus the analysis on relevant events. A primary objective of the Transition Radiation Detector of the ALICE experiment at the LHC is to trigger on high-momentum electrons. In this thesis, a trigger processor is presented that employs massive parallelism to perform the required online event reconstruction within 2 μs to contribute to the Level-1 trigger decision. Its three-stage hierarchical architecture comprises 109 nodes based on FPGA technology. Ninety processing nodes receive data from the detector front-end at an aggregate net bandwidth of 2.16 Tbit/s via 1080 optical links. Using specifically developed components and interconnections, the system combines high bandwidth with minimum latency. The employed tracking algorithm three-dimensionally reassembles the track segments found in the detector's drift chambers based on explicit value comparisons, calculates the momentum of the originating particles from the course of the reconstructed tracks, and finally leads to a trigger decision. The architecture is capable of processing up to 20 000 track segments in less than 2 μs with high detection efficiency and reconstruction precision for high-momentum particles. As a result, this thesis shows how a trigger processor performing complex online track reconstruction within tight real-time requirements can be realized. The presented hardware has been built and is in continuous data taking operation in the ALICE experiment. (orig.)
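
    The momentum-from-curvature step the trigger processor performs reduces, for a uniform solenoidal field, to the textbook relation pT ≈ 0.3 B R (pT in GeV/c, B in tesla, R in metres). The sketch below shows that relation only; the real processor reconstructs R from track-segment geometry in the drift chambers, which is not modelled here.

```python
def pt_from_curvature(radius_m, b_field_t):
    """Transverse momentum (GeV/c) of a singly charged particle from
    its bending radius in a solenoidal field: pT = 0.3 * B * R."""
    return 0.3 * b_field_t * radius_m
```

    For example, in a 0.5 T field a 10 m bending radius corresponds to pT = 1.5 GeV/c; straighter (larger-radius) tracks mean higher momentum, which is what a high-momentum electron trigger selects on.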

  17. A track reconstructing low-latency trigger processor for high-energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Cuveland, Jan de

    2009-09-17

    The detection and analysis of the large number of particles emerging from high-energy collisions between atomic nuclei is a major challenge in experimental heavy-ion physics. Efficient trigger systems help to focus the analysis on relevant events. A primary objective of the Transition Radiation Detector of the ALICE experiment at the LHC is to trigger on high-momentum electrons. In this thesis, a trigger processor is presented that employs massive parallelism to perform the required online event reconstruction within 2 μs to contribute to the Level-1 trigger decision. Its three-stage hierarchical architecture comprises 109 nodes based on FPGA technology. Ninety processing nodes receive data from the detector front-end at an aggregate net bandwidth of 2.16 Tbit/s via 1080 optical links. Using specifically developed components and interconnections, the system combines high bandwidth with minimum latency. The employed tracking algorithm three-dimensionally reassembles the track segments found in the detector's drift chambers based on explicit value comparisons, calculates the momentum of the originating particles from the course of the reconstructed tracks, and finally leads to a trigger decision. The architecture is capable of processing up to 20 000 track segments in less than 2 μs with high detection efficiency and reconstruction precision for high-momentum particles. As a result, this thesis shows how a trigger processor performing complex online track reconstruction within tight real-time requirements can be realized. The presented hardware has been built and is in continuous data taking operation in the ALICE experiment. (orig.)

  18. Highly Efficient Moisture-Triggered Nanogenerator Based on Graphene Quantum Dots.

    Science.gov (United States)

    Huang, Yaxin; Cheng, Huhu; Shi, Gaoquan; Qu, Liangti

    2017-11-08

    A high-performance moisture-triggered nanogenerator is fabricated by using graphene quantum dots (GQDs) as the active material. GQDs are prepared by direct oxidation and etching of natural graphite powder; they have small sizes of 2-5 nm and abundant oxygen-containing functional groups. After treatment by electrochemical polarization, the GQD-based moisture-triggered nanogenerator can deliver a high voltage up to 0.27 V under a 70% relative humidity variation, and a power density of 1.86 mW cm⁻² with an optimized load resistor. The latter value is much higher than those of previously reported moisture-electric power generators. The GQD moisture-triggered nanogenerator is promising for self-powered electronics and miniature sensors.
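
    "Power density with an optimized load resistor" refers to maximum power transfer: a matched load draws P = V²/(4R) from a source with open-circuit voltage V and internal resistance R. The sketch below is a generic illustration of that relation, with made-up numbers rather than the paper's device parameters.

```python
def matched_load_power(v_open_circuit, r_internal):
    """Maximum power into an optimized (matched) load resistor:
    P = V**2 / (4R) for a source with open-circuit voltage V (volts)
    and internal resistance R (ohms); result in watts."""
    return v_open_circuit ** 2 / (4.0 * r_internal)
```

    Sweeping the load resistance and recording the delivered power is how the optimized-load power density figure is obtained experimentally; the matched-load formula gives its theoretical ceiling.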

  19. Towards a Level-1 tracking trigger for the ATLAS experiment at the High Luminosity LHC

    CERN Document Server

    Martin, T A D; The ATLAS collaboration

    2014-01-01

    At the high luminosity HL-LHC, upwards of 160 individual proton-proton interactions (pileup) are expected per bunch-crossing at luminosities of around $5\times10^{34}$ cm$^{-2}$s$^{-1}$. A proposal by the ATLAS collaboration to split the ATLAS first level trigger into two stages is briefly detailed. The use of fast track finding in the new first level trigger is explored as a method to provide the discrimination required to reduce the event rate to acceptable levels for the readout system while maintaining high efficiency in selecting the decay products of electroweak bosons at HL-LHC luminosities. It is shown that the available bandwidth in the proposed new strip tracker is sufficient for a region-of-interest-based track trigger given certain optimisations; further methods for improving upon the proposal are discussed.

  20. Electronics and triggering challenges for the CMS High Granularity Calorimeter for HL-LHC

    CERN Document Server

    Borg, Johan

    2017-01-01

    The High Granularity Calorimeter (HGCAL) is presently being designed to replace the CMS endcap calorimeters for the High Luminosity phase at LHC. It will feature six million silicon sensor channels and 52 longitudinal layers. The requirements for the front-end electronics include a 0.3 fC-10 pC dynamic range, low noise (2000 e-) and low power consumption (10 mW/channel). In addition, the HGCAL will perform 50 ps resolution time of arrival measurements to combat the effect of the large number of interactions taking place at each bunch crossing, and will transmit both triggered readout from on-detector buffer memory and reduced resolution real-time trigger data. We present the challenges related to the front-end electronics, data transmission and off-detector trigger preprocessing that must be overcome, and the design concepts currently being pursued.
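
    The quoted 0.3 fC-10 pC dynamic range spans a factor of roughly 33,000, which is part of what makes the front end challenging. A back-of-envelope sketch of the bit count needed to cover it at a resolution of one LSB equal to the minimum charge; the unit-LSB assumption is ours for illustration, not a specification from the text.

```python
import math

def dynamic_range_bits(min_charge_fc, max_charge_fc):
    """Bits needed to span a charge range with LSB equal to the
    minimum charge: ceil(log2(max / min))."""
    return math.ceil(math.log2(max_charge_fc / min_charge_fc))
```

    dynamic_range_bits(0.3, 10_000) gives 16, suggesting why calorimeter front ends at this scale tend toward multi-gain or time-over-threshold schemes rather than a single high-resolution ADC per channel.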

  1. Commissioning of the ATLAS High Level Trigger with single beam and cosmic rays

    Energy Technology Data Exchange (ETDEWEB)

    Di Mattia, A, E-mail: dimattia@mail.cern.c [Michigan State University - Department of Physics and Astronomy 3218 Biomedical Physical Science - East Lansing, MI 48824-2320 (United States)

    2010-04-01

    ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). The trigger system is responsible for making the online selection of interesting collision events. At the LHC design luminosity of 10^34 cm^-2 s^-1 it will need to achieve a rejection factor of the order of 10^-7 against random proton-proton interactions, while selecting with high efficiency events that are needed for physics analyses. After a first processing level using custom electronics based on FPGAs and ASICs, the trigger selection is made by software running on two processor farms, containing a total of around two thousand multi-core machines. This system is known as the High Level Trigger (HLT). To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection of background events. The recent LHC startup and short single-beam run provided a 'stress test' of the system and some initial calibration data. Following this period, ATLAS continued to collect cosmic-ray events for detector alignment and calibration purposes. After giving an overview of the trigger design and its innovative features, this paper focuses on the experience gained from operating the ATLAS trigger with single LHC beams and cosmic-rays.

  2. Commissioning of the ATLAS High Level Trigger with single beam and cosmic rays

    International Nuclear Information System (INIS)

    Di Mattia, A

    2010-01-01

    ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). The trigger system is responsible for making the online selection of interesting collision events. At the LHC design luminosity of 10^34 cm^-2 s^-1 it will need to achieve a rejection factor of the order of 10^-7 against random proton-proton interactions, while selecting with high efficiency events that are needed for physics analyses. After a first processing level using custom electronics based on FPGAs and ASICs, the trigger selection is made by software running on two processor farms, containing a total of around two thousand multi-core machines. This system is known as the High Level Trigger (HLT). To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection of background events. The recent LHC startup and short single-beam run provided a 'stress test' of the system and some initial calibration data. Following this period, ATLAS continued to collect cosmic-ray events for detector alignment and calibration purposes. After giving an overview of the trigger design and its innovative features, this paper focuses on the experience gained from operating the ATLAS trigger with single LHC beams and cosmic-rays.

  3. Achieving High Reliability with People, Processes, and Technology.

    Science.gov (United States)

    Saunders, Candice L; Brennan, John A

    2017-01-01

    High reliability as a corporate value in healthcare can be achieved by meeting the "Quadruple Aim" of improving population health, reducing per capita costs, enhancing the patient experience, and improving provider wellness. This drive starts with the board of trustees, CEO, and other senior leaders who ingrain high reliability throughout the organization. At WellStar Health System, the board developed an ambitious goal to become a top-decile health system in safety and quality metrics. To achieve this goal, WellStar has embarked on a journey toward high reliability and has committed to Lean management practices consistent with the Institute for Healthcare Improvement's definition of a high-reliability organization (HRO): one that is committed to the prevention of failure, early identification and mitigation of failure, and redesign of processes based on identifiable failures. In the end, a successful HRO can provide safe, effective, patient- and family-centered, timely, efficient, and equitable care through a convergence of people, processes, and technology.

  4. Efficiency criteria for high reliability measured system structures

    International Nuclear Information System (INIS)

    Sal'nikov, N.L.

    2012-01-01

    Structural redundancy procedures are usually used to develop high-reliability measurement systems. To estimate the efficiency of such structures, criteria for comparing different systems have been developed. It is thus possible to develop a more accurate system by inspecting the stochastic characteristics of the redundant system's data units in accordance with the developed criteria [ru
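
    One natural yardstick for comparing redundant measured-system structures is the k-out-of-n reliability of identical independent units. The function below is a generic textbook formula offered as illustration, not the specific criterion developed in the paper.

```python
import math

def k_of_n_reliability(r_unit, n, k):
    """Reliability of a k-out-of-n structure of identical independent
    units: the probability that at least k of the n units work."""
    return sum(math.comb(n, i) * r_unit**i * (1.0 - r_unit)**(n - i)
               for i in range(k, n + 1))
```

    For a unit reliability of 0.9, a 1-of-2 hot-spare structure reaches 0.99 while a 2-of-3 voting structure reaches 0.972; tabulating such numbers against cost is one axis along which competing redundant designs can be compared.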

  5. Leadership in organizations with high security and reliability requirements

    International Nuclear Information System (INIS)

    Gonzalez, F.

    2013-01-01

    Developing leadership skills in organizations is key to ensuring the sustainability of excellent results in industries with high safety and reliability requirements. In order to have a leadership development model specific to this type of organization, Tecnatom initiated an internal project in 2011 to find and adapt a competency model to these requirements.

  6. Direct unavailability computation of a maintained highly reliable system

    Czech Academy of Sciences Publication Activity Database

    Briš, R.; Byczanski, Petr

    2010-01-01

    Roč. 224, č. 3 (2010), s. 159-170 ISSN 1748-0078 Grant - others:GA Mšk(CZ) MSM6198910007 Institutional research plan: CEZ:AV0Z30860518 Keywords : high reliability * availability * directed acyclic graph Subject RIV: BA - General Mathematics http://journals.pepublishing.com/content/rtp3178l17923m46/

  7. Workshop on data acquisition and trigger system simulations for high energy physics

    International Nuclear Information System (INIS)

    1992-01-01

    This report discusses the following topics: DAQSIM: A data acquisition system simulation tool; Front end and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview -- R.; DAGAR -- A synthesis system; Proposed Silicon Compiler for Physics Applications; Timed -- LOTOS in a PROLOG Environment: an Algebraic language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies

  8. Workshop on data acquisition and trigger system simulations for high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-12-31

    This report discusses the following topics: DAQSIM: A data acquisition system simulation tool; Front end and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview -- R.; DAGAR -- A synthesis system; Proposed Silicon Compiler for Physics Applications; Timed -- LOTOS in a PROLOG Environment: an Algebraic language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies.

  9. Instrumentation of a Level-1 Track Trigger in the ATLAS detector for the High Luminosity LHC

    CERN Document Server

    Boisvert, V; The ATLAS collaboration

    2012-01-01

    One of the main challenges in particle physics experiments at hadron colliders is to build detector systems that can take advantage of the future luminosity increase that will take place during the next decade. More than 200 simultaneous collisions will be recorded in a single event, making the task of extracting the interesting physics signatures harder than ever before. Not all events can be recorded; hence a fast trigger system is required to select events that will be stored for further analysis. In the ATLAS experiment at the Large Hadron Collider (LHC), two different architectures for accommodating a level-1 track trigger are being investigated. The tracker has more readout channels than can be read out in time for the trigger decision. Both architectures aim for a data reduction of 10-100 in order to make readout of data possible in time for a level-1 trigger decision. In the first architecture the data reduction is achieved by reading out only parts of the detector seeded by a high rate pre-trigger ...

  10. An Overview of the ATLAS High Level Trigger Dataflow and Supervision

    CERN Document Server

    Wheeler, S; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, A; Boisvert, V; Bosman, M; Brandt, S; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Corso-Radu, A; Di Mattia, A; Díaz-Gómez, M; Dos Anjos, A; Drohan, J; Ellis, Nick; Elsing, M; Epp, B; Etienne, F; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kaczmarska, A; Karr, K M; Khomich, A; Konstantinidis, N P; Krasny, W; Li, W; Lowe, A; Luminari, L; Meessen, C; Mello, A G; Merino, G; Morettini, P; Moyse, E; Nairz, A; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Parodi, F; Pérez-Réale, V; Pinfold, J L; Pinto, P; Polesello, G; Qian, Z; Resconi, S; Rosati, S; Scannicchio, D A; Schiavi, C; Schörner-Sadenius, T; Segura, E; De Seixas, J M; Shears, T G; Sivoklokov, S Yu; Smizanska, M; Soluk, R A; Stanescu, C; Tapprogge, Stefan; Touchard, F; Vercesi, V; Watson, A; Wengler, T; Werner, P; Wickens, F J; Wiedenmann, W; Wielers, M; Zobernig, G; RT 2003 13th IEEE-NPSS Real Time Conference

    2004-01-01

    The ATLAS High Level Trigger (HLT) system provides software-based event selection after the initial LVL1 hardware trigger. It is composed of two stages, the LVL2 trigger and the Event Filter (EF). The LVL2 trigger performs event selection with optimized algorithms using selected data guided by Region of Interest pointers provided by the LVL1 trigger. Those events selected by LVL2 are built into complete events, which are passed to the EF for a further stage of event selection and classification using off-line algorithms. Events surviving the EF selection are passed for off-line storage. The two stages of HLT are implemented on processor farms. The concept of distributing the selection process between LVL2 and EF is a key element in the architecture, which allows it to be flexible to changes (luminosity, detector knowledge, background conditions etc.) Although there are some differences in the requirements between these sub-systems there are many commonalities. An overview of the dataflow (event selection) an...

  11. Using MaxCompiler for High Level Synthesis of Trigger Algorithms

    CERN Document Server

    Summers, Sioni Paris; Sanders, P.

    2017-01-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.
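    The Kalman filter track fit mentioned above works by alternating predict and update steps as it iterates through detector measurements. As a hedged illustration only (the scalar state, measurement values, and variances below are invented; the actual CMS design runs a multidimensional filter in FPGA arithmetic), one predict/update iteration can be sketched as:

```python
# Minimal scalar Kalman filter, illustrating the kind of predict/update
# iteration a track-fitting trigger performs per measurement. All values
# here are illustrative, not taken from the CMS implementation.

def kalman_step(x, P, z, R, Q=0.0):
    """One predict/update iteration for a static scalar state.

    x, P : prior state estimate and its variance
    z, R : new measurement and its variance
    Q    : process noise added in the predict step
    """
    P = P + Q                  # predict: state is static, variance grows by Q
    K = P / (P + R)            # Kalman gain
    x = x + K * (z - x)        # update state with the innovation
    P = (1.0 - K) * P          # update variance
    return x, P

# Fuse four noisy "hit" measurements of the same underlying quantity.
x, P = 0.0, 1e6               # diffuse prior
for z in [1.2, 0.8, 1.1, 0.9]:
    x, P = kalman_step(x, P, z, R=0.04)
print(round(x, 2))            # estimate converges near the true value 1.0
```

Each iteration is a fixed, short sequence of multiply-adds and one division, which is why a hardware implementation can quote a fixed latency per iteration.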

  12. Using MaxCompiler for the high level synthesis of trigger algorithms

    International Nuclear Information System (INIS)

    Summers, S.; Rose, A.; Sanders, P.

    2017-01-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  13. Using MaxCompiler for the high level synthesis of trigger algorithms

    Science.gov (United States)

    Summers, S.; Rose, A.; Sanders, P.

    2017-02-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  14. High frame rate retrospectively triggered Cine MRI for assessment of murine diastolic function

    NARCIS (Netherlands)

    Coolen, Bram F.; Abdurrachim, Desiree; Motaal, Abdallah G.; Nicolay, Klaas; Prompers, Jeanine J.; Strijkers, Gustav J.

    2013-01-01

    To assess left ventricular (LV) diastolic function in mice with Cine MRI, a high frame rate (>60 frames per cardiac cycle) is required. For conventional electrocardiography-triggered Cine MRI, the frame rate is inversely proportional to the pulse repetition time (TR). However, TR cannot be lowered
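    The inverse relation between frame rate and TR stated in this abstract can be made concrete. Assuming a typical murine R-R interval of roughly 120 ms (a common literature value, not a figure from this study), the achievable frame count per cardiac cycle is just the cardiac period divided by TR:

```python
# Frames per cardiac cycle in triggered Cine MRI: the number of cine
# frames equals the cardiac period divided by the repetition time TR.
# The murine R-R interval (~120 ms, i.e. ~500 bpm) is an assumed typical
# value, not taken from this study.

def frames_per_cycle(rr_ms, tr_ms):
    return rr_ms // tr_ms

print(frames_per_cycle(120, 6))    # conventional TR of ~6 ms gives 20 frames
print(frames_per_cycle(120, 1.5))  # TR of ~1.5 ms exceeds the 60-frame target
```

This shows why hitting more than 60 frames per cycle with conventional triggering would force TR below about 2 ms, motivating the retrospective approach.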

  15. Real-time TPC analysis with the ALICE High-Level Trigger

    International Nuclear Information System (INIS)

    Lindenstruth, V.; Loizides, C.; Roehrich, D.; Skaali, B.; Steinbeck, T.; Stock, R.; Tilsner, H.; Ullaland, K.; Vestboe, A.; Vik, T.

    2004-01-01

    The ALICE High-Level Trigger processes data online, to either select interesting (sub-) events, or to compress data efficiently by modeling techniques. Focusing on the main data source, the Time Projection Chamber, the architecture of the system and the current state of the tracking and compression methods are outlined

  16. Memorial Hermann: high reliability from board to bedside.

    Science.gov (United States)

    Shabot, M Michael; Monroe, Douglas; Inurria, Juan; Garbade, Debbi; France, Anne-Claire

    2013-06-01

    In 2006 the Memorial Hermann Health System (MHHS), which includes 12 hospitals, began applying principles embraced by high reliability organizations (HROs). Three factors support its HRO journey: (1) aligned organizational structure with transparent management systems and compressed reporting processes; (2) Robust Process Improvement (RPI) with high-reliability interventions; and (3) cultural establishment, sustainment, and evolution. The Quality and Safety strategic plan contains three domains, each with a specific set of measures that provide goals for performance: (1) "Clinical Excellence;" (2) "Do No Harm;" and (3) "Saving Lives," as measured by the Serious Safety Event rate. MHHS uses a uniform approach to performance improvement--RPI, which includes Six Sigma, Lean, and change management, to solve difficult safety and quality problems. The 9 acute care hospitals provide multiple opportunities to integrate high-reliability interventions and best practices across MHHS. For example, MHHS partnered with the Joint Commission Center for Transforming Healthcare in its inaugural project to establish reliable hand hygiene behaviors, which improved MHHS's average hand hygiene compliance rate from 44% to 92% currently. Soon after compliance exceeded 85% at all 12 hospitals, the average rate of central line-associated bloodstream and ventilator-associated pneumonias decreased to essentially zero. MHHS's size and diversity require a disciplined approach to performance improvement and systemwide achievement of measurable success. The most significant cultural change at MHHS has been the expectation for 100% compliance with evidence-based quality measures and 0% incidence of patient harm.

  17. Challenges of front-end and triggering electronics for High Granularity Calorimetry

    CERN Document Server

    Puljak, Ivica

    2017-01-01

    A high granularity calorimeter is presently being designed by the CMS Collaboration to replace the existing endcap detectors. It must be able to cope with the very high collision rates, imposing the development of novel filtering and triggering strategies, as well as with the harsh radiation environment of the high-luminosity LHC. In this paper we present an overview of the full electronics architecture and the performance of prototype components and algorithms.

  18. High-Reliable PLC RTOS Development and RPS Structure Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, H. S.; Song, D. Y.; Sohn, D. S.; Kim, J. H. [Enersys Co., Daejeon (Korea, Republic of)

    2008-04-15

    One of the KNICS objectives is to develop a platform for Nuclear Power Plant (NPP) I and C (Instrumentation and Control) systems, especially the plant protection system. The developed platform is POSAFE-Q, and this work supports the development of POSAFE-Q with the development of a highly reliable real-time operating system (RTOS) and programmable logic device (PLD) software. Another KNICS objective is to develop safety I and C systems, such as the Reactor Protection System (RPS) and the Engineered Safety Feature-Component Control System (ESF-CCS). This work plays an important role in the structure analysis for the RPS. Validation and verification (V and V) of safety-critical software is essential to make a digital plant protection system highly reliable and safe. Generally, the reliability and safety of a software-based system can be improved by a strict quality assurance framework including the software development itself. In other words, through V and V the reliability and safety of a system can be improved, and development activities like software requirement specification, software design specification, component tests, integration tests, and system tests shall be appropriately documented for V and V.

  19. High-Reliable PLC RTOS Development and RPS Structure Analysis

    International Nuclear Information System (INIS)

    Sohn, H. S.; Song, D. Y.; Sohn, D. S.; Kim, J. H.

    2008-04-01

    One of the KNICS objectives is to develop a platform for Nuclear Power Plant (NPP) I and C (Instrumentation and Control) systems, especially the plant protection system. The developed platform is POSAFE-Q, and this work supports the development of POSAFE-Q with the development of a highly reliable real-time operating system (RTOS) and programmable logic device (PLD) software. Another KNICS objective is to develop safety I and C systems, such as the Reactor Protection System (RPS) and the Engineered Safety Feature-Component Control System (ESF-CCS). This work plays an important role in the structure analysis for the RPS. Validation and verification (V and V) of safety-critical software is essential to make a digital plant protection system highly reliable and safe. Generally, the reliability and safety of a software-based system can be improved by a strict quality assurance framework including the software development itself. In other words, through V and V the reliability and safety of a system can be improved, and development activities like software requirement specification, software design specification, component tests, integration tests, and system tests shall be appropriately documented for V and V.

  20. Validation and Test-Retest Reliability of New Thermographic Technique Called Thermovision Technique of Dry Needling for Gluteus Minimus Trigger Points in Sciatica Subjects and TrPs-Negative Healthy Volunteers

    Science.gov (United States)

    Rychlik, Michał; Samborski, Włodzimierz

    2015-01-01

    The aim of this study was to assess the validity and test-retest reliability of Thermovision Technique of Dry Needling (TTDN) for the gluteus minimus muscle. TTDN is a new thermography approach used to support trigger points (TrPs) diagnostic criteria by presence of short-term vasomotor reactions occurring in the area where TrPs refer pain. Method. Thirty chronic sciatica patients (n=15 TrP-positive and n=15 TrPs-negative) and 15 healthy volunteers were evaluated by TTDN three times during two consecutive days based on TrPs of the gluteus minimus muscle confirmed additionally by referred pain presence. TTDN employs average temperature (T avr), maximum temperature (T max), low/high isothermal-area, and autonomic referred pain phenomenon (AURP) that reflects vasodilatation/vasoconstriction. Validity and test-retest reliability were assessed concurrently. Results. Two components of TTDN validity and reliability, T avr and AURP, had almost perfect agreement according to κ (e.g., thigh: 0.880 and 0.938; calf: 0.902 and 0.956, resp.). The sensitivity for T avr, T max, AURP, and high isothermal-area was 100% for everyone, but specificity of 100% was for T avr and AURP only. Conclusion. TTDN is a valid and reliable method for T avr and AURP measurement to support TrPs diagnostic criteria for the gluteus minimus muscle when digitally evoked referred pain pattern is present. PMID:26137486
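    The agreement and diagnostic figures quoted above (κ, sensitivity, specificity) are standard statistics computed from a 2x2 table. A minimal sketch, using invented counts rather than the study's data:

```python
# Sensitivity, specificity, and Cohen's kappa from a 2x2 table.
# tp/fn/fp/tn counts below are made up for illustration only.

def sens_spec_kappa(tp, fn, fp, tn):
    n = tp + fn + fp + tn
    sens = tp / (tp + fn)                       # true-positive rate
    spec = tn / (tn + fp)                       # true-negative rate
    po = (tp + tn) / n                          # observed agreement
    # chance agreement from the row/column marginals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)                # Cohen's kappa
    return sens, spec, kappa

sens, spec, kappa = sens_spec_kappa(tp=14, fn=1, fp=0, tn=15)
print(round(sens, 3), spec, round(kappa, 3))
```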

  1. Development of the ATLAS High-Level Trigger Steering and Inclusive Searches for Supersymmetry

    CERN Document Server

    Eifert, T

    2009-01-01

    The presented thesis is divided into two distinct parts. The subject of the first part is the ATLAS high-level trigger (HLT), in particular the development of the HLT Steering, and the trigger user-interface. The second part presents a study of inclusive supersymmetry searches, including a novel background estimation method for the relevant Standard Model (SM) processes. The trigger system of the ATLAS experiment at the Large Hadron Collider (LHC) performs the on-line physics selection in three stages: level-1 (LVL1), level-2 (LVL2), and the event filter (EF). LVL2 and EF together form the HLT. The HLT receives events containing detector data from high-energy proton (or heavy ion) collisions, which pass the LVL1 selection at a maximum rate of 75 kHz. It must reduce this rate to ~200 Hz, while retaining the most interesting physics. The HLT is a software trigger and runs on a large computing farm. At the heart of the HLT is the Steering software. The HLT Steering must reach a decision whether or not to accept ...

  2. Online measurement of LHC beam parameters with the ATLAS High Level Trigger

    International Nuclear Information System (INIS)

    Strauss, E

    2012-01-01

    We present an online measurement of the LHC beamspot parameters in ATLAS using the High Level Trigger (HLT). When a significant change is detected in the measured beamspot, it is distributed to the HLT. There, trigger algorithms like b-tagging which calculate impact parameters or decay lengths benefit from a precise, up-to-date set of beamspot parameters. Additionally, online feedback is sent to the LHC operators in real time. The measurement is performed by an algorithm running on the Level 2 trigger farm, leveraging the high rate of usable events. Dedicated algorithms perform a full scan of the silicon detector to reconstruct event vertices from registered tracks. The distribution of these vertices is aggregated across the farm and their shape is extracted through fits every 60 seconds to determine the beamspot position, size, and tilt. The reconstructed beamspot values are corrected for detector resolution effects, measured in situ using the separation of vertices whose tracks have been split into two collections. Furthermore, measurements for individual bunch crossings have allowed for studies of single-bunch distributions as well as the behavior of bunch trains. This talk will cover the constraints imposed by the online environment and describe how these measurements are accomplished with the given resources. The algorithm tasks must be completed within the time constraints of the Level 2 trigger, with limited CPU and bandwidth allocations. This places an emphasis on efficient algorithm design and the minimization of data requests.
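    The aggregate-and-fit step described above can be sketched as follows. The beam position, beam width, and vertexing resolution used here are synthetic numbers, and the real algorithm fits full three-dimensional distributions including tilt, but the idea of unfolding the vertex resolution in quadrature is the same:

```python
# Sketch of a beamspot extraction: collect primary-vertex positions,
# estimate the beam centroid and width, and unfold the vertex resolution
# in quadrature. All numbers are synthetic.

import math
import random
import statistics

random.seed(1)
true_pos, true_width, resolution = 0.05, 0.015, 0.010   # mm, assumed values

# Each reconstructed vertex = beam position smeared by beam width and
# vertexing resolution (both Gaussian, so they add in quadrature).
vertices = [random.gauss(true_pos, math.hypot(true_width, resolution))
            for _ in range(20000)]

pos = statistics.fmean(vertices)
raw_width = statistics.pstdev(vertices)
# subtract the known resolution in quadrature to recover the beam width
beam_width = math.sqrt(max(raw_width ** 2 - resolution ** 2, 0.0))

print(round(pos, 3), round(beam_width, 3))
```

In the real system the resolution term is itself measured in situ from split-vertex separations, as the abstract notes, rather than assumed.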

  3. Assessing high reliability via Bayesian approach and accelerated tests

    International Nuclear Information System (INIS)

    Erto, Pasquale; Giorgio, Massimiliano

    2002-01-01

    Sometimes the assessment of very high reliability levels is difficult for the following main reasons: - the high reliability level of each item makes it impossible to obtain, in a reasonably short time, a sufficient number of failures; - the high cost of the high reliability items to submit to life tests makes it unfeasible to collect enough data for 'classical' statistical analyses. In the above context, this paper presents a Bayesian solution to the problem of estimation of the parameters of the Weibull-inverse power law model, on the basis of a limited number (say six) of life tests, carried out at different stress levels, all higher than the normal one. The over-stressed (i.e. accelerated) tests allow the use of experimental data obtained in a reasonably short time. The Bayesian approach enables one to reduce the required number of failures adding to the failure information the available a priori engineers' knowledge. This engineers' involvement conforms to the most advanced management policy that aims at involving everyone's commitment in order to obtain total quality. A Monte Carlo study of the non-asymptotic properties of the proposed estimators and a comparison with the properties of maximum likelihood estimators closes the work
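    The Weibull-inverse power law model named in this abstract ties the Weibull characteristic life η to the stress level V via η(V) = C / V^n, so lives observed under over-stress extrapolate down to the normal stress. A sketch with invented parameter values (the Bayesian prior elicitation itself is not shown):

```python
# Weibull-inverse power law acceleration model: the Weibull scale
# (characteristic life) at stress V is eta(V) = C / V**n. The parameter
# values C, n, beta below are invented for illustration.

import math

def eta_ipl(V, C, n):
    """Characteristic life at stress V under the inverse power law."""
    return C / V ** n

def weibull_reliability(t, eta, beta):
    """Weibull survival probability at time t."""
    return math.exp(-(t / eta) ** beta)

C, n, beta = 2.0e7, 2.0, 1.5          # assumed model parameters
eta_acc = eta_ipl(100.0, C, n)        # life at the accelerated stress
eta_use = eta_ipl(10.0, C, n)         # life at the (10x lower) use stress

print(eta_use / eta_acc)              # acceleration factor (100/10)**n
print(weibull_reliability(1.0e4, eta_use, beta))
```

With n = 2, a tenfold stress reduction stretches the characteristic life a hundredfold, which is exactly why a handful of short over-stressed tests can inform reliability at the normal stress.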

  4. Designing reliability into high-effectiveness industrial gas turbine regenerators

    International Nuclear Information System (INIS)

    Valentino, S.J.

    1979-01-01

    The paper addresses the measures necessary to achieve a reliable regenerator design that can withstand higher temperatures (1000-1200 F) and many start and stop cycles - conditions encountered in high-efficiency operation in pipeline applications. The discussion is limited to three major areas: (1) structural analysis of the heat exchanger core, the part of the regenerator that must withstand the higher temperatures and cyclic duty; (2) materials data and material selection; and (3) a comprehensive test program to demonstrate the reliability of the regenerator. This program includes life-cycle tests, pressure containment in fin panels, a core-to-core joint structural test, a bellows pressure containment test, a sliding pad test, a core gas-side passage flow distribution test, and production tests. Today's regenerators must have high cyclic life capability, stainless steel construction, and a long fault-free service life of 120,000 hr

  5. Reliability of high power electron accelerators for radiation processing

    International Nuclear Information System (INIS)

    Zimek, Z.

    2011-01-01

    Accelerators applied for radiation processing are installed in industrial facilities, where the accelerator availability coefficient should be at the level of 95% to fulfill industry standards. In practice, the operation of an electron accelerator reveals a number of short failures and a few long-lasting ones. Some technical shortcomings can be overcome by practical implementation of the experience gained in accelerator technology development by different accelerator manufacturers. The reliability/availability of high power accelerators for application in the flue gas treatment process must be dramatically improved to meet industrial standards. Support of accelerator technology dedicated to environment protection should be provided by governmental and international institutions to overcome the accelerator reliability/availability problem and the high risk and low direct profit in this particular application. (author)

  6. Reliability of high power electron accelerators for radiation processing

    Energy Technology Data Exchange (ETDEWEB)

    Zimek, Z. [Department of Radiation Chemistry and Technology, Institute of Nuclear Chemistry and Technology, Warsaw (Poland)

    2011-07-01

    Accelerators applied for radiation processing are installed in industrial facilities, where the accelerator availability coefficient should be at the level of 95% to fulfill industry standards. In practice, the operation of an electron accelerator reveals a number of short failures and a few long-lasting ones. Some technical shortcomings can be overcome by practical implementation of the experience gained in accelerator technology development by different accelerator manufacturers. The reliability/availability of high power accelerators for application in the flue gas treatment process must be dramatically improved to meet industrial standards. Support of accelerator technology dedicated to environment protection should be provided by governmental and international institutions to overcome the accelerator reliability/availability problem and the high risk and low direct profit in this particular application. (author)

  7. Survey of industry methods for producing highly reliable software

    International Nuclear Information System (INIS)

    Lawrence, J.D.; Persons, W.L.

    1994-11-01

    The Nuclear Reactor Regulation Office of the US Nuclear Regulatory Commission is charged with assessing the safety of new instrument and control designs for nuclear power plants which may use computer-based reactor protection systems. Lawrence Livermore National Laboratory has evaluated the latest techniques in software reliability for measurement, estimation, error detection, and prediction that can be used during the software life cycle as a means of risk assessment for reactor protection systems. One aspect of this task has been a survey of the software industry to collect information to help identify the design factors used to improve the reliability and safety of software. The intent was to discover what practices really work in industry and what design factors are used by industry to achieve highly reliable software. The results of the survey are documented in this report. Three companies participated in the survey: Computer Sciences Corporation, International Business Machines (Federal Systems Company), and TRW. Discussions were also held with NASA Software Engineering Lab/University of Maryland/CSC, and the AIAA Software Reliability Project

  8. Gearbox Reliability Collaborative High-Speed Shaft Calibration

    Energy Technology Data Exchange (ETDEWEB)

    Keller, J.; McNiff, B.

    2014-09-01

    Instrumentation has been added to the high-speed shaft, pinion, and tapered roller bearing pair of the Gearbox Reliability Collaborative gearbox to measure loads and temperatures. The new shaft bending moment and torque instrumentation was calibrated and the purpose of this document is to describe this calibration process and results, such that the raw shaft bending and torque signals can be converted to the proper engineering units and coordinate system reference for comparison to design loads and simulation model predictions.
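    The conversion of raw shaft signals to engineering units described here rests on a calibration fit. A minimal sketch, an ordinary least-squares line through invented (raw counts, applied torque) calibration points rather than the GRC's actual calibration data:

```python
# Linear calibration: fit engineering_units = slope * raw + offset to
# known calibration points, then apply it to new raw readings. The
# calibration points below are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares slope and offset for y = slope * x + offset."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    offset = my - slope * mx
    return slope, offset

raw     = [102.0, 205.0, 301.0, 398.0]   # raw strain-gauge counts (assumed)
applied = [10.0, 20.1, 29.9, 40.0]       # known applied torque, kNm (assumed)
slope, offset = fit_line(raw, applied)

def to_engineering(raw_value):
    """Convert a raw reading to engineering units via the calibration."""
    return slope * raw_value + offset

print(round(to_engineering(250.0), 1))
```

A real shaft-bending calibration additionally involves a coordinate-system rotation between the two bending axes, which this scalar sketch omits.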

  9. Achieving High Reliability Operations Through Multi-Program Integration

    Energy Technology Data Exchange (ETDEWEB)

    Holly M. Ashley; Ronald K. Farris; Robert E. Richards

    2009-04-01

    Over the last 20 years the Idaho National Laboratory (INL) has adopted a number of operations and safety-related programs, each of which has periodically taken its turn in the limelight. As new programs have come along there has been natural competition for resources, focus, and commitment. In the last few years, the INL has made real progress in integrating all these programs and is starting to realize important synergies. Contributing to this integration are both collaborative individuals and an emerging shared vision and goal of the INL fully maturing in its high reliability operations. This goal is so powerful because the concept of high reliability operations (and the resulting organizations) is a masterful amalgam and orchestrator of the best of all the participating programs (i.e. conduct of operations, behavior based safety, human performance, voluntary protection, quality assurance, and integrated safety management). This paper is a brief recounting of the lessons learned, thus far, at the INL in bringing previously competing programs into harmony under the goal (umbrella) of seeking to perform regularly as a high reliability organization. In addition to a brief diagram-illustrated historical review, the authors share the INL’s primary successes (things already effectively stopped or started) and the gaps yet to be bridged.

  10. ELM triggering by energetic particle driven mode in wall-stabilized high-β plasmas

    International Nuclear Information System (INIS)

    Matsunaga, G.; Aiba, N.; Shinohara, K.; Asakura, N.; Isayama, A.; Oyama, N.

    2013-01-01

    In the JT-60U high-β plasmas above the no-wall β limit, triggering of an edge localized mode (ELM) by an energetic particle (EP)-driven mode has been observed. This EP-driven mode is thought to be driven by trapped EPs and has been named the EP-driven wall mode (EWM) on JT-60U (Matsunaga et al 2009 Phys. Rev. Lett. 103 045001). When the EWM appears in an ELMy H-mode phase, ELM crashes are reproducibly synchronized with the EWM bursts. The EWM-triggered ELM has a higher repetition frequency and less energy loss than the natural ELM. To trigger an ELM by the EP-driven mode, certain conditions appear to be needed: an EWM with large amplitude and growth rate, and marginal edge stability. In the scrape-off layer region, several measurements indicate an ion loss induced by the EWM. This ion transport is considered to be EP transport through the edge region. From these observations, EP contributions to edge stability are discussed as one of the ELM triggering mechanisms. (paper)

  11. Study of data on the associated momentum on the trigger side in high p_T hadron production

    International Nuclear Information System (INIS)

    Alonso, J.L.; Antolin, J.; Azcoiti, V.; Bravo, J.R.; Cruz, A.; Zaragoza Univ.

    1980-01-01

    The British-French-Scandinavian collaboration has recently studied the non-trigger charged mean momentum in different rapidity regions on the trigger hemisphere, ⟨p_x⟩, in the collision of two hadrons at the CERN Intersecting Storage Rings (ISR). In particular, they give for the rapidity regions y < 0.5 and y < 1 the values of the slope, α, of ⟨p_x⟩ with the trigger momentum p_T^t. Several authors have analysed those values of α in the framework of hard-scattering models, which predict values independent of p_T^t for ⟨z_c⟩, the longitudinal momentum fraction of the outgoing hard-scattered system taken by the trigger. From this analysis they give estimates of ⟨z_c⟩ that are very difficult to reconcile with those calculated in the Feynman, Field and Fox hard-scattering model or in the QCD treatment of high-p_T hadron production. The authors of the present paper have looked for, and found, other data whose model-independent analysis is more feasible than that of the data mentioned above. More specifically, they analyse in the framework of the hard-scattering models, but otherwise model-independently, data on ⟨p_x⟩ in two other rapidity regions (y < 3, 2 < y < 3) and find that consistency of the average slopes, α, in these two regions is only achieved with mean values of ⟨z_c⟩ significantly increasing with p_T^t and close in value to those obtained by Feynman et al. (orig.)

  12. Commissioning of the ATLAS high-level trigger with single beam and cosmic rays

    CERN Document Server

    Özcan, V Erkcan

    2010-01-01

    ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). Using fast reconstruction algorithms, its trigger system needs to efficiently reject a huge rate of background events and still select potentially interesting ones with good efficiency. After a first processing level using custom electronics, the trigger selection is made by software running on two processor farms, designed to have a total of around two thousand multi-core machines. This system is known as the High Level Trigger (HLT). To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection of background events. The recent LHC startup and short single-beam run provided a "stress test" of the trigger. Following this period, ATLAS continued to collect cosmic-ray events for detector alignment and calibration purposes. These running periods allowed strict tests of the HLT reconstruction and selection algorithms as we...

  13. Reliability studies of high operating temperature MCT photoconductor detectors

    Science.gov (United States)

    Wang, Wei; Xu, Jintong; Zhang, Yan; Li, Xiangyang

    2010-10-01

    This paper concerns HgCdTe (MCT) infrared photoconductor detectors with high operating temperature. The near-room-temperature operation of these detectors has the advantages of light weight, lower cost and convenient usage. Their performances are modest and they suffer from reliability problems. These detectors face stability issues in the package, the chip bonding area and the passivation layers. It is important to evaluate and improve the reliability of such detectors. Defective detectors were studied with SEM (scanning electron microscopy) and optical microscopy. Statistically significant differences were observed between the influence of operating temperature and the influence of humidity. It was also found that humidity has a statistically significant influence upon the stability of the chip bonding and passivation layers, and that the amount of humidity is not strongly correlated to the damage on the surface. Considering the commonly found failure modes in detectors, special test structures were designed to improve the reliability of detectors. An accelerated life test was also implemented to estimate the lifetime of the high operating temperature MCT photoconductor detectors.

  14. Engineering high reliability, low-jitter Marx generators

    International Nuclear Information System (INIS)

    Schneider, L.X.; Lockwood, G.J.

    1985-01-01

    Multimodule pulsed power accelerators typically require high module reliability and nanosecond regime simultaneity between modules. Energy storage using bipolar Marx generators can meet these requirements. Experience gained from computer simulations and the development of the DEMON II Marx generator has led to a fundamental understanding of the operation of these multistage devices. As a result of this research, significant improvements in erection time jitter and reliability have been realized in multistage, bipolar Marx generators. Erection time jitter has been measured as low as 2.5 nanoseconds for the 3.2MV, 16-stage PBFA I Marx and 3.5 nanoseconds for the 6.0MV, 30-stage PBFA II (DEMON II) Marx, while maintaining exceptionally low prefire rates. Performance data are presented from the DEMON II Marx research program, as well as discussions on the use of computer simulations in designing low-jitter Marx generators

  15. Monitoring and Tracking the LHC Beam Spot within the ATLAS High Level Trigger

    CERN Document Server

    Winklmeier, F; The ATLAS collaboration

    2012-01-01

    The parameters of the beam spot produced by the LHC in the ATLAS interaction region are computed online using the ATLAS High Level Trigger (HLT) system. The high rate of triggered events is exploited to make precise measurements of the position, size and orientation of the luminous region in near real-time, as these parameters change significantly even during a single data-taking run. We present the challenges, solutions and results for the online determination, monitoring and beam spot feedback system in ATLAS. A specially designed algorithm, which uses tracks registered in the silicon detectors to reconstruct event vertices, is executed on the HLT processor farm of several thousand CPU cores. Monitoring histograms from all the cores are sampled and aggregated across the farm every 60 seconds. The reconstructed beam values are corrected for detector resolution effects, measured in situ from the separation of vertices whose tracks have been split into two collections. Furthermore, measurements for individual ...

  16. A System for Monitoring and Tracking the LHC Beam Spot within the ATLAS High Level Trigger

    CERN Document Server

    Bartoldus, R; The ATLAS collaboration; Cogan, J; Salnikov, A; Strauss, E; Winklmeier, F

    2012-01-01

    The parameters of the beam spot produced by the LHC in the ATLAS interaction region are computed online using the ATLAS High Level Trigger (HLT) system. The high rate of triggered events is exploited to make precise measurements of the position, size and orientation of the luminous region in near real-time, as these parameters change significantly even during a single data-taking run. We present the challenges, solutions and results for the online determination, monitoring and beam spot feedback system in ATLAS. A specially designed algorithm, which uses tracks registered in the silicon detectors to reconstruct event vertices, is executed on the HLT processor farm of several thousand CPU cores. Monitoring histograms from all the cores are sampled and aggregated across the farm every 60 seconds. The reconstructed beam values are corrected for detector resolution effects, measured in situ from the separation of vertices whose tracks have been split into two collections. Furthermore, measurements for individual ...

  17. The design of a fast Level-1 track trigger for the high luminosity upgrade of ATLAS.

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00413032; The ATLAS collaboration

    2016-01-01

    The high-luminosity upgrade of the LHC will increase the rate of proton-proton collisions by approximately a factor of 5 with respect to the initial LHC design. The ATLAS experiment will be upgraded accordingly, increasing its robustness and selectivity in the expected high-radiation environment. In particular, the earliest, hardware-based ATLAS trigger stage ("Level 1") will require higher rejection power while maintaining efficient selection on a wide variety of physics signatures. The key ingredient is the possibility of extracting tracking information from the brand-new all-silicon detector and using it in the selection process. While attractive, this solution poses a big challenge in the choice of the architecture, due to the reduced latency available at this trigger level (a few tens of microseconds) and the high expected working rates (of order MHz). In this paper, we review the design possibilities of such a system in a potential new trigger and readout architecture, and present the performance resulting from a d...

  18. Can Pulsed Electromagnetic Fields Trigger On-Demand Drug Release from High-Tm Magnetoliposomes?

    Directory of Open Access Journals (Sweden)

    Martina Nardoni

    2018-03-01

    Full Text Available Recently, magnetic nanoparticles (MNPs) have been used to trigger drug release from magnetoliposomes through a magneto-nanomechanical approach, where the mechanical actuation of the MNPs is used to enhance the membrane permeability. This result can be effectively achieved with a low-intensity, non-thermal alternating magnetic field (AMF), which, however, has found rare clinical application. Therefore, a different modality of generating non-thermal magnetic fields has now been investigated. Specifically, the intermittent signals generated by non-thermal pulsed electromagnetic fields (PEMFs) were applied to high-transition-temperature magnetoliposomes (high-Tm MLs) to verify whether they could efficiently trigger the release of a hydrophilic model drug. To this end, hydrophilic MNPs were combined with hydrogenated soybean phosphatidylcholine and cholesterol to design high-Tm MLs. The release of a dye was evaluated under the effect of PEMFs for different exposure times. The MNP motions produced by the PEMFs could effectively increase the bilayer permeability without affecting the liposome integrity, and resulted in nearly 20% release after 3 h of exposure. Therefore, the current contribution provides an exciting proof-of-concept for the ability of PEMFs to trigger drug release, considering that PEMFs already find application in therapy due to their anti-inflammatory effects.

  19. Can Pulsed Electromagnetic Fields Trigger On-Demand Drug Release from High-Tm Magnetoliposomes?

    Science.gov (United States)

    Nardoni, Martina; Della Valle, Elena; Liberti, Micaela; Relucenti, Michela; Casadei, Maria Antonietta; Paolicelli, Patrizia; Apollonio, Francesca; Petralito, Stefania

    2018-03-27

    Recently, magnetic nanoparticles (MNPs) have been used to trigger drug release from magnetoliposomes through a magneto-nanomechanical approach, where the mechanical actuation of the MNPs is used to enhance the membrane permeability. This result can be effectively achieved with a low-intensity, non-thermal alternating magnetic field (AMF), which, however, has found rare clinical application. Therefore, a different modality of generating non-thermal magnetic fields has now been investigated. Specifically, the intermittent signals generated by non-thermal pulsed electromagnetic fields (PEMFs) were applied to high-transition-temperature magnetoliposomes (high-Tm MLs) to verify whether they could efficiently trigger the release of a hydrophilic model drug. To this end, hydrophilic MNPs were combined with hydrogenated soybean phosphatidylcholine and cholesterol to design high-Tm MLs. The release of a dye was evaluated under the effect of PEMFs for different exposure times. The MNP motions produced by the PEMFs could effectively increase the bilayer permeability without affecting the liposome integrity, and resulted in nearly 20% release after 3 h of exposure. Therefore, the current contribution provides an exciting proof-of-concept for the ability of PEMFs to trigger drug release, considering that PEMFs already find application in therapy due to their anti-inflammatory effects.

  20. Highly-reliable laser diodes and modules for spaceborne applications

    Science.gov (United States)

    Deichsel, E.

    2017-11-01

    Laser applications are becoming more and more interesting for contemporary missions such as Earth observation or optical communication in space. One of these applications is light detection and ranging (LIDAR), which holds huge scientific potential for future missions. The Nd:YAG solid-state laser of such a LIDAR system is optically pumped using 808 nm emitting pump sources based on semiconductor laser diodes in quasi-continuous-wave (qcw) operation. Therefore, reliable and efficient laser diodes with increased output powers are an important requirement for a spaceborne LIDAR system. In the past, many tests were performed regarding the performance and lifetime of such laser diodes. There were also studies for spaceborne applications, but a test with long operation times at high powers and statistical relevance is pending. Other applications, such as science packages (e.g. Raman spectroscopy) on planetary rovers, also require reliable high-power light sources. Typically, fiber-coupled laser diode modules are used for such applications. Besides high reliability and lifetime, designs compatible with the harsh environmental conditions must be taken into account. Mechanical loads, such as shock or strong vibration, are expected due to take-off or landing procedures. Many temperature cycles with high change rates and large temperature differences must be taken into account due to sun-shadow effects in planetary orbits. Cosmic radiation has a strong impact on optical components and must also be taken into account. Last, hermetic sealing must be considered, since vacuum can have disadvantageous effects on optoelectronic components.

  1. High Reliability Prototype Quadrupole for the Next Linear Collider

    International Nuclear Information System (INIS)

    Spencer, Cherrill M

    2001-01-01

    The Next Linear Collider (NLC) will require over 5600 magnets, each of which must be highly reliable and/or quickly repairable in order that the NLC reach its 85% overall availability goal. A multidiscipline engineering team was assembled at SLAC to develop a more reliable electromagnet design than historically had been achieved at SLAC. This team carried out a Failure Mode and Effects Analysis (FMEA) on a standard SLAC quadrupole magnet system. They overcame a number of longstanding design prejudices, producing 10 major design changes. This paper describes how a prototype magnet was constructed and the extensive testing carried out on it to prove full functionality with an improvement in reliability. The magnet's fabrication cost will be compared to the cost of a magnet with the same requirements made in the historic SLAC way. The NLC will use over 1600 of these 12.7 mm bore quadrupoles with a range of integrated strengths from 0.6 to 132 Tesla, a maximum gradient of 135 Tesla per meter, an adjustment range of 0 to -20% and core lengths from 324 mm to 972 mm. The magnetic center must remain stable to within 1 micron during the 20% adjustment. A magnetic measurement set-up has been developed that can measure sub-micron shifts of a magnetic center. The prototype satisfied the center shift requirement over the full range of integrated strengths

  2. Reliability of force-velocity relationships during deadlift high pull.

    Science.gov (United States)

    Lu, Wei; Boyas, Sébastien; Jubeau, Marc; Rahmani, Abderrahmane

    2017-11-13

    This study aimed to evaluate the within- and between-session reliability of force, velocity and power performances and to assess the force-velocity relationship during the deadlift high pull (DHP). Nine participants performed two identical sessions of DHP with loads ranging from 30 to 70% of body mass. The force was measured by a force plate under the participants' feet. The velocity of the 'body + lifted mass' system was calculated by integrating the acceleration, and the power was calculated as the product of force and velocity. The force-velocity relationships were obtained from linear regression of both mean and peak values of force and velocity. The within- and between-session reliability was evaluated by using coefficients of variation (CV) and intraclass correlation coefficients (ICC). Results showed that the DHP force-velocity relationships were significantly linear (R² > 0.90), force and power values showed high reliability (ICC > 0.94), and mean and peak velocities showed good agreement in terms of CV. The DHP is therefore reliable and can be utilised as a tool to characterise individuals' muscular profiles.
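    The fitting and reliability statistics described in this abstract can be sketched numerically. In the sketch below the force, velocity, and trial values are illustrative placeholders, not data from the study:

    ```python
    import numpy as np

    # Illustrative force (N) and velocity (m/s) pairs for loads of 30-70% of
    # body mass; the numbers are invented for the sketch, not from the study.
    force = np.array([1200.0, 1350.0, 1500.0, 1650.0, 1800.0])
    velocity = np.array([2.4, 2.1, 1.8, 1.5, 1.2])

    # Linear force-velocity relationship F = F0 + slope * v, by least squares.
    slope, intercept = np.polyfit(velocity, force, 1)
    r_squared = np.corrcoef(velocity, force)[0, 1] ** 2

    # Within-session reliability of a repeated measure: coefficient of variation.
    trials = np.array([1480.0, 1500.0, 1520.0])  # repeated peak-force trials (N)
    cv_percent = 100.0 * trials.std(ddof=1) / trials.mean()

    print(f"slope = {slope:.0f} N/(m/s), F0 = {intercept:.0f} N")
    print(f"R^2 = {r_squared:.2f}, CV = {cv_percent:.2f}%")
    ```

    An ICC would additionally separate between-participant from trial-to-trial variance; it is omitted here for brevity.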

  3. Pulsed laser triggered high speed microfluidic fluorescence activated cell sorter

    Science.gov (United States)

    Wu, Ting-Hsiang; Chen, Yue; Park, Sung-Yong; Hong, Jason; Teslaa, Tara; Zhong, Jiang F.; Di Carlo, Dino; Teitell, Michael A.

    2014-01-01

    We report a high speed and high purity pulsed laser triggered fluorescence activated cell sorter (PLACS) with a sorting throughput of up to 20 000 mammalian cells per second with 37% sorting purity and 90% cell viability in enrichment mode, and >90% purity in high-purity mode at 1500 cells per second or 3000 beads per second. Fast switching (30 μs) and a small perturbation volume (~90 pL) are achieved by a unique sorting mechanism in which explosive vapor bubbles are generated using focused laser pulses in a single-layer microfluidic PDMS channel. PMID:22361780

  4. Offshore compression system design for low cost and high reliability

    Energy Technology Data Exchange (ETDEWEB)

    Castro, Carlos J. Rocha de O.; Carrijo Neto, Antonio Dias; Cordeiro, Alexandre Franca [Chemtech Engineering Services and Software Ltd., Rio de Janeiro, RJ (Brazil). Special Projects Div.], Emails: antonio.carrijo@chemtech.com.br, carlos.rocha@chemtech.com.br, alexandre.cordeiro@chemtech.com.br

    2010-07-01

    In offshore oil fields, the oil streams coming from the wells usually have significant amounts of gas. This gas is separated at low pressure and has to be compressed to the export pipeline pressure, usually a high pressure, to reduce the required diameter of the pipelines. In the past, these gases were flared, but nowadays there is increasing pressure to improve the energy efficiency of oil rigs and to use this gaseous fraction. The most expensive equipment in this kind of plant is the compression and power generation systems, the second being a strong function of the first, because the most power-consuming equipment items are the compressors. For this reason, the optimization of the compression system in terms of efficiency and cost is determinant to the plant profit. The availability of the plant also has a strong influence on the plant profit, especially in gas fields where the products have a relatively low aggregated value compared to oil. Because of this, the third design variable of the compression system becomes reliability. The higher the reliability, the larger the plant production. The main way to improve the reliability of a compression system is the use of multiple compression trains in parallel, in a 2x50% or 3x50% configuration, with one train in stand-by. Such configurations are possible and have advantages and disadvantages, but the main side effect is the increase in cost. This is common offshore practice, but it does not always significantly improve the plant availability, depending on the upstream process system. A series arrangement and a critical evaluation of the overall system can in some cases provide a cheaper system with equal or better performance. This paper shows a case study of the procedure to evaluate a compression system design to improve reliability without an extreme cost increase, balancing the number of equipment items, the series or parallel arrangement, and the driver selection.
Two case studies will be
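    The trade-off between the 2x50% and 3x50% (one train in stand-by) configurations discussed above comes down to a k-out-of-n availability calculation. A minimal sketch, assuming identical, independent trains and an illustrative single-train availability of 0.95:

    ```python
    from math import comb

    def k_out_of_n_availability(k: int, n: int, a: float) -> float:
        """Probability that at least k of n identical, independent trains are up."""
        return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

    a = 0.95  # assumed availability of a single compression train (illustrative)

    # 2x50%: full export capacity requires both trains running (series logic).
    avail_2x50 = a ** 2
    # 3x50% with one stand-by train: any 2 of the 3 trains give full capacity.
    avail_3x50 = k_out_of_n_availability(2, 3, a)

    print(f"2x50% (both required): {avail_2x50:.4f}")
    print(f"3x50% (2-out-of-3):    {avail_3x50:.4f}")
    ```

    The stand-by train raises full-capacity availability from about 0.90 to about 0.99 under these assumed numbers, which is exactly the cost-versus-availability balance the paper weighs.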

  5. High-Resolution Phenotypic Landscape of the RNA Polymerase II Trigger Loop.

    Directory of Open Access Journals (Sweden)

    Chenxi Qiu

    2016-11-01

    Full Text Available The active sites of multisubunit RNA polymerases have a "trigger loop" (TL that multitasks in substrate selection, catalysis, and translocation. To dissect the Saccharomyces cerevisiae RNA polymerase II TL at individual-residue resolution, we quantitatively phenotyped nearly all TL single variants en masse. Three mutant classes, revealed by phenotypes linked to transcription defects or various stresses, have distinct distributions among TL residues. We find that mutations disrupting an intra-TL hydrophobic pocket, proposed to provide a mechanism for substrate-triggered TL folding through destabilization of a catalytically inactive TL state, confer phenotypes consistent with pocket disruption and increased catalysis. Furthermore, allele-specific genetic interactions among TL and TL-proximal domain residues support the contribution of the funnel and bridge helices (BH to TL dynamics. Our structural genetics approach incorporates structural and phenotypic data for high-resolution dissection of transcription mechanisms and their evolution, and is readily applicable to other essential yeast proteins.

  6. Real Time Global Tests of the ALICE High Level Trigger Data Transport Framework

    CERN Document Server

    Becker, B.; Cicalo J.; Cleymans, C.; de Vaux, G.; Fearick, R.W.; Lindenstruth, V.; Richter, M.; Rorich, D.; Staley, F.; Steinbeck, T.M.; Szostak, A.; Tilsner, H.; Weis, R.; Vilakazi, Z.Z.

    2008-01-01

    The High Level Trigger (HLT) system of the ALICE experiment is an online event filter and trigger system designed for input bandwidths of up to 25 GB/s at event rates of up to 1 kHz. The system is designed as a scalable PC cluster, implementing several hundred nodes. The transport of data in the system is handled by an object-oriented data flow framework operating on the basis of the publisher-subscriber principle, designed to be fully pipelined with minimal processing overhead and communication latency in the cluster. In this paper, we report the latest measurements in which this framework has been operated on five different sites over a global north-south link extending more than 10,000 km, processing a "real-time" data flow.
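    The publisher-subscriber principle underlying such a data flow framework can be illustrated with a minimal single-process sketch; the actual ALICE framework is a pipelined C++ system, and the class names and event fields here are hypothetical:

    ```python
    class Publisher:
        """Announces each event fragment to every subscribed processing stage."""
        def __init__(self):
            self._subscribers = []

        def subscribe(self, callback):
            self._subscribers.append(callback)

        def publish(self, event):
            for callback in self._subscribers:
                callback(event)

    # Two pipelined stages: a filter stage republishes accepted events to a
    # downstream collection stage, mimicking a trigger chain.
    raw = Publisher()
    filtered = Publisher()
    accepted = []

    raw.subscribe(lambda ev: filtered.publish(ev) if ev["energy"] > 10 else None)
    filtered.subscribe(lambda ev: accepted.append(ev["id"]))

    for i, e in enumerate([5.0, 12.0, 30.0]):
        raw.publish({"id": i, "energy": e})

    print(accepted)  # -> [1, 2]
    ```

    Because stages only exchange event references through publish/subscribe, each stage can run independently, which is what makes the real framework pipelined and low-latency.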

  7. Electrons and photons at High Level Trigger in CMS for Run II

    CERN Document Server

    Bin Anuar, Afiq Aizuddin

    2015-01-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increase in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. New approaches have been studied to keep the HLT output rate manageable while maintaining thresholds low enough to cover physics analyses. The strategy mainly relies on porting online the ingredients that have been successfully applied in the offline reconstruction, thus allowing the HLT selection to move closer to the offline cuts. Improvements in the HLT electron and photon definitions will be presented, focusing in particular on the updated clustering algorithm and energy calibration procedure, the new Particle-Flow-based isolation approach and pileup mitigation techniques, a...

  8. Online Measurement of LHC Beam Parameters with the ATLAS High Level Trigger

    CERN Document Server

    Strauss, E; The ATLAS collaboration

    2011-01-01

    We present an online measurement of the LHC beam parameters in ATLAS using the High Level Trigger (HLT). When a significant change is detected in the measured beamspot, it is distributed to the HLT. There, trigger algorithms like b-tagging which calculate impact parameters or decay lengths benefit from a precise, up-to-date set of beamspot parameters. Additionally, online feedback is sent to the LHC operators in real time. The measurement is performed by an algorithm running on the Level 2 trigger farm, leveraging the high rate of usable events. Dedicated algorithms perform a full scan of the silicon detector to reconstruct event vertices from registered tracks. The distribution of these vertices is aggregated across the farm and their shape is extracted through fits every 60 seconds to determine the beamspot position, size, and tilt. The reconstructed beam values are corrected for detector resolution effects, measured in situ using the separation of vertices whose tracks have been split into two collections....
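    The resolution correction via split vertices described in this abstract amounts to subtracting, in quadrature, a per-vertex resolution estimated from the width of the split-vertex separation. A minimal numerical sketch with illustrative widths (not ATLAS measurements):

    ```python
    import math

    # Illustrative widths in micrometres; placeholders, not ATLAS data.
    sigma_measured = 45.0  # width of the reconstructed vertex-position distribution
    sigma_split = 35.0     # width of the separation between the two split vertices

    # Each half-vertex contributes independently to the separation, so the
    # single-vertex resolution is the separation width divided by sqrt(2).
    sigma_resolution = sigma_split / math.sqrt(2)

    # True luminous-region width: measured width with the resolution
    # subtracted in quadrature.
    sigma_beam = math.sqrt(sigma_measured**2 - sigma_resolution**2)

    print(f"resolution: {sigma_resolution:.2f} um, beam width: {sigma_beam:.2f} um")
    ```

    The key design point is that the resolution is measured in situ from the same events, so the correction tracks changes in detector conditions during the run.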

  9. Online measurement of LHC beam parameters with the ATLAS High Level Trigger

    CERN Document Server

    Strauss, E; The ATLAS collaboration

    2011-01-01

    We present an online measurement of the LHC beam parameters in ATLAS using the High Level Trigger (HLT). When a significant change is detected in the measured beamspot, it is distributed to the HLT. There, trigger algorithms like b-tagging which calculate impact parameters or decay lengths benefit from a precise,up-to-date set of beamspot parameters. Additionally, online feedback is sent to the LHC operators in real time. The measurement is performed by an algorithm running on the Level 2 trigger farm, leveraging the high rate of usable events. Dedicated algorithms perform a full scan of the silicon detector to reconstruct event vertices from registered tracks. The distribution of these vertices is aggregated across the farm and their shape is extracted through fits every 60 seconds to determine the beamspot position, size, and tilt. The reconstructed beam values are corrected for detector resolution effects, measured in situ using the separation of vertices whose tracks have been split into two collections. ...

  10. Analysis of fatigue reliability for high temperature and high pressure multi-stage decompression control valve

    Science.gov (United States)

    Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang

    2018-03-01

    A reliability mathematical model for a high temperature and high pressure multi-stage decompression control valve (HMDCV) is established based on stress-strength interference theory, and a temperature correction coefficient is introduced to revise the material fatigue limit at high temperature. The reliability of the key dangerous components and the fatigue sensitivity curve of each component are calculated and analyzed by this means, combining the fatigue-life analysis of the control valve with reliability theory. The proportional impact of each component on the fatigue failure of the control valve system is obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life expectancy of the main pressure parts accords with the technical requirements, and that the valve body and the sleeve have an obvious influence on the control system reliability; the stress concentration in key parts of the control valve can be reduced in the design process by improving the structure.
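    The stress-strength interference model with a temperature correction coefficient can be sketched as follows for normally distributed stress and strength; all numerical values below are illustrative assumptions, not taken from the paper:

    ```python
    from math import erf, sqrt

    def normal_cdf(x: float) -> float:
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    # Illustrative values only (MPa); the paper's actual data are not given here.
    mu_strength, sigma_strength = 400.0, 30.0  # fatigue limit at room temperature
    mu_stress, sigma_stress = 280.0, 25.0      # operating stress

    # Assumed temperature correction factor derating the fatigue limit.
    k_T = 0.85
    mu_strength_T = k_T * mu_strength
    sigma_strength_T = k_T * sigma_strength

    # Stress-strength interference: R = P(strength > stress) for normal variables.
    z = (mu_strength_T - mu_stress) / sqrt(sigma_strength_T**2 + sigma_stress**2)
    reliability = normal_cdf(z)

    print(f"z = {z:.3f}, reliability = {reliability:.4f}")
    ```

    Derating the strength distribution with k_T lowers the safety margin z, which is why the corrected model predicts a lower (more realistic) reliability at high temperature.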

  11. Reliability engineering for nuclear and other high technology systems

    International Nuclear Information System (INIS)

    Lakner, A.A.; Anderson, R.T.

    1985-01-01

    This book is written for the reliability instructor, program manager, system engineer, design engineer, reliability engineer, nuclear regulator, probability risk assessment (PRA) analyst, general manager and others who are involved in system hardware acquisition, design and operation and are concerned with plant safety and operational cost-effectiveness. It provides criteria, guidelines and comprehensive engineering data affecting reliability; it covers the key aspects of system reliability as it relates to conceptual planning, cost tradeoff decisions, specification, contractor selection, design, test and plant acceptance and operation. It treats reliability as an integrated methodology, explicitly describing life cycle management techniques as well as the basic elements of a total hardware development program, including: reliability parameters and design improvement attributes, reliability testing, reliability engineering and control. It describes how these elements can be defined during procurement, and implemented during design and development to yield reliable equipment. (author)

  12. Recent experience and future evolution of the CMS High Level Trigger System

    CERN Document Server

    Bauer, Gerry; Branson, James; Bukowiec, Sebastian Czeslaw; Chaze, Olivier; Cittolin, Sergio; Coarasa Perez, Jose Antonio; Deldicque, Christian; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gomez-Reino Garrido, Robert; Hartl, Christian; Holzner, Andre Georg; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Nunez Barranco Fernandez, Carlos; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrucci, Andrea; Pieri, Marco; Polese, Giovanni; Racz, Attila; Raginel, Olivier; Sakulin, Hannes; Sani, Matteo; Schwick, Christoph; Spataru, Andrei Cristian; Stoeckli, Fabian; Sumorok, Konstanty

    2012-01-01

    The CMS experiment at the LHC uses a two-stage trigger system, with events flowing from the first level trigger at a rate of 100 kHz. These events are read out by the Data Acquisition system (DAQ), assembled in memory in a farm of computers, and finally fed into the high-level trigger (HLT) software running on the farm. The HLT software selects interesting events for offline storage and analysis at a rate of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the 2010-2011 collider run is detailed, as well as the current architecture of the CMS HLT, and its integration with the CMS reconstruction framework and CMS DAQ. The short- and medium-term evolution of the HLT software infrastructure is discussed, with future improvements aimed at supporting extensions of the HLT computing power, and addressing remaining performance and maintenance issues.
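    The rate figures quoted above imply a simple per-event CPU budget; a back-of-the-envelope sketch, where the 300 Hz output rate is an assumed stand-in for "a few hundred Hz":

    ```python
    # Figures from the abstract: 100 kHz from Level 1, O(10000) CPU cores,
    # a few hundred Hz to storage (300 Hz chosen here as an illustration).
    l1_rate_hz = 100_000
    cores = 10_000
    hlt_output_hz = 300

    # With `cores` workers draining the Level-1 stream, the mean per-event
    # CPU time must not exceed cores / input rate.
    budget_ms = 1000.0 * cores / l1_rate_hz
    rejection = l1_rate_hz / hlt_output_hz

    print(f"mean per-event budget: {budget_ms:.0f} ms")
    print(f"overall HLT rejection factor: ~{rejection:.0f}x")
    ```

    This 100 ms average budget is what drives the "seeded, step-wise" style of HLT reconstruction: cheap filters run first so that most events never incur the full reconstruction cost.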

  13. High-voltage Pulse-triggered SR Latch Level-Shifter Design Considerations

    DEFF Research Database (Denmark)

    Larsen, Dennis Øland; Llimos Muntal, Pere; Jørgensen, Ivan Harald Holger

    2014-01-01

    This paper compares pulse-triggered level shifters with a traditional level-triggered topology for high-voltage applications with supply voltages in the 50 V to 100 V range. It is found that the pulse-triggered SR (Set/Reset) latch level-shifter has a superior power consumption of 1800 µW/MHz translating a signal from 0-3.3 V to 87.5-100 V. The operation of this level-shifter is verified with measurements on a fabricated chip. The shortcomings of the implemented level-shifter in terms of power dissipation, transition delay, area, and startup behavior are then considered and an improved circuit is suggested which has been designed in three variants, able to translate the low-voltage 0-3.3 V signal to 45-50 V, 85-90 V, and 95-100 V respectively. The improved 95-100 V level shifter achieves a considerably lower power consumption of 438 µW/MHz along with a significantly...

  14. A new high speed, Ultrascale+ based board for the ATLAS jet calorimeter trigger system

    CERN Document Server

    Rocco, Elena; The ATLAS collaboration

    2018-01-01

    To cope with the enhanced luminosity at the Large Hadron Collider (LHC) in 2021, the ATLAS collaboration is planning a major detector upgrade. As a part of this, the Level 1 trigger based on calorimeter data will be upgraded to exploit the fine-granularity readout using a new system of Feature EXtractors (FEX), each of which reconstructs different physics objects for the trigger selection. The jet FEX (jFEX) system is conceived to provide jet identification (including large-area jets) and measurements of global variables within a latency budget of less than 400 ns. It consists of 6 modules. A single jFEX module is an ATCA board with 4 large FPGAs of the Xilinx Ultrascale+ family that can digest a total input data rate of ~3.6 Tb/s using up to 120 Multi-Gigabit Transceivers (MGTs), 24 electrical-optical devices, and board control and power on the mezzanines to allow flexibility in upgrading control functions and components without aff...

  15. ATLAS High-Level Trigger Performance for Calorimeter-Based Algorithms in LHC Run-I

    CERN Document Server

    Mann, A; The ATLAS collaboration

    2013-01-01

    The ATLAS detector operated during the three years of Run-I of the Large Hadron Collider, collecting information on a large number of proton-proton events. One of the most important results obtained so far is the discovery of a Higgs boson. More precise measurements of this particle must be performed, and there are other very important physics topics still to be explored. One of the key components of the ATLAS detector is its trigger system. It is composed of three levels: one (called Level 1, L1) built on custom hardware and two others based on software algorithms, called Level 2 (L2) and the Event Filter (EF), altogether referred to as the ATLAS High Level Trigger. The ATLAS trigger is responsible for reducing the almost 20 million collisions per second produced by the accelerator to less than 1000. The L2 operates only in the regions tagged by the first hardware level as containing possible interesting physics, while the EF operates on the full detector, normally using offline-like algorithms to...

  16. A read-out buffer prototype for ATLAS high level triggers

    CERN Document Server

    Calvet, D; Huet, M; Le Dû, P; Mandjavidze, I D; Mur, M

    2000-01-01

    Read-Out Buffers are critical components in the dataflow chain of the ATLAS Trigger/DAQ system. At up to 75 kHz, after each Level-1 trigger accept signal, these devices receive and store digitized data from groups of front-end electronic channels. Several Read-Out Buffers are grouped to form a Read-Out Buffer Complex that acts as a data server for the High Level Trigger selection algorithms and for the final data collection system. This paper describes a functional prototype of a Read-Out Buffer based on a custom made PCI mezzanine card that is designed to accept input data at up to 160 MB/s, to store up to 8 MB of data and to distribute data chunks at the desired request rate. We describe the hardware of the card, which is based on an Intel i960 processor and CPLDs. We present the integration of several of these cards in a Read-Out Buffer Complex. We measure various performance figures and discuss to what extent these can fulfill ATLAS needs. 5 Refs.

  17. Self-triggered image intensifier tube for high-resolution UHECR imaging detector

    CERN Document Server

    Sasaki, M; Jobashi, M

    2003-01-01

    The authors have developed a self-triggered image intensifier tube with high-resolution imaging capability. An image detected by a first image intensifier tube, an electrostatic lens with a photocathode diameter of 100 mm, is separated by a half-mirror into a path for CCD readout (768x494 pixels) and a fast control path to recognize and trigger on the image. The proposed system provides both a high signal-to-noise ratio to improve single photoelectron detection and excellent spatial resolution between 207 and 240 μm, rendering this device a potentially essential tool for high-energy physics and astrophysics experiments, as well as high-speed photography. When combined with a 1-arcmin resolution optical system with 50 deg. field-of-view proposed by the present authors, the observation of ultra high-energy cosmic rays and high-energy neutrinos using this device is expected, leading to revolutionary progress in particle astrophysics as a complementary technique to traditional astronomical observations at multiple wave...

  18. Bar Code Medication Administration Technology: Characterization of High-Alert Medication Triggers and Clinician Workarounds.

    Science.gov (United States)

    Miller, Daniel F; Fortier, Christopher R; Garrison, Kelli L

    2011-02-01

    Bar code medication administration (BCMA) technology is gaining acceptance for its ability to prevent medication administration errors. However, studies suggest that improper use of BCMA technology can yield unsatisfactory error prevention and introduction of new potential medication errors. To evaluate the incidence of high-alert medication BCMA triggers and alert types and discuss the type of nursing and pharmacy workarounds occurring with the use of BCMA technology and the electronic medication administration record (eMAR). Medication scanning and override reports from January 1, 2008, through November 30, 2008, for all adult medical/surgical units were retrospectively evaluated for high-alert medication system triggers, alert types, and override reason documentation. An observational study of nursing workarounds on an adult medicine step-down unit was performed and an analysis of potential pharmacy workarounds affecting BCMA and the eMAR was also conducted. Seventeen percent of scanned medications triggered an error alert of which 55% were for high-alert medications. Insulin aspart, NPH insulin, hydromorphone, potassium chloride, and morphine were the top 5 high-alert medications that generated alert messages. Clinician override reasons for alerts were documented in only 23% of administrations. Observational studies assessing for nursing workarounds revealed a median of 3 clinician workarounds per administration. Specific nursing workarounds included a failure to scan medications/patient armband and scanning the bar code once the dosage has been removed from the unit-dose packaging. Analysis of pharmacy order entry process workarounds revealed the potential for missed doses, duplicate doses, and doses being scheduled at the wrong time. BCMA has the potential to prevent high-alert medication errors by alerting clinicians through alert messages. 
Nursing and pharmacy workarounds can limit the recognition of optimal safety outcomes and therefore workflow processes

  19. The Resource utilization by ATLAS High Level Triggers. The contributed talk for the Technology and Instrumentation in Particle Physics 2011.

    CERN Document Server

    Ospanov, R; The ATLAS collaboration

    2011-01-01

    In 2010 the ATLAS experiment successfully recorded data from LHC collisions with high efficiency and excellent data quality. ATLAS employs a three-level trigger system to select events of interest for physics analyses and detector commissioning. The trigger system consists of a custom-designed hardware trigger at level-1 (L1) and software algorithms executing on commodity servers at the two higher levels: the second level trigger (L2) and the event filter (EF). The corresponding trigger rates are 75 kHz, 3 kHz and 200 Hz. The L2 uses custom algorithms to examine a small fraction of data at full detector granularity in Regions of Interest selected by the L1. The EF employs offline algorithms and full detector data for more computationally intensive analysis. The trigger selection is defined by trigger menus which consist of more than 500 individual trigger signatures, such as electrons, muons, particle jets, etc. An execution of a trigger signature incurs computing and data storage costs. A composition of the depl...
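    The quoted rates imply large per-stage rejection factors, which a quick calculation makes concrete (a sketch only; the 75 kHz / 3 kHz / 200 Hz figures come from the abstract, the helper name is ours):

    ```python
    def rejection_factor(input_rate_hz: float, output_rate_hz: float) -> float:
        """Events examined per event kept at one trigger stage."""
        return input_rate_hz / output_rate_hz

    # Rates quoted for the 2010 ATLAS trigger: L1 out 75 kHz, L2 out 3 kHz, EF out 200 Hz.
    l2_rejection = rejection_factor(75_000, 3_000)   # L2 keeps 1 in 25
    ef_rejection = rejection_factor(3_000, 200)      # EF keeps 1 in 15
    hlt_rejection = rejection_factor(75_000, 200)    # HLT overall keeps 1 in 375
    ```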

  20. Double prospectively ECG-triggered high-pitch spiral acquisition for CT coronary angiography: Initial experience

    International Nuclear Information System (INIS)

    Wang, Q.; Qin, J.; He, B.; Zhou, Y.; Yang, J.-J.; Hou, X.-L.; Yang, X.-B.; Chen, J.-H.; Chen, Y.-D.

    2013-01-01

    Aim: To evaluate the feasibility of double prospectively electrocardiogram (ECG)-triggered high-pitch spiral acquisition mode (double high-pitch mode) for coronary computed tomography angiography (CTCA). Materials and methods: One hundred and forty-nine consecutive patients [40 women, 109 men; mean age 58.2 ± 9.2 years; sinus rhythm ≤70 beats/min (bpm) after pre-medication, body weight ≤100 kg] were enrolled for CTCA examinations using a dual-source CT system with 2 × 128 × 0.6 mm collimation, 0.28 s rotation time, and a pitch of 3.4. Double high-pitch mode was prospectively triggered first at 60% and later at 30% of the R–R interval within two cardiac cycles. Image quality was evaluated using a four-point scale (1 = excellent, 4 = non-assessable). Results: From 2085 coronary artery segments, 86.4% (1802/2085) were rated as having a score of 1, 12.3% (257/2085) as score of 2, 1.2% (26/2085) as score of 3, and none were rated as “non-assessable”. The average image quality score was 1.15 ± 0.26 on a per-segment basis. The effective dose was calculated by multiplying the coefficient factor of 0.028 by the dose–length product (DLP); the mean effective dose was 3.5 ± 0.8 mSv (range 1.7–7.6 mSv). The total dosage of contrast medium was 78.7 ± 2.9 ml. Conclusion: Double prospectively ECG-triggered high-pitch spiral acquisition mode provides good image quality with an average effective dose of less than 5 mSv in patients with a heart rate ≤70 bpm
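    The dose arithmetic in the abstract is easy to reproduce: effective dose is the quoted conversion coefficient (0.028) times the dose-length product. A minimal sketch, with an illustrative DLP value of our choosing:

    ```python
    def effective_dose_msv(dlp_mgy_cm: float, k: float = 0.028) -> float:
        """Effective dose in mSv from the dose-length product (DLP, mGy*cm),
        using the conversion coefficient k = 0.028 quoted in the abstract."""
        return k * dlp_mgy_cm

    # An illustrative DLP of 125 mGy*cm reproduces the reported mean dose of 3.5 mSv.
    dose = effective_dose_msv(125.0)
    ```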

  1. Four-channel high speed synchronized acquisition multiple trigger storage measurement system

    International Nuclear Information System (INIS)

    Guo Jian; Wang Wenlian; Zhang Zhijie

    2010-01-01

    A new storage measurement system based on a CPLD, an MCU and FLASH (large-capacity flash memory) is proposed. The large-capacity storage characteristic of the flash memory is used to realize multi-channel synchronized acquisition with a record-many-times, read-once function. Multi-channel synchronization, high speed data acquisition, multiple triggering, and adjustable working parameters expand the applications of the storage measurement system. The storage measurement system can be used in a variety of pressure and temperature tests in explosion fields. (authors)

  2. Emergency diesel generator reliability analysis high flux isotope reactor

    International Nuclear Information System (INIS)

    Merryman, L.; Christie, B.

    1993-01-01

    A program to apply some of the techniques of reliability engineering to the High Flux Isotope Reactor (HFIR) was started on August 8, 1992. Part of the program was to track the conditional probabilities of the emergency diesel generators responding to a valid demand. This was done to determine if the performance of the emergency diesel generators (which are more than 25 years old) has deteriorated. The conditional probabilities of the diesel generators were computed and trended for the period from May 1990 to December 1992. The calculations indicate that the performance of the emergency diesel generators has not deteriorated in recent years, i.e., the conditional probabilities of the emergency diesel generators have been fairly stable over the last few years. This information will be one factor that may be considered in the decision to replace the emergency diesel generators

  3. High Speed Simulation Framework for Reliable Logic Programs

    International Nuclear Information System (INIS)

    Lee, Wan-Bok; Kim, Seog-Ju

    2006-01-01

    This paper shows a case study of designing a PLC logic simulator that was developed to simulate and verify PLC control programs for nuclear plant systems. A nuclear control system faces stricter restrictions than a normal process control system does, since it works with nuclear power plants requiring high reliability under severe environments. One restriction is the safeness of the control programs, which can be assured by rigorous testing. Another restriction is the simulation speed of the control programs, which should be fast enough to control multiple devices concurrently in real time. To cope with these restrictions, we devised a logic compiler which generates C-code programs from given PLC logic programs. Once the logic program is translated into C code, the program can be analyzed by conventional software analysis tools and can be used to construct a fast logic simulator after cross-compiling; in effect, a kind of compiled-code simulation
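    The compiled-code idea can be sketched in a few lines: a toy "logic compiler" that emits C source from a table of boolean rungs. The rung syntax, names and output format here are our illustrative assumptions, not the paper's actual compiler:

    ```python
    def compile_logic_to_c(rungs: dict) -> str:
        """Emit a C scan-cycle function from {output: boolean-expression} rungs.
        Expressions use C operators (&, |, !) over named inputs, so they can be
        pasted into the generated source unchanged."""
        # Collect input names by stripping operators and parentheses.
        stripped = " ".join(rungs.values())
        for ch in "()&|!":
            stripped = stripped.replace(ch, " ")
        inputs = sorted(set(stripped.split()) - set(rungs))
        params = ", ".join([f"int {i}" for i in inputs] + [f"int *{o}" for o in rungs])
        body = "\n".join(f"    *{out} = {expr};" for out, expr in rungs.items())
        return f"void scan_cycle({params}) {{\n{body}\n}}\n"

    c_src = compile_logic_to_c({"pump_on": "start & !stop",
                                "alarm": "overtemp | overpressure"})
    ```

    Cross-compiling the emitted C then yields a simulator that evaluates every rung per scan at native speed, which is the speed advantage the abstract describes.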

  4. A High Reliability Gas-driven Helium Cryogenic Centrifugal Compressor

    CERN Document Server

    Bonneton, M; Gistau-Baguer, Guy M; Turcat, F; Viennot, P

    1998-01-01

    A helium cryogenic compressor was developed and tested in real conditions in 1996. The achieved objective was to compress 0.018 kg/s Helium at 4 K @ 1000 Pa (10 mbar) up to 3000 Pa (30 mbar). This project was an opportunity to develop and test an interesting new concept in view of future needs. The main features of this new specific technology are described. Particular attention is paid to the gas bearing supported rotor and to the pneumatic driver. Trade off between existing technologies and the present work are presented with special stress on the bearing system and the driver. The advantages are discussed, essentially focused on life time and high reliability without maintenance as well as non pollution characteristic. Practical operational modes are also described together with the experimental performances of the compressor. The article concludes with a brief outlook of future work.

  5. Towards a Level-1 tracking trigger for the ATLAS experiment at the High Luminosity LHC

    CERN Document Server

    Martin, T A D; The ATLAS collaboration

    2014-01-01

    The ability to apply fast processing that takes account of the properties of the tracks being reconstructed will enhance the rejection, while retaining high efficiency for events with desired signatures, such as high momentum leptons or multiple jets. Studies to understand the feasibility of such a system have begun, and proceed in two directions: a fast readout for high granularity silicon detectors, and a fast pattern recognition algorithm to be applied just after the front-end readout for specific sub-detectors. Both existing and novel technologies can offer solutions. The aim of these studies is to determine the parameter space to which this system must be adapted. The status of ongoing tests on specific hardware components crucial for this system, both to increase the ATLAS physics potential and to fully satisfy the trigger requirements at very high luminosities, is discussed.

  6. Highly Efficient and Reliable Transparent Electromagnetic Interference Shielding Film.

    Science.gov (United States)

    Jia, Li-Chuan; Yan, Ding-Xiang; Liu, Xiaofeng; Ma, Rujun; Wu, Hong-Yuan; Li, Zhong-Ming

    2018-04-11

    Electromagnetic protection in optoelectronic instruments such as optical windows and electronic displays is challenging because of the essential requirements of a high optical transmittance and an electromagnetic interference (EMI) shielding effectiveness (SE). Herein, we demonstrate the creation of an efficient transparent EMI shielding film that is composed of calcium alginate (CA), silver nanowires (AgNWs), and polyurethane (PU), via a facile and low-cost Mayer-rod coating method. The CA/AgNW/PU film with a high optical transmittance of 92% achieves an EMI SE of 20.7 dB, which meets the requirements for commercial shielding applications. A superior EMI SE of 31.3 dB could be achieved, whereas the transparent film still maintains a transmittance of 81%. The integrated efficient EMI SE and high transmittance are superior to those of most previously reported transparent EMI shielding materials. Moreover, our transparent films exhibit a highly reliable shielding ability in a complex service environment, with 98 and 96% EMI SE retentions even after 30 min of ultrasound treatment and 5000 bending cycles (1.5 mm radius), respectively. The comprehensive performance that is associated with the facile fabrication strategy imparts the CA/AgNW/PU film with great potential as an optimized EMI shielding material in emerging optoelectronic devices, such as flexible solar cells, displays, and touch panels.
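    Shielding effectiveness in dB maps directly to the fraction of incident power that passes the shield; a small sketch using the figures from the abstract (the helper name is ours):

    ```python
    def transmitted_power_fraction(se_db: float) -> float:
        """Fraction of incident electromagnetic power passing a shield with the
        given shielding effectiveness in dB: 10**(-SE/10)."""
        return 10 ** (-se_db / 10)

    # The 20.7 dB film transmits under 1% of incident power; 31.3 dB under 0.1%.
    f_20_7 = transmitted_power_fraction(20.7)
    f_31_3 = transmitted_power_fraction(31.3)
    ```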

  7. Breakover mechanism of GaAs photoconductive switch triggering spark gap for high power applications

    Science.gov (United States)

    Tian, Liqiang; Shi, Wei; Feng, Qingqing

    2011-11-01

    A spark gap (SG) triggered by a semi-insulating GaAs photoconductive semiconductor switch (PCSS) is presented. Currents as high as 5.6 kA have been generated using the combined switch, which is excited by a laser pulse with energy of 1.8 mJ and under a bias of 4 kV. Based on the transferred-electron effect and gas streamer theory, the breakover characteristics of the combined switch are analyzed. The photoexcited carrier density in the PCSS is calculated. The calculation and analysis indicate that the PCSS breakover is caused by nucleation of the photoactivated avalanching charge domain. It is shown that the high output current is generated by the discharge of a high-energy gas streamer induced by the strong local electric field distortion or by overvoltage of the SG resulting from quenching of the avalanching domain, and periodic oscillation of the current is caused by interaction between the gas streamer and the charge domain. The cycle of the current oscillation is determined by the rise time of the triggering electric pulse generated by the PCSS, the pulse transmission time between the PCSS and the SG, and the streamer transit time in the SG.

  8. Concept of a Stand-Alone Muon Trigger with High Transverse Momentum Resolution for the ATLAS Detector at the High-Luminosity LHC

    CERN Document Server

    Horii, Yasuyuki; The ATLAS collaboration

    2014-01-01

    The ATLAS trigger uses a three-level trigger system. The level-1 (L1) trigger for muons with high transverse momentum pT in ATLAS is based on fast chambers with excellent time resolution which are able to identify muons coming from a particular beam crossing. These trigger chambers also provide a fast measurement of the muon transverse momenta, however with limited accuracy caused by the moderate spatial resolution along the deflecting direction of the magnetic field. The higher luminosity foreseen for Phase-II puts stringent limits on the L1 trigger rates. A way to control these rates is the improvement of the spatial resolution of the triggering device which drastically sharpens the turn-on curve of the L1 trigger. To do this, the precision tracking chambers (MDT) can be used in the L1 trigger, if the corresponding trigger latency is increased as planned. The trigger rate reduction is accomplished by strongly decreasing the rate of triggers from muons with pT lower than a predefined threshold (typically 20 ...

  9. A Highly Selective First-Level Muon Trigger With MDT Chamber Data for ATLAS at HL-LHC

    CERN Document Server

    Nowak, Sebastian; The ATLAS collaboration

    2015-01-01

    Highly selective triggers are essential for the physics programme of the ATLAS experiment at HL-LHC, where the instantaneous luminosity will be about an order of magnitude larger than the LHC design luminosity. The Level-1 muon trigger rate is dominated by low momentum muons below the nominal trigger threshold due to the limited momentum resolution of the Resistive Plate and Thin Gap trigger chambers. The resulting high trigger rates at HL-LHC can be sufficiently reduced by using the data of the precision Muon Drift Tube (MDT) chambers for the trigger decision. This requires the implementation of a fast MDT read-out chain and of a fast MDT track reconstruction algorithm with a latency of at most 6 μs. A hardware demonstrator of the fast read-out chain has been successfully tested at the high HL-LHC background rates at the CERN Gamma Irradiation Facility. The fast track reconstruction algorithm has been implemented on a fast trigger processor.

  10. Novel Low Cost, High Reliability Wind Turbine Drivetrain

    Energy Technology Data Exchange (ETDEWEB)

    Chobot, Anthony; Das, Debarshi; Mayer, Tyler; Markey, Zach; Martinson, Tim; Reeve, Hayden; Attridge, Paul; El-Wardany, Tahany

    2012-09-13

    Clipper Windpower, in collaboration with United Technologies Research Center, the National Renewable Energy Laboratory, and Hamilton Sundstrand Corporation, developed a low-cost, deflection-compliant, reliable, and serviceable chain drive speed increaser. This chain and sprocket drivetrain design offers significant breakthroughs in the areas of cost and serviceability and addresses the key challenges of current geared and direct-drive systems. The use of gearboxes has proven to be challenging; the large torques and bending loads associated with use in large multi-MW wind applications have generally limited demonstrated lifetime to 8-10 years [1]. The large cost of gearbox replacement and the required use of large, expensive cranes can result in gearbox replacement costs on the order of $1M, representing a significant impact to overall cost of energy (COE). Direct-drive machines eliminate the gearbox, thereby targeting increased reliability and reduced life-cycle cost. However, the slow rotational speeds require very large and costly generators, which also typically have an undesirable dependence on expensive rare-earth magnet materials and large structural penalties for precise air gap control. The cost of rare-earth materials has increased 20X in the last 8 years representing a key risk to ever realizing the promised cost of energy reductions from direct-drive generators. A common challenge to both geared and direct drive architectures is a limited ability to manage input shaft deflections. The proposed Clipper drivetrain is deflection-compliant, insulating later drivetrain stages and generators from off-axis loads. The system is modular, allowing for all key parts to be removed and replaced without the use of a high capacity crane. Finally, the technology modularity allows for scalability and many possible drivetrain topologies. 
These benefits enable reductions in drivetrain capital cost by 10.0%, levelized replacement and O&M costs by 26.7%, and overall cost of

  11. RPC based 5D tracking concept for high multiplicity tracking trigger

    CERN Document Server

    Aielli, G; Cardarelli, R; Di Ciaccio, A; Distante, L; Liberti, B; Paolozzi, L; Pastori, E; Santonico, R

    2018-01-01

    The recently approved High Luminosity LHC project (HL-LHC) and future collider proposals present a challenging experimental scenario, dominated by high pileup, radiation background and a bunch crossing time possibly shorter than 5 ns. This holds as well for muon systems, where RPCs can play a fundamental role in the design of the future experiments. RPCs, thanks to their high space-time granularity, allow a sparse representation of the particle hits in a very large parametric space containing, in addition to 3D spatial localization, also the pulse time and width associated with the avalanche charge. This 5D representation of the hits can be exploited to improve the performance of complex detectors such as muon systems and increase the discovery potential of a future experiment, by allowing better track pileup rejection and sharper momentum resolution, an effective measurement of the particle velocity to tag and trigger on non-ultrarelativistic particles, and the local detection of multiple track ...

  12. Educational Management Organizations as High Reliability Organizations: A Study of Victory's Philadelphia High School Reform Work

    Science.gov (United States)

    Thomas, David E.

    2013-01-01

    This executive position paper proposes recommendations for designing reform models between public and private sectors dedicated to improving school reform work in low performing urban high schools. It reviews scholarly research about for-profit educational management organizations, high reliability organizations, American high school reform, and…

  13. A Highly Selective First-Level Muon Trigger With MDT Chamber Data for ATLAS at HL-LHC

    CERN Document Server

    INSPIRE-00390105

    2016-07-11

    Highly selective triggers are essential for the physics programme of the ATLAS experiment at HL-LHC where the instantaneous luminosity will be about an order of magnitude larger than the LHC instantaneous luminosity in Run 1. The first level muon trigger rate is dominated by low momentum muons below the nominal trigger threshold due to the moderate momentum resolution of the Resistive Plate and Thin Gap trigger chambers. The resulting high trigger rates at HL-LHC can be sufficiently reduced by using the data of the precision Muon Drift Tube chambers for the trigger decision. This requires the implementation of a fast MDT read-out chain and of a fast MDT track reconstruction algorithm with a latency of at most 6 microseconds. A hardware demonstrator of the fast read-out chain has been successfully tested at the HL-LHC operating conditions at the CERN Gamma Irradiation Facility. The fast track reconstruction algorithm has been implemented on a fast trigger processor.

  14. Gearbox Reliability Collaborative High Speed Shaft Tapered Roller Bearing Calibration

    Energy Technology Data Exchange (ETDEWEB)

    Keller, J.; Guo, Y.; McNiff, B.

    2013-10-01

    The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) is a project investigating gearbox reliability primarily through testing and modeling. Previous dynamometer testing focused upon acquiring measurements in the planetary section of the test gearbox. Prior to these tests, the strain gages installed on the planetary bearings were calibrated in a load frame.

  15. Error detection, handling and recovery at the High Level Trigger of the ATLAS experiment at the LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00223972; The ATLAS collaboration

    2016-01-01

    The complexity of the ATLAS High Level Trigger (HLT) requires a robust system for error detection and handling during online data-taking; it also requires an offline system for the recovery of events where no trigger decision could be made online. The error detection and handling ensure smooth operation of the trigger system and provide debugging information necessary for offline analysis and diagnosis. In this presentation, we give an overview of the error detection, handling and recovery of problematic events at the HLT of ATLAS.

  16. High School Dropout in Proximal Context: The Triggering Role of Stressful Life Events.

    Science.gov (United States)

    Dupéré, Véronique; Dion, Eric; Leventhal, Tama; Archambault, Isabelle; Crosnoe, Robert; Janosz, Michel

    2018-03-01

    Adolescents who drop out of high school experience enduring negative consequences across many domains. Yet, the circumstances triggering their departure are poorly understood. This study examined the precipitating role of recent psychosocial stressors by comparing three groups of Canadian high school students (52% boys; M age  = 16.3 years; N = 545): recent dropouts, matched at-risk students who remain in school, and average students. Results indicate that in comparison with the two other groups, dropouts were over three times more likely to have experienced recent acute stressors rated as severe by independent coders. These stressors occurred across a variety of domains. Considering the circumstances in which youth decide to drop out has implications for future research and for policy and practice. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  17. LHCb: LHCb High Level Trigger design issues for post Long Stop 1 running

    CERN Multimedia

    Albrecht, J; Raven, G; Sokoloff, M D; Williams, M

    2013-01-01

    The LHCb High Level Trigger uses two stages of software running on an Event Filter Farm (EFF) to select events for offline reconstruction and analysis. The first stage (Hlt1) processes approximately 1 MHz of events accepted by a hardware trigger. In 2012, the second stage (Hlt2) wrote 5 kHz to permanent storage for later processing. Following the LHC's Long Stop 1 (anticipated for 2015), the machine energy will increase from 8 TeV in the center-of-mass to 13 TeV and the cross sections for beauty and charm are expected to grow proportionately. We plan to increase the Hlt2 output to 12 kHz, some for immediate offline processing, some for later offline processing, and some ready for immediate analysis. By increasing the absolute computing power of the EFF, and buffering data for processing between machine fills, we should be able to significantly increase the efficiency for signal while improving signal-to-background ratios. In this poster we will present several strategies under consideration and some of th...

  18. Prototype of a file-based high-level trigger in CMS

    International Nuclear Information System (INIS)

    Bauer, G; Darlea, G-L; Gomez-Ceballos, G; Bawej, T; Chaze, O; Coarasa, J A; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Gomez-Reino, R; Hartl, C; Hegeman, J; Masetti, L; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Erhan, S

    2014-01-01

    The DAQ system of the CMS experiment at the LHC is upgraded during the accelerator shutdown in 2013/14. To reduce the interdependency of the DAQ system and the high-level trigger (HLT), we investigate the feasibility of using a file-system-based HLT. Events of ∼1 MB size are built at the level-1 trigger rate of 100 kHz. The events are assembled by ∼50 builder units (BUs). Each BU writes the raw events at ∼2 GB/s to a local file system shared with O(10) filter-unit machines (FUs) running the HLT code. The FUs read the raw data from the file system, select O(1%) of the events, and write the selected events together with monitoring meta-data back to a disk. This data is then aggregated over several steps and made available for offline reconstruction and online monitoring. We present the challenges, technical choices, and performance figures from the prototyping phase. In addition, the steps to the final system implementation will be discussed.
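    The decoupling described above can be mimicked with ordinary files: one process writes raw event files into a shared directory, another scans the directory and keeps a small fraction. A toy sketch only; the file naming, sizes and the random stand-in for the trigger decision are our assumptions:

    ```python
    import os
    import random
    import tempfile

    def build_events(ramdisk: str, n: int) -> None:
        """Builder unit: write one raw file per event into the shared directory."""
        for i in range(n):
            with open(os.path.join(ramdisk, f"event_{i:06d}.raw"), "wb") as f:
                f.write(os.urandom(64))  # stand-in for a ~1 MB raw event

    def filter_events(ramdisk: str, accept_fraction: float, seed: int = 0) -> list:
        """Filter unit: scan the shared directory, accept a fraction of events.
        A seeded random draw stands in for the real HLT selection."""
        rng = random.Random(seed)
        return [name for name in sorted(os.listdir(ramdisk))
                if name.endswith(".raw") and rng.random() < accept_fraction]

    with tempfile.TemporaryDirectory() as shared_fs:
        build_events(shared_fs, 1000)
        accepted = filter_events(shared_fs, accept_fraction=0.01)
    ```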

  19. Real-time configuration changes of the ATLAS High Level Trigger

    CERN Document Server

    Winklmeier, F

    2010-01-01

    The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage trigger and event filter running on a farm of commodity PC hardware. Currently the system consists of about 850 processing nodes and will be extended incrementally following the expected increase in luminosity of the LHC to about 2000 nodes. The event selection within the HLT applications is carried out by specialized reconstruction algorithms. The selection can be controlled via properties that are stored in a central database and are retrieved at the startup of the HLT processes, which then usually run continuously for many hours. To be able to respond to changes in the LHC beam conditions, it is essential that the algorithms can be re-configured without disrupting data taking while ensuring a consistent and reproducible configuration across the entire HLT farm. The technique...

  20. Wire chamber as a fast, high efficiency and low mass trigger in high magnetic fields

    International Nuclear Information System (INIS)

    Lachin, Y.Y.; Miassoedov, L.V.; Morozov, I.V.; Selivanov, V.I.; Sinitzin, I.V.; Torokhov, V.D.

    1994-11-01

    The efficiency and time jitter measurement results are presented for proportional and drift chambers with 2 mm half gap and a CF₄:iC₄H₁₀ (80:20) gas mixture in the presence of magnetic field. Data were taken on the M15 beam line at TRIUMF for positrons with momentum 35 MeV/c. It is demonstrated that two layers of PCs when combined have better than 99.995% efficiency of positron detection in magnetic fields up to 6 T. The time jitter (RMS) of the sum signal from three layers of PCs does not exceed 2.3, 2.9 and 3.9 ns at B = 0, 3, 6 T respectively. The time shift of this signal does not exceed 2.0, 2.25 and 4.4 ns at B = 0, 3, 6 T respectively for the positron's incident angle (with respect to the PC plane normal) range from 0 to 60°. Such PCs will serve as a zero-time trigger for PDC chambers with DME gas in TRIUMF Experiment 614 [1]. (author). 12 refs., 3 tabs., 9 figs

  1. High reliability flow system - an assessment of pump reliability and optimisation of the number of pumps

    International Nuclear Information System (INIS)

    Butterfield, J.M.

    1981-01-01

    A system is considered where a number of pumps operate in parallel. Normally, all pumps operate, driven by main motors fed from the grid. Each pump has a pony motor fed from an individual battery supply. Each pony motor is normally running, but not engaged to the pump shaft. On demand, e.g. failure of grid supplies, each pony motor is designed to clutch-in automatically when the pump speed falls to a specified value. The probability of all the pony motors failing to clutch-in on demand must be demonstrated with 95% confidence to be less than 10⁻⁸ per demand. This assessment considers how the required reliability of pony motor drives might be demonstrated in practice and the implications on choice of the number of pumps at the design stage. The assessment recognises that not only must the system prove to be extremely reliable, but that demonstration that reliability is adequate must be done during plant commissioning, with practical limits on the amount of testing performed. It is concluded that a minimum of eight pony motors should be provided, eight pumps each with one pony motor (preferred) or five pumps each with two independent pony motors. A minimum of two diverse pony motor systems should be provided. (author)
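    The core difficulty, demonstrating 10⁻⁸ per demand at 95% confidence, can be made concrete with the standard zero-failure binomial bound: after n failure-free demands, confidence C is reached once (1 − p)ⁿ ≤ 1 − C. A sketch; the redundancy comment assumes independent pony motors, consistent with the assessment's use of multiple motors:

    ```python
    import math

    def demands_for_demonstration(p_fail: float, confidence: float) -> int:
        """Failure-free demands needed to claim P(failure) <= p_fail at the
        given confidence, from the zero-failure binomial bound
        (1 - p_fail)**n <= 1 - confidence."""
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_fail))

    # Demonstrating 1e-8 per demand directly would need ~3e8 failure-free demands:
    n_direct = demands_for_demonstration(1e-8, 0.95)
    # whereas 1e-2 per motor is demonstrable in a few hundred tests; several
    # independent motors then drive the system-level figure far below 1e-8.
    n_per_motor = demands_for_demonstration(1e-2, 0.95)
    ```

    This is why the assessment leans on redundancy (eight pony motors) rather than on direct demonstration of the system-level figure.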

  2. Multi-Threaded Algorithms for General Purpose Graphics Processor Units in the ATLAS High Level Trigger

    CERN Document Server

    Conde Muiño, Patricia; The ATLAS collaboration

    2016-01-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located at the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with level 1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz level 1 acceptance rate to 1 kHz for recording, requiring an average per-event processing time of ~250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant ...
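
    The rate and timing figures quoted above imply the scale of the required farm. A back-of-the-envelope sketch using Little's law (an illustration only, not an official ATLAS sizing):

    ```python
    # Farm sizing from the quoted figures: a 100 kHz input rate and
    # ~250 ms average per-event processing time. Little's law gives the
    # number of events concurrently in flight, i.e. the number of
    # processing slots needed (ignoring scheduling and I/O overheads).
    input_rate_hz = 100_000   # level 1 acceptance rate
    per_event_s = 0.250       # average HLT processing time per event

    slots_needed = input_rate_hz * per_event_s
    print(slots_needed)  # 25000.0 concurrent processing slots
    ```

    A reduction in per-event processing time, e.g. by offloading tracking and clustering to GPGPUs, translates directly into fewer required slots at fixed input rate.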

  3. Compliance and High Reliability in a Complex Healthcare Organization.

    Science.gov (United States)

    Simon, Maxine dellaBadia

    2018-01-01

    When considering the impact of regulation on healthcare, visualize a spider's web. The spider weaves sections together to create the whole, with each fiber adding to the structure to support its success or lead to its failure. Each section is dependent on the others, and all must be aligned to maintain the structure. Outside forces can cause a shift in the web's fragile equilibrium. The interdependence of the sections of the spider's web is similar to the way hospital departments and services work together. An organization's structure must be shaped to support its mission and vision. At the same time, the business of healthcare requires the development and achievement of operational objectives and financial performance goals. Establishing a culture that is flexible enough to permit creativity, provide resiliency, and manage complexity as the organization grows is fundamental to success. An organization must address each of these factors while maintaining stability, carrying out its mission, and fostering improvement. Nature's order maintains the spider's web. Likewise, regulation can strengthen healthcare organizations by initiating disruptive changes that can support efforts to achieve and sustain high reliability in the delivery of care. To that end, leadership must be willing to provide the necessary vision and resources.

  4. Hypoxia triggers high-altitude headache with migraine features: A prospective trial.

    Science.gov (United States)

    Broessner, Gregor; Rohregger, Johanna; Wille, Maria; Lackner, Peter; Ndayisaba, Jean-Pierre; Burtscher, Martin

    2016-07-01

    Given the high prevalence and clinical impact of high-altitude headache (HAH), a better understanding of risk factors and headache characteristics may give new insights into the understanding of hypoxia being a trigger for HAH or even migraine attacks. In this prospective trial, we simulated high altitude (4500 m) by controlled normobaric hypoxia (FiO2 = 12.6%) to investigate acute mountain sickness (AMS) and headache characteristics. Clinical symptoms of AMS according to the Lake Louise Scoring system (LLS) were recorded before and after six and 12 hours in hypoxia. O2 saturation was measured using pulse oximetry at the respective time points. History of primary headache, especially episodic or chronic migraine, was a strict exclusion criterion. In total 77 volunteers (43 (55.8%) males, 34 (44.2%) females) were enrolled in this study. Sixty-three (81.18%) and 40 (71.4%) participants developed headache at six or 12 hours, respectively, with height and SpO2 being significantly different between headache groups at six hours (p headache development (p headache according to the International Classification of Headache Disorders (ICHD-3 beta) in n = 5 (8%) or n = 6 (15%), at six and 12 hours, respectively. Normobaric hypoxia is a trigger for HAH and migraine-like headache attacks even in healthy volunteers without any history of migraine. Our study confirms the pivotal role of hypoxia in the development of AMS and beyond that suggests hypoxia may be involved in migraine pathophysiology. © International Headache Society 2015.

  5. Soft Pneumatic Actuator Fascicles for High Force and Reliability.

    Science.gov (United States)

    Robertson, Matthew A; Sadeghi, Hamed; Florez, Juan Manuel; Paik, Jamie

    2017-03-01

    Soft pneumatic actuators (SPAs) are found in mobile robots, assistive wearable devices, and rehabilitative technologies. While soft actuators have been one of the most crucial elements of technology leading the development of the soft robotics field, they fall short of force output and bandwidth requirements for many tasks. In addition, other general problems remain open, including robustness, controllability, and repeatability. The SPA-pack architecture presented here aims to satisfy these standards of reliability crucial to the field of soft robotics, while also improving the basic performance capabilities of SPAs by borrowing advantages leveraged ubiquitously in biology; namely, the structured parallel arrangement of lower power actuators to form the basis of a larger and more powerful actuator module. An SPA-pack module consisting of a number of smaller SPAs will be studied using an analytical model and physical prototype. Experimental measurements show an SPA pack to generate over 112 N linear force, while the model indicates the benefit of parallel actuator grouping over a geometrically equivalent single SPA scale as an increasing function of the number of individual actuators in the group. For a module of four actuators, a 23% increase in force production over a volumetrically equivalent single SPA is predicted and validated, while further gains appear possible up to 50%. These findings affirm the advantage of utilizing a fascicle structure for high-performance soft robotic applications over existing monolithic SPA designs. An example of high-performance soft robotic platform will be presented to demonstrate the capability of SPA-pack modules in a complete and functional system.

  6. Soft Pneumatic Actuator Fascicles for High Force and Reliability

    Science.gov (United States)

    Robertson, Matthew A.; Sadeghi, Hamed; Florez, Juan Manuel

    2017-01-01

    Soft pneumatic actuators (SPAs) are found in mobile robots, assistive wearable devices, and rehabilitative technologies. While soft actuators have been one of the most crucial elements of technology leading the development of the soft robotics field, they fall short of force output and bandwidth requirements for many tasks. In addition, other general problems remain open, including robustness, controllability, and repeatability. The SPA-pack architecture presented here aims to satisfy these standards of reliability crucial to the field of soft robotics, while also improving the basic performance capabilities of SPAs by borrowing advantages leveraged ubiquitously in biology; namely, the structured parallel arrangement of lower power actuators to form the basis of a larger and more powerful actuator module. An SPA-pack module consisting of a number of smaller SPAs will be studied using an analytical model and physical prototype. Experimental measurements show an SPA pack to generate over 112 N linear force, while the model indicates the benefit of parallel actuator grouping over a geometrically equivalent single SPA scale as an increasing function of the number of individual actuators in the group. For a module of four actuators, a 23% increase in force production over a volumetrically equivalent single SPA is predicted and validated, while further gains appear possible up to 50%. These findings affirm the advantage of utilizing a fascicle structure for high-performance soft robotic applications over existing monolithic SPA designs. An example of high-performance soft robotic platform will be presented to demonstrate the capability of SPA-pack modules in a complete and functional system. PMID:28289573

  7. Instrumentation of a Level-1 Track Trigger in the ATLAS detector for the High Luminosity LHC

    CERN Document Server

    Boisvert, V; The ATLAS collaboration

    2012-01-01

    The Large Hadron Collider will be upgraded in order to reach an instantaneous luminosity of $L=5 \times 10^{34}$ cm$^{-2}$ s$^{-1}$. A challenge for the detectors will be to cope with the excessive rate of events coming into the trigger system. In order to maintain the capability of triggering on single lepton objects with momentum thresholds of $p_T > 25$ GeV, the ATLAS detector is planning to use tracking information at the Level-1 (hardware) stage of the trigger system. Two options are currently being studied: a L0/L1 trigger design using a double buffer front-end architecture, and a single hardware trigger level which uses trigger layers in the new tracker system. Both options are presented, as well as results from simulation studies.

  8. Reliability and Characterization of High Voltage Power Capacitors

    Science.gov (United States)

    2014-03-01

    in a Faraday cage to minimize external noise. In addition, electron microscopy could also be performed to identify the change in trap concentration... military bases in the United States. Energy product reliability affects the sustainability and cost-effectiveness of these systems, which must be tested by outside entities to ensure...

  9. The upgrade of the ATLAS High Level Trigger and Data Acquisition systems and their integration

    CERN Document Server

    Abreu, R; The ATLAS collaboration

    2014-01-01

    The Data Acquisition (DAQ) and High Level Trigger (HLT) systems that served the ATLAS experiment during LHC's first run are being upgraded in the first long LHC shutdown period, from 2013 to 2015. This contribution describes the elements that are vital for the new interaction between the two systems. The central architectural enhancement is the fusion of the once separate Level 2, Event Building (EB), and Event Filter steps. Through the factorization of previously disperse functionality and better exploitation of caching mechanisms, the inherent simplification carries with it an increase in performance. Flexibility to different running conditions is improved by an automatic balance of formerly separate tasks. Incremental EB is the principle of the new Data Collection, whereby the HLT farm avoids duplicate requests to the detector Read-Out System (ROS) by preserving and reusing previously obtained data. Moreover, requests are packed and fetched together to avoid redundant trips to the ROS. Anticipated EB is ac...

  10. Development of High Level Trigger Software for Belle II at SuperKEKB

    International Nuclear Information System (INIS)

    Lee, S; Itoh, R; Katayama, N; Mineo, S

    2011-01-01

    The Belle collaboration has been trying for 10 years to reveal the mystery of the current matter-dominated universe. However, much more statistics is required to search for New Physics through quantum loops in decays of B mesons. In order to increase the experimental sensitivity, the next generation B-factory, SuperKEKB, is planned. The design luminosity of SuperKEKB is 8 × 10^35 cm^-2 s^-1, a factor 40 above KEKB's peak luminosity. At this high luminosity, the level 1 trigger of the Belle II experiment will stream events of 300 kB size at a 30 kHz rate. To reduce the data flow to a manageable level, a high-level trigger (HLT) is needed, which will be implemented using the full offline reconstruction on a large-scale PC farm. There, physics-level event selection is performed, reducing the event rate by a factor of ~10 to a few kHz. To execute the reconstruction, the HLT uses the offline event-processing framework basf2, which has parallel processing capabilities used for multi-core processing and PC clusters. The event data handling in the HLT is fully object-oriented, utilizing ROOT I/O with a new method of object passing over UNIX socket connections. Also under consideration is the use of the HLT output to reduce the pixel detector event size by saving only hits associated with a track, resulting in an additional data reduction of ~100 for the pixel detector. In this contribution, the design and implementation of the Belle II HLT are presented together with a report of preliminary testing results.

  11. Modeling high-Power Accelerators Reliability-SNS LINAC (SNS-ORNL); MAX LINAC (MYRRHA)

    International Nuclear Information System (INIS)

    Pitigoi, A. E.; Fernandez Ramos, P.

    2013-01-01

    Improving reliability has recently become a very important objective in the field of particle accelerators. The particle accelerators in operation are constantly undergoing modifications, and improvements are implemented using new technologies, more reliable components, or redundant schemes (to obtain more reliability, strength, more power, etc.). Within the MAX project, a reliability model of the SNS (Spallation Neutron Source) LINAC has been developed and an analysis of the accelerator systems' reliability has been performed, using the Risk Spectrum reliability analysis software. The analysis results have been evaluated by comparison with the SNS operational data. Results and conclusions are presented in this paper, oriented to identifying design weaknesses and providing recommendations for improving the reliability of the MYRRHA linear accelerator. The SNS reliability model developed for the MAX preliminary design phase indicates possible avenues for further investigation that could be needed to improve the reliability of high-power accelerators, in view of the future reliability targets of ADS accelerators.

  12. High Reliability R-10 Windows Using Vacuum Insulating Glass Units

    Energy Technology Data Exchange (ETDEWEB)

    Stark, David

    2012-08-16

    The objective of this effort was for EverSealed Windows (“EverSealed” or “ESW”) to design, assemble, thermally and environmentally test, and demonstrate a Vacuum Insulating Glass Unit (“VIGU” or “VIG”) that would enable a whole window to meet or exceed an R-10 insulating value (U-factor ≤ 0.1). To produce a VIGU that could withstand any North American environment, ESW believed it needed to design, produce and use a flexible edge-seal system. This is because a rigid edge seal, used by all other known VIG producers and developers, limits the size and/or thermal environment of the VIG to where the unit is not practical for typical IG sizes and cannot withstand severe outdoor environments. The rigid-sealed VIG’s use would be limited to mild climates, where it would not have a reasonable economic payback when compared to traditional double-pane or triple-pane IGs. ESW’s goals, in addition to achieving a sufficiently high R-value to enable a whole window to achieve R-10, included creating a VIG design that could be produced for a cost equal to or lower than a traditional triple-pane IG (low-e, argon filled). ESW achieved these goals. EverSealed produced, tested and demonstrated a flexible edge-seal VIG that had an R-13 insulating value and the edge-seal system durability to operate reliably for at least 40 years in the harshest climates of North America.

  13. Development of a highly selective muon trigger exploiting the high spatial resolution of monitored drift-tube chambers for the ATLAS experiment at the HL-LHC

    CERN Document Server

    Kortner, Oliver; The ATLAS collaboration

    2018-01-01

    The High-Luminosity LHC will provide the unique opportunity to explore the nature of physics beyond the Standard Model. Highly selective first level triggers are essential for the physics programme of the ATLAS experiment at the HL-LHC, where the instantaneous luminosity will exceed the LHC design instantaneous luminosity by almost an order of magnitude. The ATLAS first level muon trigger rate is dominated by low momentum muons, selected due to the moderate momentum resolution of the current system. This first level trigger limitation can be overcome by including data from the precision muon drift tube (MDT) chambers. This requires the fast continuous transfer of the MDT hits to the off-detector trigger logic and a fast track reconstruction algorithm performed in the trigger logic. The feasibility of this approach was studied with LHC collision data and simulated data. Two main options for the hardware implementation will be studied with demonstrators: an FPGA based option with an embedded ARM microprocessor ...

  14. Development of a Highly Selective Muon Trigger Exploiting the High Spatial Resolution of Monitored Drift-Tube Chambers for the ATLAS Experiment at the HL-LHC

    CERN Document Server

    Kortner, Oliver; The ATLAS collaboration

    2018-01-01

    The High-Luminosity LHC will provide the unique opportunity to explore the nature of physics beyond the Standard Model. Highly selective first level triggers are essential for the physics programme of the ATLAS experiment at the HL-LHC, where the instantaneous luminosity will exceed the LHC design instantaneous luminosity by almost an order of magnitude. The ATLAS first level muon trigger rate is dominated by low momentum muons, selected due to the moderate momentum resolution of the current system. This first level trigger limitation can be overcome by including data from the precision muon drift tube (MDT) chambers. This requires the fast continuous transfer of the MDT hits to the off-detector trigger logic and a fast track reconstruction algorithm performed in the trigger logic. The feasibility of this approach was studied with LHC collision data and simulated data. Two main options for the hardware implementation are currently studied with demonstrators, an FPGA based option with an embedded ARM microproc...

  15. A new Highly Selective First Level ATLAS Muon Trigger With MDT Chamber Data for HL-LHC

    CERN Document Server

    Nowak, Sebastian; The ATLAS collaboration

    2015-01-01

    Highly selective first level triggers are essential for the physics programme of the ATLAS experiment at the HL-LHC, where the instantaneous luminosity will exceed the LHC's instantaneous luminosity by almost an order of magnitude. The ATLAS first level muon trigger rate is dominated by low-momentum muons below the nominal trigger threshold, selected because of the poor momentum resolution at trigger level caused by the moderate spatial resolution of the resistive plate and thin gap trigger chambers. This limitation can be overcome by including the data of the precision muon drift tube (MDT) chambers in the first level trigger decision. This requires the implementation of a fast MDT read-out chain and a fast MDT track reconstruction. A hardware demonstrator of the fast read-out chain was successfully tested under HL-LHC operating conditions at CERN's Gamma Irradiation Facility. It could be shown that the data provided by the demonstrator can be processed with a fast track reconstruction algorithm on an ARM CPU within the 6 microseconds latency...

  16. The ATLAS High Level Trigger Configuration and Steering, Experience with the First 7 TeV Collisions

    CERN Document Server

    Stelzer, J; The ATLAS collaboration

    2011-01-01

    In March 2010, the four LHC experiments saw the first proton-proton collisions at a center-of-mass energy of 7 TeV. Within the year, a collision rate of nearly 10 MHz was expected. At ATLAS, events of potential physics interest are selected by a three-level trigger system, with a final recording rate of about 200 Hz. The first level (L1) is implemented in customized hardware; the two levels of the high level trigger (HLT) are software triggers. For the ATLAS physics program, more than 500 trigger signatures are defined. The HLT tests each signature on each L1-accepted event, and the test outcome is recorded for later analysis. The HLT-Steering is responsible for this. It foremost ensures the independence of each signature test and unbiased trigger decisions. Yet, to minimize data readout and execution time, cached detector data and once-calculated trigger objects are reused to form the decision. Some signature tests are performed only on a scaled-down fraction of candidate events, in order to reduce the...

  17. Reliability test and failure analysis of high power LED packages

    International Nuclear Information System (INIS)

    Chen Zhaohui; Zhang Qin; Wang Kai; Luo Xiaobing; Liu Sheng

    2011-01-01

    A new type of application-specific light emitting diode (LED) package (ASLP) with a freeform polycarbonate lens for street lighting is developed, whose manufacturing processes are compatible with a typical LED packaging process. The reliability test methods and failure criteria from different vendors are reviewed and compared. It is found that test methods and failure criteria are quite different, and rapid reliability assessment standards are urgently needed for the LED industry. An 85 °C/85% RH test with 700 mA is used on our LED modules alongside those of three other vendors for 1000 h, showing no visible degradation in optical performance for our modules, while modules from two of the other vendors showed significant degradation. Failure analysis methods such as C-SAM, nano X-ray CT and optical microscopy are used on the LED packages. Failure mechanisms such as delaminations and cracks are detected in the LED packages after the accelerated reliability testing. The finite element simulation method is helpful for the failure analysis and the reliability design of LED packaging. One example shows that a module currently used in industry is vulnerable and may not easily pass harsh thermal cycle testing. (semiconductor devices)

  18. RF-MEMS capacitive switches with high reliability

    Science.gov (United States)

    Goldsmith, Charles L.; Auciello, Orlando H.; Carlisle, John A.; Sampath, Suresh; Sumant, Anirudha V.; Carpick, Robert W.; Hwang, James; Mancini, Derrick C.; Gudeman, Chris

    2013-09-03

    A reliable, long-life RF-MEMS capacitive switch is provided with a dielectric layer comprising a "fast discharge diamond dielectric layer", enabling rapid switch recovery and efficient, effective dielectric layer charging and discharging, so that RF-MEMS switch operation to greater than or equal to 100 billion cycles is enabled.

  19. Models of Information Security Highly Reliable Computing Systems

    Directory of Open Access Journals (Sweden)

    Vsevolod Ozirisovich Chukanov

    2016-03-01

    Full Text Available Methods of combined redundancy are considered. Reliability models of systems that account for the restoration and preventive-maintenance parameters of the system blocks are described. Relations for the average number of preventive-maintenance actions and for the availability factor of the system blocks are given.

  20. reliability reliability

    African Journals Online (AJOL)

    eobe

    Corresponding author, Tel: +234-703. RELIABILITY .... V , , given by the code of practice. However, checks must .... an optimization procedure over the failure domain F corresponding .... of Concrete Members based on Utility Theory,. Technical ...

  1. Studies of Read-Out Electronics and Trigger for Muon Drift Tube Detectors at High Luminosities

    CERN Document Server

    Nowak, Sebastian

    The Large Hadron Collider (LHC) at the European Centre for Particle Physics, CERN, collides protons with an unprecedentedly high centre-of-mass energy and luminosity. The collision products are recorded and analysed by four big experiments, one of which is the ATLAS detector. For precise measurements of the properties of the Higgs-Boson and searches for new phenomena beyond the Standard Model, the LHC luminosity of $L=10^{34}cm^{-2}s^{-1}$ is planned to be increased by a factor of ten leading to the High Luminosity LHC (HL-LHC). In order to cope with the higher background and data rates, the LHC experiments need to be upgraded. In this thesis, studies for the upgrade of the ATLAS Muon Spectrometer are presented with respect to the read-out electronics of the Monitored Drift Tube (MDT) and the small-diameter Muon Drift Tube (sMDT) chambers and the Level-1 muon trigger. Due to the reduced tube diameter of sMDT chambers, background occupancy and space charge effects are suppressed by an order of magnitude compar...

  2. Triggering the Chemical Instability of an Ionic Liquid under High Pressure.

    Science.gov (United States)

    Faria, Luiz F O; Nobrega, Marcelo M; Temperini, Marcia L A; Bini, Roberto; Ribeiro, Mauro C C

    2016-09-01

    Ionic liquids are an interesting class of materials due to their distinguished properties, allowing their use in an impressive range of applications, from catalysis to hypergolic fuels. However, the reactivity triggered by the application of high pressure can give rise to a new class of materials, which is not achieved under normal conditions. Here, we report on the high-pressure chemical instability of the ionic liquid 1-allyl-3-methylimidazolium dicyanamide, [allylC1im][N(CN)2], probed by both Raman and IR techniques and supported by quantum chemical calculations. Our results show a reaction occurring above 8 GPa, involving the terminal double bond of the allyl group, giving rise to an oligomeric product. The results presented herein contribute to our understanding of the stability of ionic liquids, which is of paramount interest for engineering applications. Moreover, gaining insight into this peculiar kind of reactivity could lead to the development of new or alternative synthetic routes to achieve, for example, poly(ionic liquids).

  3. Flexible event reconstruction software chains with the ALICE High-Level Trigger

    International Nuclear Information System (INIS)

    Ram, D; Breitner, T; Szostak, A

    2012-01-01

    The ALICE High-Level Trigger (HLT) has a large high-performance computing cluster at CERN whose main objective is to perform real-time analysis on the data generated by the ALICE experiment and scale it down to at most 4 GB/s, which is the current maximum mass-storage bandwidth available. Data flow in this cluster is controlled by a custom-designed software framework. It consists of a set of components which can communicate with each other via a common control interface. The software framework also supports the creation of different configurations based on the detectors participating in the HLT. These configurations define a logical data-processing “chain” of detector data-analysis components. Data flows through this software chain in a pipelined fashion, so that several events can be processed at the same time. An instance of such a chain can run and manage a few thousand physics analysis and data-flow components. The HLT software and the configuration scheme used in the 2011 heavy-ion runs of ALICE are discussed in this contribution.
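
    The chain concept described above — independent components connected into a pipelined data-flow graph — can be sketched generically. The component names and interface below are illustrative assumptions, not the actual ALICE HLT framework API:

    ```python
    from typing import Callable, Iterable, Iterator

    # A processing component transforms a stream of events into a stream
    # of (possibly reduced) events. Chaining Python generators gives
    # pipelining for free: an event flows through the whole chain while
    # later events are still being produced upstream.
    Component = Callable[[Iterator[dict]], Iterator[dict]]

    def cluster_finder(events: Iterator[dict]) -> Iterator[dict]:
        for ev in events:
            yield dict(ev, clusters=len(ev["raw_hits"]) // 4)  # toy clustering

    def tracker(events: Iterator[dict]) -> Iterator[dict]:
        for ev in events:
            yield dict(ev, tracks=ev["clusters"] // 2)  # toy track building

    def build_chain(source: Iterable[dict],
                    components: list) -> Iterator[dict]:
        """Compose components into one logical processing chain."""
        stream: Iterator[dict] = iter(source)
        for comp in components:
            stream = comp(stream)
        return stream

    raw = [{"raw_hits": list(range(40))}, {"raw_hits": list(range(16))}]
    out = list(build_chain(raw, [cluster_finder, tracker]))
    ```

    In the real framework the components are separate processes on cluster nodes communicating through a control interface, but the composition principle is the same.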

  4. Triggering, front-end electronics, and data acquisition for high-rate beauty experiments

    International Nuclear Information System (INIS)

    Johnson, M.; Lankford, A.J.

    1988-04-01

    The working group explored the feasibility of building a trigger and an electronics data acquisition system for both collider and fixed target experiments. There appears to be no fundamental technical limitation arising from either the rate or the amount of data for a collider experiment. The fixed target experiments will likely require a much higher rate because of the smaller cross section. Rates up to one event per RF bucket (50 MHz) appear to be feasible. Higher rates depend on the details of the particular experiment and trigger. Several ideas were presented on multiplicity jump and impact parameter triggers for fixed target experiments. 14 refs., 3 figs

  5. A Novel in situ Trigger Combination Method

    International Nuclear Information System (INIS)

    Buzatu, Adrian; Warburton, Andreas; Krumnack, Nils; Yao, Wei-Ming

    2012-01-01

    Searches for rare physics processes using particle detectors in high-luminosity colliding hadronic beam environments require the use of multi-level trigger systems to reject colossal background rates in real time. In analyses like the search for the Higgs boson, there is a need to maximize the signal acceptance by combining multiple different trigger chains when forming the offline data sample. In such statistically limited searches, datasets are often amassed over periods of several years, during which the trigger characteristics evolve and their performance can vary significantly. Reliable production cross-section measurements and upper limits must take into account a detailed understanding of the effective trigger inefficiency for every selected candidate event. We present as an example the complex situation of three trigger chains, based on missing energy and jet energy, to be combined in the context of the search for the Higgs (H) boson produced in association with a W boson at the Collider Detector at Fermilab (CDF). We briefly review the existing techniques for combining triggers, namely the inclusion, division, and exclusion methods. We introduce and describe a novel fourth in situ method whereby, for each candidate event, only the trigger chain with the highest a priori probability of selecting the event is considered. The in situ combination method has advantages of scalability to large numbers of differing trigger chains and of insensitivity to correlations between triggers. We compare the inclusion and in situ methods for signal event yields in the CDF WH search.
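
    The in situ method described above can be sketched as follows. The chain names, per-chain efficiency models, and event fields are illustrative assumptions, not the CDF implementation:

    ```python
    from typing import Callable, Dict

    # Hypothetical per-chain trigger-efficiency models: each maps an
    # event's kinematics to the a priori probability that the chain
    # selects the event.
    EfficiencyModel = Callable[[dict], float]

    def in_situ_weight(event: dict,
                       chains: Dict[str, EfficiencyModel]) -> float:
        """In situ combination: for each candidate event, consider only
        the trigger chain with the highest a priori selection
        probability, and weight the event by 1/efficiency of that
        single chain."""
        best_chain = max(chains, key=lambda name: chains[name](event))
        eff = chains[best_chain](event)
        if eff <= 0.0:
            raise ValueError("event has no efficient trigger chain")
        return 1.0 / eff

    # Toy models (illustrative only): efficiencies depending on missing
    # transverse energy (MET) and jet transverse energy.
    chains = {
        "MET_only": lambda ev: min(1.0, ev["met"] / 100.0),
        "MET_plus_jets": lambda ev: min(1.0, ev["met"] / 150.0
                                             + ev["jet_et"] / 200.0),
    }
    event = {"met": 60.0, "jet_et": 80.0}
    weight = in_situ_weight(event, chains)  # picks "MET_plus_jets"
    ```

    Because each event is attributed to exactly one chain, the method scales to many chains and is insensitive to correlations between triggers, as noted in the abstract.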

  6. Alternative ceramic circuit constructions for low cost, high reliability applications

    International Nuclear Information System (INIS)

    Modes, Ch.; O'Neil, M.

    1997-01-01

    The growth in the use of hybrid circuit technology has lately been challenged by advances in low-cost laminate technology, as well as by the continued integration of functions into ICs. Size reduction of hybrid 'packages' has turned out to be a means to extend the useful life of this technology. The suppliers of thick film materials technology have responded to this challenge by developing a number of technology options to reduce circuit size, increase density, and reduce overall cost, while maintaining or increasing reliability. This paper provides an overview of the processes that have been developed and, in many cases, are used widely to produce low-cost, reliable microcircuits. Comparisons of these circuit fabrication processes are made, with a discussion of the advantages and disadvantages of each technology. (author)

  7. DUAL-PROCESS, a highly reliable process control system

    International Nuclear Information System (INIS)

    Buerger, L.; Gossanyi, A.; Parkanyi, T.; Szabo, G.; Vegh, E.

    1983-02-01

    A multiprocessor process control system is described. During its development the reliability was the most important aspect because it is used in the computerized control of a 5 MW research reactor. DUAL-PROCESS is fully compatible with the earlier single processor control system PROCESS-24K. The paper deals in detail with the communication, synchronization, error detection and error recovery problems of the operating system. (author)

  8. Trends of HVDC technology - highly reliable converting equipment

    International Nuclear Information System (INIS)

    Muraoka, Yasuo; Kato, Yasushi; Watanabe, Atsumi; Kano, Takashi; Kawai, Tadao

    1983-01-01

    At present, DC power transmission in Japan is used practically for system interconnections of relatively small capacity, and the reliability of the AC-DC converting system has been proven by operational results to exceed the world level. However, for application of this system to large-capacity trunk power transmission in the future, it is desirable to raise the reliability of the converting equipment further and to develop stabilized control techniques in harmony with the connected AC system. Hitachi Ltd. has developed diversified system-related technologies centering around DC power transmission, and techniques for raising the reliability of converting equipment as capacities grow. In this report, the results and the future prospects are described. The recent trend of DC power transmission, the development of DC power transmission technology such as simulation analysis, the stable operation of a DC system connected to a weak AC system and DC independent transmission from nuclear power stations, and the technical development of directly light-triggered thyristor valves and control/protection equipment are reported. (Kako, I.)

  9. Consistent high clinical pregnancy rates and low ovarian hyperstimulation syndrome rates in high-risk patients after GnRH agonist triggering and modified luteal support

    DEFF Research Database (Denmark)

    Iliodromiti, Stamatina; Blockeel, Christophe; Tremellen, Kelton P

    2013-01-01

    Are clinical pregnancy rates satisfactory and the incidence of OHSS low after GnRH agonist trigger and modified intensive luteal support in patients with a high risk of ovarian hyperstimulation syndrome (OHSS)?

  10. Online Reconstruction and Calibration with Feedback Loop in the ALICE High Level Trigger

    Directory of Open Access Journals (Sweden)

    Rohr David

    2016-01-01

    ALICE is one of the major experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online computing farm, which reconstructs events recorded by the ALICE detector in real time. The most computing-intensive task is the reconstruction of the particle trajectories. The main tracking devices in ALICE are the Time Projection Chamber (TPC) and the Inner Tracking System (ITS). The HLT uses a fast GPU-accelerated algorithm for the TPC tracking based on the Cellular Automaton principle and the Kalman filter. ALICE employs gaseous subdetectors which are sensitive to environmental conditions such as ambient pressure and temperature, and the TPC is one of them. A precise reconstruction of particle trajectories requires the calibration of these detectors. As our first topic, we present recent optimizations to our GPU-based TPC tracking using the new GPU models we employ for the ongoing and upcoming data-taking period at the LHC. We also show our new approach to fast ITS standalone tracking. As our second topic, we present improvements to the HLT that facilitate online reconstruction, including a new flat data model and a new data flow chain. The calibration output is fed back to the reconstruction components of the HLT via a feedback loop. We conclude with an analysis of a first online calibration test under real conditions during the Pb-Pb run in November 2015, which was based on these new features.
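
    The abstract above names the Kalman filter as one ingredient of the HLT track fit. As a purely illustrative sketch of the Kalman predict/update principle, here is a scalar toy filter; it is not the ALICE implementation, and all noise parameters are invented.

```python
# Toy 1-D Kalman filter: repeatedly predict, then blend in a noisy
# measurement weighted by the Kalman gain. Illustrative only.
def kalman_step(x, P, z, q=0.01, r=0.1):
    """x: state estimate, P: its variance, z: measurement,
    q: process noise variance, r: measurement noise variance."""
    P = P + q                # predict: uncertainty grows by process noise
    K = P / (P + r)          # Kalman gain: how much to trust the measurement
    x = x + K * (z - x)      # pull the estimate toward the measurement
    P = (1 - K) * P          # the updated estimate is less uncertain
    return x, P

x, P = 0.0, 1.0              # deliberately wrong prior
for z in [0.9, 1.1, 1.0, 0.95]:
    x, P = kalman_step(x, P, z)
print(round(x, 2), round(P, 3))  # estimate converges toward the ~1.0 measurements
```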

  11. Coronary CT angiography using prospective ECG triggering. High diagnostic accuracy with low radiation dose

    International Nuclear Information System (INIS)

    Arnoldi, E.; Ramos-Duran, L.; Abro, J.A.; Costello, P.; Zwerner, P.L.; Schoepf, U.J.; Nikolaou, K.; Reiser, M.F.

    2010-01-01

    The purpose of this study was to evaluate the diagnostic performance of coronary CT angiography (coronary CTA) using prospective ECG triggering (PT) for the detection of significant coronary artery stenosis compared to invasive coronary angiography (ICA). A total of 20 patients underwent coronary CTA with PT using a 128-slice CT scanner (Definition™ AS+, Siemens) and ICA. All coronary CTA studies were evaluated for significant coronary artery stenoses (≥50% luminal narrowing) by 2 observers in consensus using the AHA 15-segment model. Findings in CTA were compared to those in ICA. For diagnosing significant coronary artery stenosis compared with ICA, coronary CTA using PT had a sensitivity of 88%/100%, a specificity of 95%/88%, a positive predictive value of 80%/92% and a negative predictive value of 97%/100% on per-segment/per-patient analysis, respectively. The mean effective radiation dose equivalent of CTA was 2.6±1 mSv. Coronary CTA using PT enables non-invasive diagnosis of significant coronary artery stenosis with high diagnostic accuracy in comparison to ICA and is associated with comparably low radiation exposure. (orig.)
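
    The four figures of merit quoted in the abstract (sensitivity, specificity, positive and negative predictive value) all derive from a standard 2×2 contingency table. A minimal sketch with invented counts, not the study's data:

```python
# Compute standard diagnostic-accuracy metrics from true/false
# positives and negatives. The counts below are hypothetical.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),  # stenoses correctly detected
        "specificity": tn / (tn + fp),  # healthy segments correctly cleared
        "ppv": tp / (tp + fp),          # trust in a positive CTA finding
        "npv": tn / (tn + fn),          # trust in a negative CTA finding
    }

m = diagnostic_metrics(tp=22, fp=5, tn=250, fn=3)
print({k: round(v, 2) for k, v in m.items()})
```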

  12. Identified particle yield associated with a high-$p_T$ trigger particle at the LHC

    CERN Document Server

    Veldhoen, Misha; van Leeuwen, Marco

    Identified particle production ratios are important observables, used to constrain models of particle production in heavy-ion collisions. Measurements of the inclusive particle ratios in central heavy-ion collisions showed an increase of the baryon-to-meson ratio at intermediate pT compared to proton-proton collisions, the so-called baryon anomaly. One possible explanation of the baryon anomaly is that partons from the thermalized, deconfined QCD matter hadronize in a different way than hadrons produced in a vacuum jet. In this work we extend previous measurements by measuring particle ratios in the yield associated with a high-pT trigger particle. These measurements can potentially further constrain models of particle production, since they are sensitive to the difference between particles from a jet and particles produced in the bulk. We start by developing a particle identification method that uses both the specific energy loss of a particle and the time of flight. From there, we presen...

  13. The ATLAS Data Acquisition and High Level Trigger Systems: Experience and Upgrade Plans

    CERN Document Server

    Hauser, R; The ATLAS collaboration

    2012-01-01

    The ATLAS DAQ/HLT system reduces the Level 1 rate of 75 kHz to a few kHz event-building rate after Level 2, and to a few hundred Hz output rate to disk. It has operated with an average data-taking efficiency of about 94% during recent years. The performance has far exceeded the initial requirements, with about 5 kHz event-building rate and 500 Hz output rate in 2012, driven mostly by physics requirements. Several improvements and upgrades are foreseen in the upcoming long shutdowns, both to simplify the existing architecture and to improve the performance. On the network side, new core switches will be deployed, and the use of 10 Gbit Ethernet links for critical areas is foreseen. An improved read-out system to replace the existing solution based on PCI is under development. A major evolution of the high-level trigger system foresees merging the Level 2 and Event Filter functionality, including event building, onto a single node. This will represent a big simplification of the existing system, while ...
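
    The rates quoted above imply the rejection factor contributed by each stage; a quick arithmetic check using the 2012 figures from the abstract:

```python
# Rejection factors implied by the quoted ATLAS trigger rates (2012).
l1_rate = 75_000   # Hz, Level 1 output rate
eb_rate = 5_000    # Hz, event-building rate after Level 2
out_rate = 500     # Hz, Event Filter output rate to disk

l2_rejection = l1_rate / eb_rate    # Level 2 keeps 1 event in 15
ef_rejection = eb_rate / out_rate   # Event Filter keeps 1 event in 10
total_rejection = l1_rate / out_rate
print(l2_rejection, ef_rejection, total_rejection)  # 15.0 10.0 150.0
```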

  14. Submarine landslides triggered by destabilization of high-saturation hydrate anomalies

    Science.gov (United States)

    Handwerger, Alexander L.; Rempel, Alan W.; Skarbek, Rob M.

    2017-07-01

    Submarine landslides occur along continental margins at depths that often intersect the gas hydrate stability zone, prompting suggestions that slope stability may be affected by perturbations that arise from changes in hydrate stability. Here we develop a numerical model to identify the conditions under which the destabilization of hydrates results in slope failure. Specifically, we focus on high-saturation hydrate anomalies at fine-grained to coarse-grained stratigraphic boundaries that can transmit bridging stresses that decrease the effective stress at sediment contacts and disrupt normal sediment consolidation. We evaluate slope stability before and after hydrate destabilization. Hydrate anomalies act to significantly increase the overall slope stability due to large increases in effective cohesion. However, when hydrate anomalies destabilize there is a loss of cohesion and increase in effective stress that causes the sediment grains to rapidly consolidate and generate pore pressures that can either trigger immediate slope failure or weaken the surrounding sediment until the pore pressure diffuses away. In cases where failure does not occur, the sediment can remain weakened for months. In cases where failure does occur, we quantify landslide dynamics using a rate and state frictional model and find that landslides can display either slow or dynamic (i.e., catastrophic) motion depending on the rate-dependent properties, size of the stress perturbation, and the size of the slip patch relative to a critical nucleation length scale. Our results illustrate the fundamental mechanisms through which the destabilization of gas hydrates can pose a significant geohazard.

  15. ALICE high-level trigger readout and FPGA processing in Run 2

    Energy Technology Data Exchange (ETDEWEB)

    Engel, Heiko; Kebschull, Udo [IRI, Goethe-Universitaet Frankfurt (Germany); Collaboration: ALICE-Collaboration

    2016-07-01

    The ALICE experiment uses the optical Detector Data Link (DDL) protocol to connect the detectors to the computing clusters of the Data Acquisition (DAQ) and High-Level Trigger (HLT). The interfaces between the clusters and the optical links are realized with FPGA boards. The HLT has replaced all of its interface boards with the Common Read-Out Receiver Card (C-RORC) for Run 2. This enables the read-out of detectors at higher link rates and makes it possible to extend the data pre-processing capabilities, like online cluster finding, already in the FPGA. The C-RORC is integrated transparently into the existing HLT data transport framework and the cluster monitoring and management infrastructure. The board has been in use since the start of LHC Run 2, and all ALICE data from and to the HLT, as well as all data from the TPC and the TRD, is handled by C-RORCs. This contribution gives an overview of the firmware and software status of the C-RORC in the HLT.

  16. Reliability and availability of high power proton accelerators

    International Nuclear Information System (INIS)

    Cho, Y.

    1999-01-01

    It has become increasingly important to address the issues of operational reliability and availability of an accelerator complex early in its design and construction phases. In this context, reliability addresses the mean time between failures and the failure rate, and availability takes into account the failure rate as well as the length of time required to repair the failure. Methods to reduce failure rates include reduction of the number of components and over-design of certain key components. Reduction of the on-line repair time can be achieved by judiciously designed hardware, quick-service spare systems and redundancy. In addition, provisions for easy inspection and maintainability are important for both reduction of the failure rate as well as reduction of the time to repair. The radiation safety exposure principle of ALARA (as low as reasonably achievable) is easier to comply with when easy inspection capability and easy maintainability are incorporated into the design. Discussions of past experience in improving accelerator availability, some recent developments, and potential R and D items are presented. (author)
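
    The distinction drawn above between reliability (failure rate, mean time between failures) and availability (which also folds in repair time) is captured by the standard steady-state formula A = MTBF / (MTBF + MTTR). A sketch with illustrative numbers, not accelerator data:

```python
# Steady-state availability from mean time between failures (MTBF)
# and mean time to repair (MTTR). Numbers are illustrative.
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

a_base = availability(mtbf_hours=100.0, mttr_hours=4.0)
a_fast_repair = availability(mtbf_hours=100.0, mttr_hours=2.0)
a_better_mtbf = availability(mtbf_hours=200.0, mttr_hours=4.0)
# Halving the repair time raises availability exactly as much as
# doubling MTBF, which is why the text stresses easy maintainability.
print(round(a_base, 4), round(a_fast_repair, 4), round(a_better_mtbf, 4))
```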

  17. Highly reliable computer network for real time system

    International Nuclear Information System (INIS)

    Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.

    1988-01-01

    Many computer networks have been studied, with different trends regarding the network architecture and the various protocols that govern data transfers and guarantee reliable communication among all nodes. A hierarchical network structure has been proposed to provide a simple and inexpensive way to realize a reliable real-time computer network. In such an architecture, all computers on the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over the serial channel. This level of computer network can be considered a local area computer network (LACN) that can be used in a nuclear power plant control system, since such a plant has geographically dispersed subsystems. Network expansion would be straightforward, the common channel being extended for each added computer (HOST). All the nodes are designed around a microprocessor chip to provide the required intelligence. A node can be divided into two sections, namely a common section that interfaces with the serial data channel and a private section that interfaces with the host computer. The latter naturally tends to have some variation in its hardware details to match the requirements of individual host computers. 7 figs

  18. The design and simulated performance of a fast Level 1 track trigger for the ATLAS High Luminosity Upgrade

    CERN Document Server

    Martensson, Mikael; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment at the High Luminosity LHC will face a fivefold increase in the number of interactions per bunch crossing relative to the ongoing Run 2. This will require a proportional improvement in rejection power at the earliest levels of the detector trigger system, while preserving good signal efficiency. One critical aspect of this improvement will be the implementation of precise track reconstruction, through which sharper trigger turn-on curves can be achieved, and b-tagging and tau-tagging techniques can in principle be implemented. The challenge of such a project comes in the development of a fast, custom electronic device integrated in the hardware based first trigger level of the experiment. This article will discuss the requirements, architecture and projected performance of the system in terms of tracking, timing and physics, based on detailed simulations. Studies are carried out using data from the strip subsystem only or both strip and pixel subsystems.

  19. Distributed control and monitoring of high-level trigger processes on the LHCb online farm

    CERN Document Server

    Vannerem, P; Jost, B; Neufeld, N

    2003-01-01

    The on-line data taking of the LHCb experiment at the future LHC collider will be controlled by a fully integrated and distributed Experiment Control System (ECS). The ECS will supervise both the detector operation (DCS) and the trigger and data acquisition (DAQ) activities of the experiment. These tasks require a large distributed information management system. The aim of this paper is to show how the control and monitoring of software processes such as trigger algorithms are integrated in the ECS of LHCb.

  20. Validity and Reliability of the Academic Resilience Scale in Turkish High School

    Science.gov (United States)

    Kapikiran, Sahin

    2012-01-01

    The present study aims to determine the validity and reliability of the Academic Resilience Scale in Turkish high schools. The participants of the study include 378 high school students in total (192 female and 186 male). A set of analyses was conducted in order to determine the validity and reliability of the scale. Firstly, both exploratory…

  1. Modelling aluminium wire bond reliability in high power OMP devices

    NARCIS (Netherlands)

    Kregting, R.; Yuan, C.A.; Xiao, A.; Bruijn, F. de

    2011-01-01

    In an RF power application such as the OMP, the wires are subjected to high current (because of the high power) and high temperature (because of the heat from the IC and Joule heating in the wire itself). Moreover, the wire shape is essential to the RF performance. Hence, the aluminium wire is

  2. Testing on a Large Scale Running the ATLAS Data Acquisition and High Level Trigger Software on 700 PC Nodes

    CERN Document Server

    Burckhart-Chromek, Doris; Adragna, P; Alexandrov, L; Amorim, A; Armstrong, S; Badescu, E; Baines, J T M; Barros, N; Beck, H P; Bee, C; Blair, R; Bogaerts, J A C; Bold, T; Bosman, M; Caprini, M; Caramarcu, C; Ciobotaru, M; Comune, G; Corso-Radu, A; Cranfield, R; Crone, G; Dawson, J; Della Pietra, M; Di Mattia, A; Dobinson, Robert W; Dobson, M; Dos Anjos, A; Dotti, A; Drake, G; Ellis, Nick; Ermoline, Y; Ertorer, E; Falciano, S; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Garitaonandia, H; Gaudio, G; George, S; Gesualdi-Mello, A; Gorini, B; Green, B; Haas, S; Haberichter, W N; Hadavand, H; Haeberli, C; Haller, J; Hansen, J; Hauser, R; Hillier, S J; Höcker, A; Hughes-Jones, R E; Joos, M; Kazarov, A; Kieft, G; Klous, S; Kohno, T; Kolos, S; Korcyl, K; Kordas, K; Kotov, V; Kugel, A; Landon, M; Lankford, A; Leahu, L; Leahu, M; Lehmann-Miotto, G; Le Vine, M J; Liu, W; Maeno, T; Männer, R; Mapelli, L; Martin, B; Masik, J; McLaren, R; Meessen, C; Meirosu, C; Mineev, M; Misiejuk, A; Morettini, P; Mornacchi, G; Müller, M; Garcia-Murillo, R; Nagasaka, Y; Negri, A; Padilla, C; Pasqualucci, E; Pauly, T; Perera, V; Petersen, J; Pope, B; Albuquerque-Portes, M; Pretzl, K; Prigent, D; Roda, C; Ryabov, Yu; Salvatore, D; Schiavi, C; Schlereth, J L; Scholtes, I; Sole-Segura, E; Seixas, M; Sloper, J; Soloviev, I; Spiwoks, R; Stamen, R; Stancu, S; Strong, S; Sushkov, S; Szymocha, T; Tapprogge, S; Teixeira-Dias, P; Torres, R; Touchard, F; Tremblet, L; Ünel, G; Van Wasen, J; Vandelli, W; Vaz-Gil-Lopes, L; Vermeulen, J C; von der Schmitt, H; Wengler, T; Werner, P; Wheeler, S; Wickens, F; Wiedenmann, W; Wiesmann, M; Wu, X; Yasu, Y; Yu, M; Zema, F; Zobernig, H; Computing In High Energy and Nuclear Physics

    2006-01-01

    The ATLAS Data Acquisition (DAQ) and High Level Trigger (HLT) software system will initially be comprised of 2000 PC nodes which take part in the control, event readout, second-level trigger and event filter operations. This high number of PCs will only be purchased shortly before data taking in 2007. The large CERN IT LXBATCH facility provided the opportunity to run online functionality tests in July 2005, over a period of 5 weeks, on a stepwise increasing farm size from 100 up to 700 dual PC nodes. The interplay between the control and monitoring software and the event readout, event building and trigger software was exercised for the first time as an integrated system on this large scale. It was also new to run algorithms in the online environment for the trigger selection and in the event filter processing tasks on a larger scale. A mechanism has been developed to package the offline software together with the DAQ/HLT software and to distribute it efficiently via peer-to-peer software to this large PC cluster. T...

  4. A proposed Drift Tubes-seeded muon track trigger for the CMS experiment at the High Luminosity-LHC

    CERN Document Server

    AUTHOR|(CDS)2070813; Lazzizzera, Ignazio; Vanini, Sara; Zotto, Pierluigi

    2016-01-01

    The LHC program at 13 and 14 TeV, after the observation of the candidate SM Higgs boson, will help clarify future subjects of study and shape the needed tools. Any upgrade of the LHC experiments for unprecedented luminosities, such as those of the High Luminosity-LHC, must then maintain the acceptance for electroweak processes that can lead to a detailed study of the properties of the candidate Higgs boson. The acceptance of the key lepton, photon and hadron triggers should be kept such that the overall physics acceptance, in particular for low-mass scale processes, can be the same as the one the experiments featured in 2012. In such a scenario, a new approach to early trigger implementation is needed. One of the major steps will be the inclusion of high-granularity tracking sub-detectors, such as the CMS Silicon Tracker, in taking the early trigger decision. This contribution can be crucial in several tasks, including the confirmation of triggers in other subsystems, and the improvement of the on-line momentum mea...

  5. Methods for qualification of highly reliable software - international procedure

    International Nuclear Information System (INIS)

    Kersken, M.

    1997-01-01

    Despite the advantages of computer-assisted safety technology, some uneasiness can still be observed with respect to the novel processes, resulting from the absence of a body of generally accepted and uncontentious qualification guides (regulatory provisions, standards) for the safety evaluation of the computer codes applied. Warranty of adequate protection of the population, operators and plant components is an essential aspect in this context, as it is in general with reliability and risk assessment of novel technology, so that, with appropriate legislation still missing, there currently is a licensing risk involved in the introduction of digital safety systems. Nevertheless, there is some agreement within the international community and among utility operators about what standards and measures should be applied for the qualification of software of relevance to plant safety. The standard IEC 880 /IEC 86/ in particular, in its original version, or national documents based on this standard, are applied in all countries using or planning to install such systems. A new supplement to this standard, document /IEC 96/, is in the process of finalization and defines the requirements to be met by modern methods of software engineering. (orig./DG)

  6. Trigger finger

    Science.gov (United States)

    Alternative names: trigger digit; trigger finger release; locked finger; digital flexor tenosynovitis. Call your health care provider if you notice yellow or green drainage from the cut, hand pain or discomfort, or fever. If your trigger finger returns, call your surgeon. You may need another surgery.

  7. M7--a high speed digital processor for second level trigger selections

    International Nuclear Information System (INIS)

    Droege, T.F.; Gaines, I.; Turner, K.J.

    1978-01-01

    A digital processor is described which reconstructs mass and momentum as a second-level trigger selection. The processor is a five-address, microprogrammed, pipelined ECL machine with simultaneous memory access to four operands, which load two parallel multipliers and an ALU. Source data modules are extensions of the processor.

  8. Design of robust reliable control for T-S fuzzy Markovian jumping delayed neutral type neural networks with probabilistic actuator faults and leakage delays: An event-triggered communication scheme.

    Science.gov (United States)

    Syed Ali, M; Vadivel, R; Saravanakumar, R

    2018-06-01

    This study examines the problem of robust reliable control for Takagi-Sugeno (T-S) fuzzy Markovian jumping delayed neural networks with probabilistic actuator faults and leakage terms under an event-triggered communication scheme. First, the randomly occurring actuator faults and their failure rates are governed by two sets of unrelated random variables; based on the probabilistic failures of every actuator, a new type of distribution-based event-triggered fault model is proposed, which accounts for the effect of transmission delay. Second, a T-S fuzzy model is adopted for the neural networks, and the randomness of actuator failures is modeled in a Markov jump framework. Third, to guarantee that the considered closed-loop system is exponentially mean-square stable with a prescribed reliable control performance, a Markov jump event-triggered scheme is designed, which is the main purpose of this study. Fourth, by constructing an appropriate Lyapunov-Krasovskii functional and employing the Newton-Leibniz formulation and integral inequalities, several delay-dependent criteria for the solvability of the addressed problem are derived. The obtained stability criteria are stated in terms of linear matrix inequalities (LMIs), which can be checked numerically using the LMI toolbox in MATLAB. Finally, numerical examples are given to illustrate the effectiveness and reduced conservatism of the proposed results over existing ones; one example is supported by a real-life benchmark problem.
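
    As a generic illustration of the event-triggered communication idea underlying this line of work, the state can be transmitted to the controller only when its deviation from the last transmitted value exceeds a threshold. The scalar dynamics, quadratic trigger condition and parameters below are invented and far simpler than the paper's Markov jump fuzzy setting:

```python
# Event-triggered transmission for a stable scalar system: send the
# state only when the squared error since the last transmission
# exceeds sigma * x^2. Entirely illustrative.
def simulate(sigma=0.1, steps=30):
    x, x_sent = 1.0, 1.0
    transmissions = 0
    for _ in range(steps):
        x = 0.9 * x                           # simple stable dynamics
        if (x - x_sent) ** 2 > sigma * x * x:  # trigger condition violated
            x_sent = x                        # transmit and reset the error
            transmissions += 1
    return transmissions

# The event trigger needs far fewer than one transmission per step.
print(simulate())
```

A larger sigma tolerates more deviation and so saves more network traffic, at the cost of control performance; the LMI conditions in papers like this one quantify that trade-off.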

  9. Fracture toughness and reliability in high-temperature structural ...

    Indian Academy of Sciences (India)

    Unknown

    advanced propulsion systems, such as gas turbine engines .... cing current commercial high strength SiC fibres such as. Nicalon ... of the polymer pyrolysis technique to produce CFCC. ... tural applications in aerospace, military, and industrial.

  10. Multidisciplinary Design Optimization for High Reliability and Robustness

    National Research Council Canada - National Science Library

    Grandhi, Ramana

    2005-01-01

    .... Over the last 3 years Wright State University has been applying analysis tools to predict the behavior of critical disciplines to produce highly robust torpedo designs using robust multi-disciplinary...

  11. Critical velocities for deflagration and detonation triggered by voids in a REBO high explosive

    Energy Technology Data Exchange (ETDEWEB)

    Herring, Stuart Davis [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Jensen, Niels G [Los Alamos National Laboratory

    2010-01-01

    The effects of circular voids on the shock sensitivity of a two-dimensional model high explosive crystal are considered. We simulate a piston impact using molecular dynamics simulations with a Reactive Empirical Bond Order (REBO) model potential for a sub-micron, sub-ns exothermic reaction in a diatomic molecular solid. The probability of initiating chemical reactions is found to rise more suddenly with increasing piston velocity for larger voids that collapse more deterministically. A void with radius as small as 10 nm reduces the minimum initiating velocity by a factor of 4. The transition at larger velocities to detonation is studied in a micron-long sample with a single void (and its periodic images). The reaction yield during the shock traversal increases rapidly with velocity, then becomes a prompt, reliable detonation. A void of radius 2.5 nm reduces the critical velocity by 10% from the perfect crystal. A Pop plot of the time-to-detonation at higher velocities shows a characteristic pressure dependence.

  12. Optimization of a PCRAM Chip for high-speed read and highly reliable reset operations

    Science.gov (United States)

    Li, Xiaoyun; Chen, Houpeng; Li, Xi; Wang, Qian; Fan, Xi; Hu, Jiajun; Lei, Yu; Zhang, Qi; Tian, Zhen; Song, Zhitang

    2016-10-01

    The widely used traditional Flash memory suffers from performance limits such as serious crosstalk problems and the increasing complexity of floating-gate scaling. Phase change random access memory (PCRAM) has become one of the most promising of the new nonvolatile memory techniques. In this paper, a 1M-bit PCRAM chip is designed based on the SMIC 40 nm CMOS technology. Focusing on the read and write performance, two new circuits are proposed: one for high-speed read operation and one for highly reliable reset operation. The high-speed read circuit effectively reduces the read time from 74 ns to 40 ns. The double-mode reset circuit improves the chip yield. The 1M-bit PCRAM chip has been simulated in Cadence. After the layout design is completed, the chip will be taped out for testing.

  13. A Novel Highly Ionizing Particle Trigger using the ATLAS Transition Radiation Tracker

    CERN Document Server

    Penwell, J; The ATLAS collaboration

    2011-01-01

    The ATLAS Transition Radiation Tracker (TRT) is an important part of the experiment’s charged particle tracking system. It also provides the ability to discriminate electrons from pions efficiently using large signal amplitudes induced in the TRT straw tubes by transition radiation. This amplitude information can also be used to identify heavily ionizing particles, such as monopoles, or Q-balls, that traverse the straws. Because of their large ionization losses, these particles can range out before they reach the ATLAS calorimeter, making them difficult to identify by the experiment’s first level trigger. Much of this inefficiency could be regained by making use of a feature of the TRT electronics that allows fast access to information on whether large-amplitude signals were produced in regions of the detector. A modest upgrade to existing electronics could allow triggers sensitive to heavily ionizing particles at level-1 to be constructed by counting such large-amplitude signals in roads corresponding to...

  14. ATLAS High Level Calorimeter Trigger Software Performance for Cosmic Ray Events

    CERN Document Server

    Oliveira Damazio, Denis; The ATLAS collaboration

    2009-01-01

    The ATLAS detector is undergoing an intense commissioning effort with cosmic rays in preparation for the first LHC collisions next spring. Combined runs with all of the ATLAS subsystems are being taken in order to evaluate the detector performance. This is also a unique opportunity for the trigger system to be studied with different detector operation modes, such as different event rates and detector configurations. The ATLAS trigger starts with a hardware-based system which tries to identify detector regions where interesting physics objects may be found (e.g. large energy depositions in the calorimeter system). An approved event will be further processed by more complex software algorithms at the second level, where detailed features are extracted (full-granularity data for small portions of the detector are available). Events accepted at this level will be further processed at the so-called event filter level. Full detector data at full granularity are available for offline-like processing with complete calib...

  15. An overview of the reliability prediction related aspects of high power IGBTs in wind power applications

    DEFF Research Database (Denmark)

    Busca, Christian; Teodorescu, Remus; Blaabjerg, Frede

    2011-01-01

    Reliability is becoming more and more important as the size and number of installed Wind Turbines (WTs) increases. Very high reliability is especially important for offshore WTs because the maintenance and repair of such WTs in case of failures can be very expensive. WT manufacturers need...

  16. 76 FR 72203 - Voltage Coordination on High Voltage Grids; Notice of Reliability Workshop Agenda

    Science.gov (United States)

    2011-11-22

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. AD12-5-000] Voltage Coordination on High Voltage Grids; Notice of Reliability Workshop Agenda As announced in the Notice of Staff..., from 9 a.m. to 4:30 p.m. to explore the interaction between voltage control, reliability, and economic...

  17. GPUs for real-time processing in HEP trigger systems (CHEP2013: 20. international conference on computing in high energy and nuclear physics)

    Energy Technology Data Exchange (ETDEWEB)

    Lamanna, G; Piandani, R [INFN, Pisa (Italy)]; Ammendola, R [INFN, Rome "Tor Vergata" (Italy)]; Bauce, M; Giagu, S; Messina, A [University, Rome "Sapienza" (Italy)]; Biagioni, A; Lonardo, A; Paolucci, P S; Rescigno, M; Simula, F; Vicini, P [INFN, Rome "Sapienza" (Italy)]; Fantechi, R [CERN, Geneve (Switzerland)]; Fiorini, M [University and INFN, Ferrara (Italy)]; Graverini, E; Pantaleo, F; Sozzi, M [University, Pisa (Italy)]

    2014-06-11

    We describe a pilot project for the use of Graphics Processing Units (GPUs) for online triggering applications in High Energy Physics (HEP) experiments. Two major trends can be identified in the development of trigger and DAQ systems for HEP experiments: the massive use of general-purpose commodity systems such as commercial multicore PC farms for data acquisition, and the reduction of trigger levels implemented in hardware, towards a pure software selection system (trigger-less). The innovative approach presented here aims at exploiting the parallel computing power of commercial GPUs to perform fast computations in software at both low- and high-level trigger stages. General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies and the increase in link and memory throughputs, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming very attractive. We discuss in detail the use of online parallel computing on GPUs for a synchronous low-level trigger with fixed latency. In particular, we show preliminary results from a first test in the NA62 experiment at CERN. The use of GPUs in high-level triggers is also considered; the ATLAS experiment (and in particular the muon trigger) at CERN will be taken as a case study of possible applications.

  18. GPUs for real-time processing in HEP trigger systems (CHEP2013: 20. international conference on computing in high energy and nuclear physics)

    International Nuclear Information System (INIS)

    Lamanna, G; Piandani, R; Ammendola, R [INFN, Rome "Tor Vergata" (Italy)]; Bauce, M; Giagu, S; Messina, A [University, Rome "Sapienza" (Italy)]; Biagioni, A; Lonardo, A; Paolucci, P S; Rescigno, M; Simula, F; Vicini, P [INFN, Rome "Sapienza" (Italy)]; Fantechi, R; Fiorini, M; Graverini, E; Pantaleo, F; Sozzi, M

    2014-01-01

    We describe a pilot project for the use of Graphics Processing Units (GPUs) for online triggering applications in High Energy Physics (HEP) experiments. Two major trends can be identified in the development of trigger and DAQ systems for HEP experiments: the massive use of general-purpose commodity systems such as commercial multicore PC farms for data acquisition, and the reduction of trigger levels implemented in hardware, towards a pure software selection system (trigger-less). The innovative approach presented here aims at exploiting the parallel computing power of commercial GPUs to perform fast computations in software at both low- and high-level trigger stages. General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies and the increase in link and memory throughputs, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming very attractive. We discuss in detail the use of online parallel computing on GPUs for a synchronous low-level trigger with fixed latency. In particular, we show preliminary results from a first test in the NA62 experiment at CERN. The use of GPUs in high-level triggers is also considered; the ATLAS experiment (and in particular the muon trigger) at CERN will be taken as a case study of possible applications.

  19. Leading Change: Transitioning the AFMS into a High Reliability Organization

    Science.gov (United States)

    2016-02-16

    HROs, such as commercial aviation and nuclear power plants, make safety the focus of their organizational culture. Healthcare must become a high... nuclear industry and commercial aviation are examples of successful HROs. They achieve their goal of near-zero errors by maintaining a culture of... Aviation Safety Network declared 2012 “the safest year for air travel since 1945.” There was only one fatal crash for every 2.5 million flights, an

  20. High reliability EPI-base radiation hardened power transistor

    International Nuclear Information System (INIS)

    Clark, L.E.; Saltich, J.L.

    1978-01-01

    A high-voltage power transistor is described which is able to withstand fluences as high as 3 x 10^14 neutrons per square centimeter and still operate satisfactorily. The collector may be made essentially half as thick and twice as heavily doped as normal, and its base is made in two regions which together are essentially four times as thick as the normal power transistor base region. The base region has a heavily doped upper region and a lower region intermediate between the upper heavily doped region and the collector. The doping in the intermediate region is as close to intrinsic as possible, in any event less than about 3 x 10^15 impurities per cubic centimeter. The second base region has a small width in comparison to the first base region, the ratio of the first to the second being at least about 5 to 1. The base region comprising the upper heavily doped region and the intermediate or lower low-doped region contributes to the higher breakdown voltage which the transistor is able to withstand. The high doping of the collector region essentially lowers that portion of the breakdown voltage achieved by the collector region. Accordingly, it is necessary to transfer some of this breakdown capability to the base region, and this is achieved by using an upper region of heavy doping and an intermediate or lower region of low doping.

  1. Performance and reliability of TPE-2 device with pulsed high power source

    International Nuclear Information System (INIS)

    Sato, Y.; Takeda, S.; Kiyama, S.

    1987-01-01

    The performance and reliability of the TPE-2 device with pulsed high-power sources are described. To obtain a stable high-beta plasma, the reproducibility and reliability of the pulsed power sources must be maintained. A new power crowbar system with high efficiency and switches with low jitter time have been adopted for the bank system. A monitor system which continuously watches the operational state of the switches has also been developed and applied to the fast-rising capacitor banks of the TPE-2 device. Reliable operation of the banks has been achieved, based on the data of the switch monitor system.

  2. Pressurizer pump reliability analysis high flux isotope reactor

    International Nuclear Information System (INIS)

    Merryman, L.; Christie, B.

    1993-01-01

    During a prolonged outage from November 1986 to May 1990, numerous changes were made at the High Flux Isotope Reactor (HFIR). Some of these changes involved the pressurizer pumps. An analysis was performed to calculate the impact of these changes on the pressurizer system availability. The analysis showed that the availability of the pressurizer system dropped from essentially 100% to approximately 96%. The primary reason for the decrease in availability is that off-site power grid disturbances sometimes result in a reactor trip with the present pressurizer pump configuration. Changes are being made to the present pressurizer pump configuration to regain some of the lost availability.

  3. High-temperature brazing for reliable tungsten-CFC joints

    International Nuclear Information System (INIS)

    Koppitz, Th; Pintsuk, G; Reisgen, U; Remmel, J; Hirai, T; Sievering, R; Rojas, Y; Casalegno, V

    2007-01-01

    The joining of tungsten and carbon-based materials is demanding due to the incompatibility of their chemical and thermophysical properties. Direct joining is infeasible because of brittle tungsten carbide formation. High-temperature brazing has been investigated in order to find a suitable brazing filler metal (BFM) which successfully mediates between the incompatible properties of the base materials. So far only low-Cr-alloyed Cu-based BFMs provide the preferred combination of good wetting action on both materials, tolerable interface reactions, and a precipitation-free braze joint. Attempts to employ a higher-melting metal (e.g. Pd, Ti, Zr) as a BFM have failed up to now, because the formation of brittle precipitates and pores in the seam was inevitable. But the wide metallurgical complexity of this issue is regarded as offering further joining potential.

  4. Architecture of high reliable control systems using complex software

    International Nuclear Information System (INIS)

    Tallec, M.

    1990-01-01

    The problems involved in the use of complex software in control systems that must ensure a very high level of safety are examined. The first part gives a brief description of the prototype of the PROSPER system. PROSPER stands for protection system for nuclear reactors with high performance. It was installed in a French nuclear power plant at the beginning of 1987 and has been working continually since that time. This prototype is realized on a multi-processor system. The processors communicate among themselves using interrupts and protected shared memories. On each processor, one or more protection algorithms are implemented. Those algorithms use data coming directly from the plant and, possibly, data computed by the other protection algorithms. Each processor makes its own acquisitions from the process and sends warning messages if an operating anomaly is detected. All algorithms are activated concurrently in an asynchronous way. The results are presented and the safety-related problems are detailed. - The second part concerns measurement validation. First, we describe how the sensors' measurements are used in a protection system. Then, a method based on artificial intelligence techniques (expert systems and neural networks) is proposed. - The last part addresses the architecture of systems including hardware and software: the different types of redundancy used so far are detailed, along with a proposed multi-processor architecture using an operating system able to manage several tasks implemented on different processors, verify the correct operation of each of those tasks and of the related processors, and allow the system to carry on operating, even in a degraded manner, when a failure has been detected [fr]

  5. The Berg Balance Scale has high intra- and inter-rater reliability but absolute reliability varies across the scale: a systematic review.

    Science.gov (United States)

    Downs, Stephen; Marquez, Jodie; Chiarelli, Pauline

    2013-06-01

    What is the intra-rater and inter-rater relative reliability of the Berg Balance Scale? What is the absolute reliability of the Berg Balance Scale? Does the absolute reliability of the Berg Balance Scale vary across the scale? Systematic review with meta-analysis of reliability studies. Any clinical population that has undergone assessment with the Berg Balance Scale. Relative intra-rater reliability, relative inter-rater reliability, and absolute reliability. Eleven studies involving 668 participants were included in the review. The relative intra-rater reliability of the Berg Balance Scale was high, with a pooled estimate of 0.98 (95% CI 0.97 to 0.99). Relative inter-rater reliability was also high, with a pooled estimate of 0.97 (95% CI 0.96 to 0.98). A ceiling effect of the Berg Balance Scale was evident for some participants. In the analysis of absolute reliability, all of the relevant studies had an average score of 20 or above on the 0 to 56 point Berg Balance Scale. The absolute reliability across this part of the scale, as measured by the minimal detectable change with 95% confidence, varied between 2.8 points and 6.6 points. The Berg Balance Scale has a higher absolute reliability when close to 56 points due to the ceiling effect. We identified no data that estimated the absolute reliability of the Berg Balance Scale among participants with a mean score below 20 out of 56. The Berg Balance Scale has acceptable reliability, although it might not detect modest, clinically important changes in balance in individual subjects. The review was only able to comment on the absolute reliability of the Berg Balance Scale among people with moderately poor to normal balance. Copyright © 2013 Australian Physiotherapy Association. All rights reserved.
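
    The minimal detectable change (MDC) figures in this record follow from the standard error of measurement: SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. A minimal sketch of that arithmetic (the sample standard deviation of 7.2 points is a hypothetical value chosen for illustration; 0.98 is the pooled intra-rater ICC reported above):

    ```python
    import math

    def mdc95(sd: float, icc: float) -> float:
        """Minimal detectable change at 95% confidence.

        SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * sqrt(2) * SEM.
        """
        sem = sd * math.sqrt(1.0 - icc)
        return 1.96 * math.sqrt(2.0) * sem

    # Hypothetical sample SD of 7.2 points with the pooled ICC of 0.98:
    print(round(mdc95(7.2, 0.98), 1))  # -> 2.8 points
    ```

    With these (assumed) inputs the result lands at the lower end of the 2.8 to 6.6 point range quoted in the review; a larger SD or lower ICC pushes the MDC up.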

  6. Online Calibration of the TPC Drift Time in the ALICE High Level Trigger

    Science.gov (United States)

    Rohr, David; Krzewicki, Mikolaj; Zampolli, Chiara; Wiechula, Jens; Gorbunov, Sergey; Chauvin, Alex; Vorobyev, Ivan; Weber, Steffen; Schweda, Kai; Lindenstruth, Volker

    2017-06-01

    A Large Ion Collider Experiment (ALICE) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. The high level trigger (HLT) is a compute cluster which reconstructs collisions as recorded by the ALICE detector in real time. It employs a custom online data-transport framework to distribute data and workload among the compute nodes. ALICE employs subdetectors that are sensitive to environmental conditions such as pressure and temperature, e.g., the time projection chamber (TPC). A precise reconstruction of particle trajectories requires calibration of these detectors. Performing calibration in real time in the HLT improves the online reconstruction and renders certain offline calibration steps obsolete, speeding up offline physics analysis. For LHC Run 3, starting in 2020 when data reduction will rely on reconstructed data, online calibration becomes a necessity. Reconstructed particle trajectories form the basis for the calibration, making fast online tracking mandatory. The main detectors used for this purpose are the TPC and the Inner Tracking System. Reconstructing the trajectories in the TPC is the most compute-intensive step. We present several improvements to the ALICE HLT developed to facilitate online calibration. The main new development is a wrapper that can run ALICE offline analysis and calibration tasks inside the HLT. In addition, we have added asynchronous processing capabilities to support long-running calibration tasks in the HLT framework, which otherwise runs event-synchronously. In order to improve resiliency, an isolated process performs the asynchronous operations such that even a fatal error does not disturb data taking. We have complemented the original loop-free HLT chain with ZeroMQ data-transfer components. The ZeroMQ components facilitate a feedback loop that inserts the calibration result created at the end of the chain back into the tracking components at the beginning of the chain, after a

  7. The source of X-rays and high-charged ions based on moderate power vacuum discharge with laser triggering

    Directory of Open Access Journals (Sweden)

    Alkhimova Mariya A.

    2015-06-01

    The source of X-ray radiation with quantum energies that may vary in the range hν = 1–12 keV was developed for studies of X-ray interaction with matter and modification of solid surfaces. It is based on a vacuum spark discharge with laser triggering. Our experiments showed that the X-ray spectrum can be adjusted by changing the configuration of the electrode system when the energy stored in the capacitor is varied within the range of 1–17 J. A comprehensive study of X-ray imaging and quantum energies was carried out. These experiments were performed for both direct and reverse polarity of the voltage on the electrodes. Additionally, the ion composition of the plasma created in the laser-triggered vacuum discharge was analyzed. Highly charged ions Zn(+21), Cu(+20) and Fe(+18) were observed.

  8. Improved Yield, Performance and Reliability of High-Actuator-Count Deformable Mirrors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The project team will conduct processing and design research aimed at improving yield, performance, and reliability of high-actuator-count micro-electro-mechanical...

  9. On the design of high-rise buildings with a specified level of reliability

    Science.gov (United States)

    Dolganov, Andrey; Kagan, Pavel

    2018-03-01

    High-rise buildings have specific features that significantly distinguish them from traditional multi-storey buildings. Steel structures in high-rise buildings are advisable in earthquake-prone regions, since steel, due to its plasticity, provides damping of the kinetic energy of seismic impacts. These aspects should be taken into account when choosing the structural scheme of a high-rise building and designing its load-bearing structures. Currently, modern regulatory documents do not quantify the reliability of structures, although the problem of assigning an optimal level of reliability has existed for a long time. The article shows the possibility of designing the metal structures of high-rise buildings with a specified reliability. It is proposed to set the reliability value at 0.99865 (3σ) for buildings and structures of a normal level of responsibility in calculations for the first group of limit states. For increased (high-rise construction) and reduced levels of responsibility, it is proposed to assign 0.99997 (4σ) and 0.97725 (2σ), respectively, for the provision of load-bearing capacity. The utilization coefficients of the cross-section of a metal beam for different reliability levels are given.
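
    The three reliability targets quoted in this record are the standard normal cumulative probabilities at 2, 3 and 4 standard deviations. A minimal standard-library Python check reproduces them:

    ```python
    from math import erf, sqrt

    def phi(k: float) -> float:
        """Standard normal CDF: probability of staying below the k-sigma level."""
        return 0.5 * (1.0 + erf(k / sqrt(2.0)))

    for k in (2, 3, 4):
        print(f"{k} sigma -> reliability {phi(k):.5f}")
    # 2 sigma -> 0.97725 (reduced responsibility)
    # 3 sigma -> 0.99865 (normal responsibility)
    # 4 sigma -> 0.99997 (increased responsibility, e.g. high-rise)
    ```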

  10. Impact of High-Reliability Education on Adverse Event Reporting by Registered Nurses.

    Science.gov (United States)

    McFarland, Diane M; Doucette, Jeffrey N

    Adverse event reporting is one strategy to identify risks and improve patient safety, but, historically, adverse events have been underreported by registered nurses (RNs) because of fear of retribution and blame. An educational program on high reliability was provided to examine whether education would impact RNs' willingness to report adverse events. Although the findings were not statistically significant, they demonstrated a positive impact on adverse event reporting and support the need to create a culture of high reliability.

  11. A new high speed, Ultrascale+ based board for the ATLAS jet calorimeter trigger system

    CERN Document Server

    Rocco, Elena; The ATLAS collaboration

    2018-01-01

    To cope with the enhanced luminosity at the Large Hadron Collider (LHC) in 2021, the ATLAS collaboration is planning a major detector upgrade. As a part of this, the Level 1 trigger based on calorimeter data will be upgraded to exploit the fine-granularity readout using a new system of Feature EXtractors (FEX), each of which reconstructs different physics objects for the trigger selection. The jet FEX (jFEX) system is conceived to provide jet identification (including large-area jets) and measurements of global variables within a latency budget of less than 400 ns. It consists of 6 modules. A single jFEX module is an ATCA board with 4 large FPGAs of the Xilinx Ultrascale+ family that can digest a total input data rate of ~3.6 Tb/s using up to 120 Multi-Gigabit Transceivers (MGTs), 24 electrical-optical devices, and board control and power on mezzanines to allow flexibility in upgrading control functions and components without affecting the main board. The 24-layer stack-up was carefully designed to preserve the s...

  12. Reliability Evaluation on Creep Life Prediction of Alloy 617 for a Very High Temperature Reactor

    International Nuclear Information System (INIS)

    Kim, Woo-Gon; Hong, Sung-Deok; Kim, Yong-Wan; Park, Jae-Young; Kim, Seon-Jin

    2012-01-01

    This paper evaluates the reliability of creep rupture life under service conditions of Alloy 617, which is considered one of the candidate materials for use in a very high temperature reactor (VHTR) system. A Z-parameter, which represents the deviation of creep rupture data from the master curve, was used for the reliability analysis of the creep rupture data of Alloy 617. A Service-condition Creep Rupture Interference (SCRI) model, which can account for both the scattering of the creep rupture data and the fluctuations of temperature and stress under any service conditions, was also used for evaluating the reliability of creep rupture life. The statistical analysis showed that the scattering of creep rupture data based on the Z-parameter followed a normal distribution. The values of reliability decreased rapidly with increasing amplitude of the temperature and stress fluctuations. The results established that reliability decreases with increasing service time.
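
    As a rough illustration of the Z-parameter idea described above: Z is the deviation of the logarithm of an observed rupture life from the master curve, and under the normal scatter that the statistical analysis supports, the probability of exceeding a given design margin follows from the normal CDF. All numerical values below (lives, scatter sigma_Z, margin) are hypothetical; this is a sketch of the concept, not the paper's SCRI model:

    ```python
    from math import erf, log10, sqrt

    def z_parameter(observed_life_h: float, master_life_h: float) -> float:
        """Z = log10(observed rupture life) - log10(master-curve life), in decades."""
        return log10(observed_life_h) - log10(master_life_h)

    def exceedance_probability(z_design: float, sigma_z: float) -> float:
        """P(Z > z_design) assuming Z ~ N(0, sigma_z), i.e. normally
        distributed scatter about the master curve."""
        return 0.5 * (1.0 - erf(z_design / (sigma_z * sqrt(2.0))))

    # Hypothetical: a specimen failing at 1e3 h against a 1e4 h master-curve
    # prediction sits one decade below the curve.
    print(z_parameter(1.0e3, 1.0e4))  # -> -1.0

    # With a hypothetical scatter of 0.15 decades, the probability of
    # outliving a design margin set 2 sigma below the master curve:
    print(round(exceedance_probability(-2 * 0.15, 0.15), 5))  # -> 0.97725
    ```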

  13. Human reliability in high dose rate afterloading radiotherapy based on FMECA

    International Nuclear Information System (INIS)

    Deng Jun; Fan Yaohua; Yue Baorong; Wei Kedao; Ren Fuli

    2012-01-01

    Objective: To put forward reasonable and feasible recommendations for the procedures with relatively high risk during high dose rate (HDR) afterloading radiotherapy, so as to enhance its clinical application safety, by studying human reliability in the process of carrying out HDR afterloading radiotherapy. Methods: Basic data were collected by on-site investigation and process analysis as well as expert evaluation. Failure mode, effect and criticality analysis (FMECA) was employed to study human reliability in the execution of HDR afterloading radiotherapy. Results: The FMECA model of human reliability for HDR afterloading radiotherapy was established, through which 25 procedures with a relatively high risk index were found, accounting for 14.1% of the 177 procedures in total. Conclusions: The FMECA method is feasible for studying human reliability in HDR afterloading radiotherapy. Countermeasures are put forward to reduce human error, so as to provide an important basis for enhancing the clinical application safety of HDR afterloading radiotherapy. (authors)

  14. Multi-Agent System based Event-Triggered Hybrid Controls for High-Security Hybrid Energy Generation Systems

    DEFF Research Database (Denmark)

    Dou, Chun-Xia; Yue, Dong; Guerrero, Josep M.

    2017-01-01

    This paper proposes multi-agent system based event-triggered hybrid controls for guaranteeing the energy supply of a hybrid energy generation system with high security. First, a multi-agent system is constituted by an upper-level central coordinated control agent combined with several lower-level unit agents. Each lower-level unit agent is responsible for internal switching control and distributed dynamic regulation of its unit system. The upper-level agent implements coordinated switching control to guarantee the power supply of the overall system with high security. The internal...

  15. The design and simulated performance of a fast Level 1 track trigger for the ATLAS High Luminosity Upgrade

    CERN Document Server

    Martensson, Mikael; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment at the high-luminosity LHC will face a five-fold increase in the number of interactions per collision relative to the ongoing Run 2. This will require a proportional improvement in rejection power at the earliest levels of the detector trigger system, while preserving good signal efficiency. One critical aspect of this improvement will be the implementation of precise track reconstruction, through which sharper trigger turn-on curves can be achieved, and b-tagging and tau-tagging techniques can in principle be implemented. The challenge of such a project comes in the development of a fast, custom electronic device integrated in the hardware-based first trigger level of the experiment, with repercussions propagating as far as the detector read-out philosophy. This talk will discuss the requirements, architecture and projected performance of the system in terms of tracking, timing and physics, based on detailed simulations. Studies are carried out comparing two detector geometries and using...

  16. The role of high cycle fatigue (HCF) onset in Francis runner reliability

    International Nuclear Information System (INIS)

    Gagnon, M; Tahan, S A; Bocher, P; Thibault, D

    2012-01-01

    High Cycle Fatigue (HCF) plays an important role in Francis runner reliability. This paper presents a model in which reliability is defined as the probability of not exceeding a threshold above which HCF contributes to crack propagation. In the context of combined Low Cycle Fatigue (LCF) and HCF loading, the Kitagawa diagram is used as the limit state threshold for reliability. The reliability problem is solved using First-Order Reliability Methods (FORM). A case study is proposed using in situ measured strains and operational data. All the parameters of the reliability problem are based either on observed data or on typical design specifications. From the results obtained, we observed that the uncertainties around the defect size and the HCF stress range play an important role in reliability. At the same time, we observed that the expected values of the LCF stress range and the number of LCF cycles have a significant influence on life assessment, but the uncertainty around these values could be neglected in the reliability assessment.
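
    The FORM calculation mentioned above can be illustrated in its simplest form: writing the limit state as g = R − S, with R the Kitagawa-derived HCF threshold and S the HCF stress range, and approximating both as independent normals, the reliability index is beta = (mu_R − mu_S)/sqrt(sigma_R^2 + sigma_S^2) and the reliability is Phi(beta). The numbers below are hypothetical; the paper's actual model also involves defect size and LCF loading:

    ```python
    from math import erf, sqrt

    def form_reliability(mu_r, sigma_r, mu_s, sigma_s):
        """First-order reliability for g = R - S with independent normal R, S.

        beta is the Cornell reliability index; Phi(beta) is the probability
        that the HCF stress range S stays below the threshold R.
        """
        beta = (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)
        return 0.5 * (1.0 + erf(beta / sqrt(2.0)))

    # Hypothetical values (MPa): Kitagawa-derived threshold 60 +/- 8,
    # measured HCF stress range 30 +/- 6 -> beta = 30/10 = 3.
    print(round(form_reliability(60, 8, 30, 6), 4))  # -> 0.9987
    ```

    For general (non-normal, correlated) variables, FORM instead searches for the most probable failure point in standard normal space; this closed form is the special case used here only to make the quantities concrete.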

  17. A First-Level Muon Trigger Based on the ATLAS Muon Drift Tube Chambers With High Momentum Resolution for LHC Phase II

    CERN Document Server

    Richter, R; The ATLAS collaboration; Ott, S; Kortner, O; Fras, M; Gabrielyan, V; Danielyan, V; Fink, D; Nowak, S; Schwegler, P; Abovyan, S

    2014-01-01

    The Level-1 (L1) trigger for muons with high transverse momentum (pT) in ATLAS is based on chambers with excellent time resolution, able to identify muons coming from a particular beam crossing. These trigger chambers also provide a fast pT-measurement of the muons, the accuracy of the measurement being limited by the moderate spatial resolution of the chambers along the deflecting direction of the magnetic field (eta-coordinate). The higher luminosity foreseen for Phase-II puts stringent limits on the L1 trigger rates, and a way to control these rates would be to improve the spatial resolution of the triggering system, drastically sharpening the turn-on curve of the L1 trigger. To do this, the precision tracking chambers (MDT) can be used in the L1 trigger, provided the corresponding trigger latency is increased as foreseen. The trigger rate reduction is accomplished by strongly decreasing the rate of triggers from muons with pT lower than a predefined threshold (typically 20 GeV), which would otherwise trig...

  18. The design and performance of the ATLAS Inner Detector trigger in high pileup collisions at 13 TeV at the Large Hadron Collider

    CERN Document Server

    Sotiropoulou, Calliope Louisa; The ATLAS collaboration

    2017-01-01

    The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the high level trigger (HLT) processor farm for 13 TeV LHC collision data with high pileup are discussed. The HLT ID tracking is a vital component in all physics signatures in the ATLAS Trigger for the precise selection of the rare or interesting events necessary for physics analysis without overwhelming the offline data storage in terms of both size and rate. To cope with the high expected interaction rates in the 13 TeV LHC collisions the ID trigger was redesigned during the 2013-15 long shutdown. The performance of the ID Trigger in the 2016 data from 13 TeV LHC collisions has been excellent and exceeded expectations as the interaction multiplicity increased throughout the year. The detailed efficiencies and resolutions of the trigger in a wide range of physics signatures are presented, to demonstrate how the trigger responded well under the extreme pileup conditions. The performance of the ID Trigger algorithms...

  19. The design and performance of the ATLAS Inner Detector trigger in high pileup collisions at 13 TeV at the Large Hadron Collider

    CERN Document Server

    Kilby, Callum; The ATLAS collaboration

    2017-01-01

    The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the high level trigger (HLT) processor farm for 13 TeV LHC collision data with high pileup are discussed. The HLT ID tracking is a vital component in all physics signatures in the ATLAS Trigger for the precise selection of the rare or interesting events necessary for physics analysis without overwhelming the offline data storage in terms of both size and rate. To cope with the high expected interaction rates in the 13 TeV LHC collisions the ID trigger was redesigned during the 2013-15 long shutdown. The performance of the ID Trigger in the 2016 data from 13 TeV LHC collisions has been excellent and exceeded expectations as the interaction multiplicity increased throughout the year. The detailed efficiencies and resolutions of the trigger in a wide range of physics signatures are presented, to demonstrate how the trigger responded well under the extreme pileup conditions. The performance of the ID Trigger algorithm...

  20. Implementation of a level 1 trigger system using high speed serial (VXS) techniques for the 12GeV high luminosity experimental programs at Thomas Jefferson National Accelerator Facility

    International Nuclear Information System (INIS)

    Cuevas, C.; Raydo, B.; Dong, H.; Gupta, A.; Barbosa, F.J.; Wilson, J.; Taylor, W.M.; Jastrzembski, E.; Abbott, D.

    2009-01-01

    We will demonstrate a hardware and firmware solution for a complete, fully pipelined, multi-crate trigger system that takes advantage of the elegant high-speed VXS serial extensions for VME. This trigger system includes three sections, starting with the front end Crate Trigger Processor (CTP), a global Sub-System Processor (SSP) and a Trigger Supervisor that manages the timing, synchronization and front end event readout. Within a front end crate, trigger information is gathered from each 16-channel, 12-bit Flash ADC module at 4 ns intervals via the VXS backplane, to a Crate Trigger Processor (CTP). Each Crate Trigger Processor receives these 500 MB/s VXS links from the 16 FADC-250 modules, aligns the skewed data inherent to the Aurora protocol, and performs real-time crate-level trigger algorithms. The algorithm results are encoded using a Reed-Solomon technique and this Level 1 trigger data is transmitted to the SSP over a multi-fiber link. The multi-fiber link achieves an aggregate trigger data transfer rate to the global trigger of 8 Gb/s. The SSP receives and decodes the Reed-Solomon error-correcting transmission from each crate, aligns the data, and performs the global-level trigger algorithms. The entire trigger system is synchronous and operates at 250 MHz, with the Trigger Supervisor managing not only the front end event readout, but also the distribution of the critical timing clocks, synchronization signals, and global trigger signals to each front end readout crate. These signals are distributed to the front end crates on a separate fiber link, and each crate is synchronized using a unique encoding scheme to guarantee that each front end crate is synchronous with a fixed latency, independent of the distance between crates. The overall trigger signal latency is <3 µs, and the proposed 12 GeV experiments at Jefferson Lab require up to a 200 kHz Level 1 trigger rate.

  1. Improving patient safety: patient-focused, high-reliability team training.

    Science.gov (United States)

    McKeon, Leslie M; Cunningham, Patricia D; Oswaks, Jill S Detty

    2009-01-01

    Healthcare systems are recognizing "human factor" flaws that result in adverse outcomes. Nurses work around system failures, although increasing healthcare complexity makes this harder to do without risk of error. Aviation and military organizations achieve ultrasafe outcomes through high-reliability practice. We describe how reliability principles were used to teach nurses to improve patient safety at the front line of care. Outcomes include safety-oriented, teamwork communication competency; reflections on safety culture and clinical leadership are discussed.

  2. Reliability of a Computerized Neurocognitive Test in Baseline Concussion Testing of High School Athletes.

    Science.gov (United States)

    MacDonald, James; Duerson, Drew

    2015-07-01

    Baseline assessments using computerized neurocognitive tests are frequently used in the management of sport-related concussions. Such testing is often done on an annual basis in a community setting. Reliability is a fundamental test characteristic that should be established for such tests. Our study examined the test-retest reliability of a computerized neurocognitive test in high school athletes over 1 year. Repeated measures design. Two American high schools. High school athletes (N = 117) participating in American football or soccer during the 2011-2012 and 2012-2013 academic years. All study participants completed 2 baseline computerized neurocognitive tests taken 1 year apart at their respective schools. The test measures performance on 4 cognitive tasks: identification speed (Attention), detection speed (Processing Speed), one card learning accuracy (Learning), and one back speed (Working Memory). Reliability was assessed by measuring the intraclass correlation coefficient (ICC) between the repeated measures of the 4 cognitive tasks. Pearson and Spearman correlation coefficients were calculated as a secondary outcome measure. The measure for identification speed performed best (ICC = 0.672; 95% confidence interval, 0.559-0.760) and the measure for one card learning accuracy performed worst (ICC = 0.401; 95% confidence interval, 0.237-0.542). All tests had marginal or low reliability. In a population of high school athletes, computerized neurocognitive testing performed in a community setting demonstrated low to marginal test-retest reliability on baseline assessments 1 year apart. Further investigation should focus on (1) improving the reliability of individual tasks tested, (2) controlling for external factors that might affect test performance, and (3) identifying the ideal time interval to repeat baseline testing in high school athletes. 
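The intraclass correlation coefficients reported in this record can be illustrated with a minimal sketch. This is a hypothetical helper, not the study's analysis code: it computes the one-way random-effects ICC(1,1) for two test sessions per subject, a simpler variant than the two-way models often used for test-retest designs.

```python
def icc_oneway(pairs):
    """One-way random-effects ICC(1,1) for two sessions per subject.

    `pairs` is a list of (session1, session2) scores. Illustrative only;
    the study's actual ICC model may differ.
    """
    n, k = len(pairs), 2
    grand = sum(a + b for a, b in pairs) / (n * k)
    means = [(a + b) / k for a, b in pairs]
    # Between-subject and within-subject mean squares from one-way ANOVA.
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((a - m) ** 2 + (b - m) ** 2
              for (a, b), m in zip(pairs, means)) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement between sessions yields an ICC of 1; values around 0.4-0.7, as in the abstract, indicate low to marginal test-retest reliability.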

  3. Schmitt-Trigger-based Recycling Sensor and Robust and High-Quality PUFs for Counterfeit IC Detection

    OpenAIRE

    Lin, Cheng-Wei; Jang, Jae-Won; Ghosh, Swaroop

    2015-01-01

    We propose a Schmitt-Trigger (ST) based recycling sensor that is tailored to amplify aging mechanisms and detect fine-grained recycling (minutes to seconds). We exploit the susceptibility of the ST to process variations to realize a high-quality arbiter PUF. Conventional SRAM PUFs suffer from environmental fluctuation-induced bit flipping. We propose an 8T SRAM PUF with a back-to-back PMOS latch to improve robustness by 4X. We also propose a low-power 7T SRAM with embedded Magnetic Tunnel Junction (...

  4. Dynamic functional coupling of high resolution EEG potentials related to unilateral internally triggered one-digit movements.

    Science.gov (United States)

    Urbano, A; Babiloni, C; Onorati, P; Babiloni, F

    1998-06-01

    Between-electrode cross-covariances of delta (0-3 Hz)- and theta (4-7 Hz)-filtered high resolution EEG potentials related to preparation, initiation, and execution of human unilateral internally triggered one-digit movements were computed to investigate statistical dynamic coupling between these potentials. Significant (P planning, starting, and performance of unilateral movement. The involvement of these cortical areas is supported by the observation that averaged spatially enhanced delta- and theta-bandpassed potentials were computed from the scalp regions where task-related electrical activation of primary sensorimotor areas and supplementary motor area was roughly represented.

  5. Reliability of an Automated High-Resolution Manometry Analysis Program across Expert Users, Novice Users, and Speech-Language Pathologists

    Science.gov (United States)

    Jones, Corinne A.; Hoffman, Matthew R.; Geng, Zhixian; Abdelhalim, Suzan M.; Jiang, Jack J.; McCulloch, Timothy M.

    2014-01-01

    Purpose: The purpose of this study was to investigate inter- and intrarater reliability among expert users, novice users, and speech-language pathologists with a semiautomated high-resolution manometry analysis program. We hypothesized that all users would have high intrarater reliability and high interrater reliability. Method: Three expert…

  6. The design and performance of the ATLAS Inner Detector trigger in high pileup collisions at 13 TeV at the Large Hadron Collider

    CERN Document Server

    Grandi, Mario; The ATLAS collaboration

    2018-01-01

    The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the High Level Trigger (HLT) processor farm for 13 TeV LHC collision data with high pileup are discussed. The HLT ID tracking is a vital component in all physics signatures in the ATLAS Trigger for the precise selection of the rare or interesting events necessary for physics analysis without overwhelming the offline data storage in terms of both size and rate. To cope with the high interaction rates expected in the 13 TeV LHC collisions the ID trigger was redesigned during the 2013-15 long shutdown. The performance of the ID Trigger in both the 2016 and 2017 data from 13 TeV LHC collisions has been excellent and exceeded expectations, even at the very high interaction multiplicities observed at the end of data taking in 2017. The detailed efficiencies and resolutions of the trigger in a wide range of physics signatures are presented for the Run 2 data, illustrating the superb performance of the ID trigger algorith...

  7. Upgrade of the ATLAS detectors and trigger at the High Luminosity LHC: tracking and timing for pile-up suppression

    CERN Document Server

    Testa, Marianna; The ATLAS collaboration

    2018-01-01

    The High Luminosity Large Hadron Collider is expected to start data-taking in 2026 and to provide an integrated luminosity of 3000 fb^-1, a factor 10 more data than will be collected by 2023. These high statistics will make it possible to perform precise measurements in the Higgs sector and improve searches for new physics at the TeV scale. The luminosity is expected to be 7.5 × 10^34 cm^-2 s^-1, corresponding to about 200 proton-proton pile-up interactions, which will increase the rates at each level of the trigger and degrade the reconstruction performance. To cope with such a harsh environment some sub-detectors of the ATLAS experiment will be upgraded or completely replaced and the Trigger-DAQ system will be upgraded. In this talk an overview of two new sub-detectors enabling powerful pile-up suppression, a new Inner Tracker and a proposed High Granularity Timing Detector, will be given, describing the two technologies, their performance, and their interplay. Emphasis will also be given to the possi...

  8. Upgrade of the ATLAS detectors and trigger at the High Luminosity LHC: tracking and timing for pile-up suppression

    CERN Document Server

    Testa, Marianna; The ATLAS collaboration

    2018-01-01

    The High Luminosity-Large Hadron Collider is expected to start data-taking in 2026 and to provide an integrated luminosity of 3000 fb^{-1}, giving a factor 10 more data than will be collected by 2023. This high statistics will make it possible to perform precise measurements in the Higgs sector and improve searches of new physics at the TeV scale. The luminosity is expected to be 7.5 × 10^{34} cm^{-2} s^{-1}, corresponding to about 200 proton-proton pile-up interactions, which will increase the rates at each level of the trigger and degrade the reconstruction performance. To cope with such a harsh environment some sub-detectors of the ATLAS experiment will be upgraded or completely substituted and the Trigger-DAQ system will be upgraded. In this talk an overview of two new sub-detectors enabling powerful pile-up suppression, a new Inner Tracker and a proposed High Granularity Timing Detector, will be given, describing the two technologies, their performance, and their interplay. Emphasis will also be giv...

  9. Trigger Finger

    Science.gov (United States)

    ... in a bent position. People whose work or hobbies require repetitive gripping actions are at higher risk ... developing trigger finger include: Repeated gripping. Occupations and hobbies that involve repetitive hand use and prolonged gripping ...

  10. [Employees in high-reliability organizations: systematic selection of personnel as a final criterion].

    Science.gov (United States)

    Oubaid, V; Anheuser, P

    2014-05-01

    Employees represent an important safety factor in high-reliability organizations. The combination of clear organizational structures, a nonpunitive safety culture, and psychological personnel selection guarantee a high level of safety. The cockpit personnel selection process of a major German airline is presented in order to demonstrate a possible transferability into medicine and urology.

  11. Trigger circuit

    International Nuclear Information System (INIS)

    Verity, P.R.; Chaplain, M.D.; Turner, G.D.J.

    1984-01-01

    A monostable trigger circuit comprises transistors TR2 and TR3 arranged with their collectors and bases interconnected. The collector of transistor TR2 is connected to the base of transistor TR3 via a capacitor C2, the main current path of a grounded-base transistor TR1, and resistive means R2, R3. The collector of transistor TR3 is connected to the base of transistor TR2 via resistive means R6, R7. In the stable state all the transistors are OFF, the capacitor C2 is charged, and the output is LOW. A positive pulse input to the base of TR2 switches it ON, which in turn lowers the voltage at points A and B and so switches TR1 ON so that C2 can discharge via R2, R3, which in turn switches TR3 ON, making the output HIGH. Thus all three transistors are latched ON. When C2 has discharged sufficiently, TR1 switches OFF, followed by TR3 (making the output LOW again) and TR2. The components C1, C3 and R4 serve to reduce noise, and the diode D1 is optional. (author)
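The output pulse duration of a monostable stage like this is set by the discharge of C2 through R2 and R3. As an illustrative sketch only (the component values and the assumption of a simple exponential discharge to a fixed threshold are mine, not taken from the patent record), the width of such a pulse is:

```python
import math

def pulse_width(r_ohms, c_farads, v_init, v_thresh):
    """Duration for an RC discharge to fall from v_init to v_thresh.

    v(t) = v_init * exp(-t / RC)  =>  t = RC * ln(v_init / v_thresh).
    Hypothetical model of the C2/R2-R3 discharge, not the patented circuit.
    """
    return r_ohms * c_farads * math.log(v_init / v_thresh)

# Example: 10 kΩ discharge path, 100 nF, threshold at half the initial
# voltage gives t = RC * ln 2, i.e. roughly 0.69 ms.
```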

  12. Management systems for high reliability organizations. Integration and effectiveness; Managementsysteme fuer Hochzuverlaessigkeitsorganisationen. Integration und Wirksamkeit

    Energy Technology Data Exchange (ETDEWEB)

    Mayer, Michael

    2015-03-09

    The scope of the thesis is the development of a method for the improvement of efficient integrated management systems for high reliability organizations (HRO). A comprehensive analysis of severe accident prevention is performed. Severe accident management, mitigation measures and business continuity management are not included. High reliability organizations are complex and potentially dynamic organization forms that can be inherently dangerous, like nuclear power plants, offshore platforms, chemical facilities, large ships or large aircraft. A recursive generic management system model (RGM) was developed based on the following factors: systemic and cybernetic aspects; integration of different management fields; high decision quality; integration of efficient methods of safety and risk analysis; integration of human reliability aspects; effectiveness evaluation and improvement.

  13. Improvement of the reliability and efficiency of the NPP operation by training high-skilled personnel

    International Nuclear Information System (INIS)

    Korolev, V.V.; Sereda, G.A.

    1981-01-01

    Nuclear power at the modern stage of development is characterized by the utilization of large commercial power reactors, whose high technical and economic parameters can be achieved only with high equipment reliability and highly qualified personnel. A special educational institution, the Institute of Nuclear Engineering, has been organized to train highly qualified operating personnel for modern NPPs. Some problems arising in this connection are discussed [ru]

  14. Purinergic signaling triggers endfoot high-amplitude Ca2+ signals and causes inversion of neurovascular coupling after subarachnoid hemorrhage.

    Science.gov (United States)

    Pappas, Anthony C; Koide, Masayo; Wellman, George C

    2016-11-01

    Neurovascular coupling supports brain metabolism by matching focal increases in neuronal activity with local arteriolar dilation. Previously, we demonstrated that an emergence of spontaneous endfoot high-amplitude Ca2+ signals (eHACSs) caused a pathologic shift in neurovascular coupling from vasodilation to vasoconstriction in brain slices obtained from subarachnoid hemorrhage model animals. Extracellular purine nucleotides (e.g., ATP) can trigger astrocyte Ca2+ oscillations and may be elevated following subarachnoid hemorrhage. Here, the role of purinergic signaling in subarachnoid hemorrhage-induced eHACSs and inversion of neurovascular coupling was examined by imaging parenchymal arteriolar diameter and astrocyte Ca2+ signals in rat brain slices using two-photon fluorescent and infrared-differential interference contrast microscopy. We report that broad-spectrum inhibition of purinergic (P2) receptors using suramin blocked eHACSs and restored vasodilatory neurovascular coupling after subarachnoid hemorrhage. Importantly, eHACSs were also abolished using a cocktail of inhibitors targeting Gq-coupled P2Y receptors. Further, activation of P2Y receptors in brain slices from un-operated animals triggered high-amplitude Ca2+ events resembling eHACSs and disrupted neurovascular coupling. Neither tetrodotoxin nor bafilomycin A1 affected eHACSs, suggesting that purine nucleotides are not released by ongoing neurotransmission and/or vesicular release after subarachnoid hemorrhage. These results indicate that purinergic signaling via P2Y receptors contributes to subarachnoid hemorrhage-induced eHACSs and inversion of neurovascular coupling. © The Author(s) 2016.

  15. Implosion lessons from national security, high reliability spacecraft, electronics, and the forces which changed them

    CERN Document Server

    Temple, L Parker

    2012-01-01

    Implosion is a focused study of the history and uses of high-reliability, solid-state electronics, military standards, and space systems that support our national security and defense. This book is unique in combining the interdependent evolution of and interrelationships among military standards, solid-state electronics, and very high-reliability space systems. Starting with a brief description of the physics that enabled the development of the first transistor, Implosion covers the need for standardizing military electronics, which began during World War II and continu

  16. Development of high-reliability control system for nuclear power plants

    International Nuclear Information System (INIS)

    Asami, K.; Yanai, K.; Hirose, H.; Ito, T.

    1983-01-01

    In Japan, many nuclear power generating plants are in operation and under construction. There is a general awareness of the problems in connection with nuclear power generation and strong emphasis is put on achieving highly reliable operation of nuclear power plants. Hitachi has developed a new high-reliability control system. NURECS-3000 (NUclear Power Plant High-REliability Control System), which is applied to the main control systems, such as the reactor feedwater control system, the reactor recirculation control system and the main turbine control system. The NURECS-3000 system was designed taking into account the fact that there will be failures, but the aim is for the system to continue to function correctly; it is therefore a fault-tolerant system. It has redundant components which can be completely isolated from each other in order to prevent fault propagation. The system has a hierarchical configuration, with a main controller, consisting of a triplex microcomputer system, and sub-loop controllers. Special care was taken to ensure the independence of these subsystems. Since most of the redundant system failures are caused by common-mode failures and the reliability of redundant systems depends on the reliability of the common-mode parts, the aim was to minimize these parts. (author)
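The triplex main controller described in this record is a classic triple modular redundancy (TMR) arrangement. As a hedged illustration (this is not Hitachi's NURECS-3000 implementation, just the standard technique), a 2-of-3 majority voter and the textbook TMR reliability formula can be sketched as:

```python
def tmr_vote(a, b, c):
    """Majority (2-of-3) vote over three redundant channel outputs.

    Returns (voted_value, fault_detected); raises if all three disagree.
    Illustrative sketch of the generic TMR technique only.
    """
    if a == b or a == c:
        return a, not (a == b == c)
    if b == c:
        return b, True
    raise ValueError("three-way disagreement: no majority")

def tmr_reliability(r):
    # Probability that at least 2 of 3 independent channels (each with
    # reliability r) still work: 3r^2(1-r) + r^3 = 3r^2 - 2r^3.
    return 3 * r**2 - 2 * r**3
```

Note that this simple model assumes independent channel failures; as the abstract points out, real redundant systems are dominated by common-mode failures, which is why the design minimizes shared parts.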

  17. Receiver system for radio observation of high-energy cosmic ray air showers and its behaviour in self trigger mode

    International Nuclear Information System (INIS)

    Kroemer, Oliver

    2008-04-01

    The observation of high-energy cosmic rays is carried out by indirect measurements. Thereby the primary cosmic particle enters the earth's atmosphere and generates a cosmic ray air shower by interactions with the air molecules. The secondary particles arriving at ground level are detected with particle detector arrays. The fluorescence light from the excited nitrogen molecules along the shower axis is observed with reflector telescopes in the near-ultraviolet range. In addition to these well-established detection methods, the radio observation of the geosynchrotron emission from cosmic ray air showers is currently being investigated as a new observation method. Geosynchrotron emission is generated by the acceleration of the relativistic electron-positron pairs contained in the air shower by Lorentz forces in the earth's magnetic field. At ground level this causes a single pulse of the electric field strength with a continuous frequency spectrum ranging from a few MHz to above 100 MHz. In this work, a suitable receiver concept is developed based on the signal properties of the geosynchrotron emission and the analysis of the superposed noise and radio frequency interference. As the required receiver system was not commercially available, it was designed in the framework of this work and realised as a system including the antenna, the receiver electronics and suitable data acquisition equipment. In this concept, considerations for a large-scale radio detector array have already been taken into account, such as low power consumption to enable a solar power supply, and cost effectiveness. The result is a calibrated, multi-channel, digital wideband receiver for the complete range from 40 MHz to 80 MHz. Its inherent noise and RFI suppression essentially results from the antenna directional characteristic and frequency selectivity, and allows effective radio observation of cosmic ray air showers also in populated environments. Several units of this receiver station have been deployed.

  18. Performance of a First-Level Muon Trigger with High Momentum Resolution Based on the ATLAS MDT Chambers for HL-LHC

    CERN Document Server

    Gadow, P.; Kortner, S.; Kroha, H.; Müller, F.; Richter, R.

    2016-01-01

    Highly selective first-level triggers are essential to exploit the full physics potential of the ATLAS experiment at High-Luminosity LHC (HL-LHC). The concept for a new muon trigger stage using the precision monitored drift tube (MDT) chambers to significantly improve the selectivity of the first-level muon trigger is presented. It is based on fast track reconstruction in all three layers of the existing MDT chambers, made possible by an extension of the first-level trigger latency to six microseconds and a new MDT read-out electronics required for the higher overall trigger rates at the HL-LHC. Data from $pp$-collisions at $\sqrt{s} = 8\,\mathrm{TeV}$ is used to study the minimal muon transverse momentum resolution that can be obtained using the MDT precision chambers, and to estimate the resolution and efficiency of the MDT-based trigger. A resolution of better than $4.1\%$ is found in all sectors under study. With this resolution, a first-level trigger with a threshold of $18\,\mathrm{GeV}$ becomes fully e...

  19. Custom high-reliability radiation-hard CMOS-LSI circuit design

    International Nuclear Information System (INIS)

    Barnard, W.J.

    1981-01-01

    Sandia has developed a custom CMOS-LSI design capability to provide high-reliability, radiation-hardened circuits. This capability relies on (1) proven design practices to enhance reliability, (2) use of well-characterized cells and logic modules, (3) computer-aided design tools to reduce design time and errors and to standardize design definition, and (4) close working relationships with the system designer and technology fabrication personnel. Trade-offs are made during the design between circuit complexity/performance and technology/producibility so that high-reliability, radiation-hardened designs result. Sandia has developed and is maintaining a radiation-hardened bulk CMOS technology fabrication line for production of prototype and small-production-volume parts

  20. Design and testing of the high speed signal densely populated ATLAS calorimeter trigger board dedicate to jet identification

    CERN Document Server

    Vieira De Souza, Julio; The ATLAS collaboration

    2018-01-01

    The ATLAS experiment has planned a major upgrade in view of the enhanced luminosity of the beam delivered by the Large Hadron Collider (LHC) in 2021. As part of this, the trigger at Level-1 based on calorimeter data will be upgraded to exploit fine-granularity readout using a new system of Feature Extractors (three in total), each of which uses different physics objects for the trigger selection. This contribution focuses on the jet Feature EXtractor (jFEX) prototype. A data volume of up to 2 TB/s has to be processed to provide jet identification (including large-area jets) and measurements of global variables within a latency budget of a few hundred nanoseconds. Such requirements translate into the use of large Field Programmable Gate Arrays (FPGAs) with the largest number of Multi Gigabit Transceivers (MGTs) available on the market. The jFEX board prototype hosts four large FPGAs from the Xilinx Ultrascale family with 120 MGTs each, connected to 24 opto-electrical devices, resulting in a densely populated high speed si...

  1. Reliability-based design optimization via high order response surface method

    International Nuclear Information System (INIS)

    Li, Hong Shuang

    2013-01-01

    To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables connected with point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling based reliability sensitivity analysis method is employed to reduce the computational effort further when design variables are distributional parameters of input random variables. The proposed methodology is applied on two test problems to validate its accuracy and efficiency. The proposed methodology is more efficient than first order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.
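The Monte Carlo simulation baseline that the abstract compares against can be sketched for a toy limit state g = R - S with normal resistance and load. The distributions, sample size, and seed below are illustrative assumptions, not from the paper; the point is only that crude sampling of a failure probability is simple but expensive, which motivates response-surface approximations.

```python
import math
import random

def pf_monte_carlo(n=200_000, seed=42):
    """Crude Monte Carlo estimate of Pf = P(R - S < 0).

    Hypothetical limit state: resistance R ~ N(5, 1), load S ~ N(2, 1).
    """
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if rng.gauss(5.0, 1.0) - rng.gauss(2.0, 1.0) < 0.0)
    return fails / n

def pf_exact():
    # R - S is normal with mean 3 and std sqrt(2), so the reliability
    # index is beta = 3/sqrt(2) and Pf = Phi(-beta).
    beta = 3.0 / math.sqrt(2.0)
    return 0.5 * math.erfc(beta / math.sqrt(2.0))
```

For rarer failure events (small Pf), the sample count needed for a stable estimate grows rapidly, which is exactly where surrogate models such as the paper's high order response surface pay off.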

  2. High-mass star formation possibly triggered by cloud-cloud collision in the H II region RCW 34

    Science.gov (United States)

    Hayashi, Katsuhiro; Sano, Hidetoshi; Enokiya, Rei; Torii, Kazufumi; Hattori, Yusuke; Kohno, Mikito; Fujita, Shinji; Nishimura, Atsushi; Ohama, Akio; Yamamoto, Hiroaki; Tachihara, Kengo; Hasegawa, Yutaka; Kimura, Kimihiro; Ogawa, Hideo; Fukui, Yasuo

    2018-05-01

    We report on the possibility that the high-mass star located in the H II region RCW 34 was formed by a triggering induced by a collision of molecular clouds. Molecular gas distributions of the 12CO and 13CO J = 2-1 and 12CO J = 3-2 lines in the direction of RCW 34 were measured using the NANTEN2 and ASTE telescopes. We found two clouds with velocity ranges of 0-10 km s-1 and 10-14 km s-1. Whereas the former cloud is as massive as ˜1.4 × 104 M⊙ and has a morphology similar to the ring-like structure observed in the infrared wavelengths, the latter cloud, with a mass of ˜600 M⊙, which has not been recognized by previous observations, is distributed to just cover the bubble enclosed by the other cloud. The high-mass star with a spectral type of O8.5V is located near the boundary of the two clouds. The line intensity ratio of 12CO J = 3-2/J = 2-1 yields high values (≳1.0), suggesting that these clouds are associated with the massive star. We also confirm that the obtained position-velocity diagram shows a similar distribution to that derived by a numerical simulation of the supersonic collision of two clouds. Using the relative velocity between the two clouds (˜5 km s-1), the collisional time scale is estimated to be ˜0.2 Myr with the assumption of a distance of 2.5 kpc. These results suggest that the high-mass star in RCW 34 was formed rapidly within a time scale of ˜0.2 Myr via a triggering of a cloud-cloud collision.
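The quoted collisional time scale follows from a simple unit conversion: a path length of roughly 1 pc (an assumption implied by the quoted velocity and time scale, not stated explicitly in the abstract) traversed at the ~5 km/s relative velocity gives ~0.2 Myr.

```python
PC_IN_KM = 3.0857e13   # kilometres per parsec
MYR_IN_S = 3.1557e13   # seconds per megayear

def crossing_time_myr(length_pc, v_km_s):
    """Time (in Myr) to traverse length_pc at v_km_s.

    Illustrative arithmetic check of the abstract's ~0.2 Myr estimate,
    assuming a ~1 pc interaction length.
    """
    return (length_pc * PC_IN_KM / v_km_s) / MYR_IN_S
```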

  3. A high-resolution TDC-based board for a fully digital trigger and data acquisition system in the NA62 experiment at CERN

    CERN Document Server

    Pedreschi, Elena; Angelucci, Bruno; Avanzini, Carlo; Galeotti, Stefano; Lamanna, Gianluca; Magazzù, Guido; Pinzino, Jacopo; Piandani, Roberto; Sozzi, Marco; Spinella, Franco; Venditti, Stefano

    2015-01-01

    A Time to Digital Converter (TDC) based system, to be used for most sub-detectors in the high-flux rare-decay experiment NA62 at CERN SPS, was built as part of the NA62 fully digital Trigger and Data AcQuisition system (TDAQ), in which the TDC Board (TDCB) and a general-purpose motherboard (TEL62) will play a fundamental role. While TDCBs, housing four High Performance Time to Digital Converters (HPTDC), measure hit times from sub-detectors, the motherboard processes and stores them in a buffer, produces trigger primitives from different detectors and extracts only data related to the lowest trigger level decision, once this is taken on the basis of the trigger primitives themselves. The features of the TDCB board developed by the Pisa NA62 group are extensively discussed and performance data is presented in order to show its compliance with the experiment requirements.

  4. Assessing high reliability practices in wildland fire management: an exploration and benchmarking of organizational culture

    Science.gov (United States)

    Anne E. Black; Brooke Baldauf. McBride

    2013-01-01

    In an effort to improve organizational outcomes, including safety, in wildland fire management, researchers and practitioners have turned to a domain of research on organizational performance known as High Reliability Organizing (HRO). The HRO paradigm emerged in the late 1980s in an effort to identify commonalities among organizations that function under hazardous...

  5. Utilizing leadership to achieve high reliability in the delivery of perinatal care

    Directory of Open Access Journals (Sweden)

    Parrotta C

    2012-11-01

    Full Text Available. Carmen Parrotta,1 William Riley,1 Les Meredith2; 1School of Public Health, University of Minnesota, Minneapolis, MN; 2Premier Insurance Management Services Inc, Charlotte, NC, USA. Abstract: Highly reliable care requires standardization of clinical practices and is a prerequisite for patient safety. However, standardization in complex hospital settings is extremely difficult to attain, and health care leaders are challenged to create care delivery processes that ensure patient safety. Moreover, once high reliability is achieved in a hospital unit, it must be maintained to avoid process deterioration. This case study examines an intervention to implement care bundles (a collection of evidence-based practices) in four hospitals to achieve standardized care in perinatal units. The results show different patterns in the rate and magnitude of change within the hospitals to achieve high reliability. The study is part of a larger nationwide study of 16 hospitals to improve perinatal safety. Based on the findings, we discuss the role of leadership in implementing and sustaining high reliability to ensure freedom from unintended injury. Keywords: care bundles, evidence-based practice, standardized care, process improvement

  6. System principles, mathematical models and methods to ensure high reliability of safety systems

    Science.gov (United States)

    Zaslavskyi, V.

    2017-04-01

    Modern safety and security systems are composed of a large number of various components designed for detection, localization, tracking, collection, and processing of information from systems of monitoring, telemetry, control, etc. They are required to be highly reliable in order to correctly perform data aggregation, processing, and analysis for subsequent decision-making support. During the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure high reliability of signal detection, noise isolation, and erroneous command reduction. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the types of components and various constraints on resources, should be considered. Various types of components perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task performance and eliminates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used for solving problems of optimal redundancy on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
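The type-variety principle can be illustrated with a toy redundancy-allocation search. The component catalogue below is entirely hypothetical, and the brute-force search stands in for the paper's two-level discrete optimization: it finds the least-cost mix of diverse component types whose parallel combination meets a reliability target.

```python
from itertools import combinations_with_replacement

# Hypothetical component catalogue: type -> (reliability, cost).
TYPES = {"A": (0.90, 10.0), "B": (0.85, 6.0)}

def parallel_reliability(rels):
    """Reliability of redundant components: 1 - product of failure probs."""
    q = 1.0
    for r in rels:
        q *= 1.0 - r
    return 1.0 - q

def cheapest_redundancy(target, max_units=4):
    """Brute-force the least-cost multiset of types meeting `target`."""
    best = None
    for k in range(1, max_units + 1):
        for combo in combinations_with_replacement(sorted(TYPES), k):
            rel = parallel_reliability([TYPES[t][0] for t in combo])
            cost = sum(TYPES[t][1] for t in combo)
            if rel >= target and (best is None or cost < best[0]):
                best = (cost, combo, rel)
    return best
```

With this toy catalogue, three cheap type-B units beat two expensive type-A units for a 0.99 target; a real formulation would also model common-cause coupling between identical types, which is the motivation for mixing types at all.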

  7. Accelerated life testing and reliability of high K multilayer ceramic capacitors

    Science.gov (United States)

    Minford, W. J.

    1981-01-01

    The reliability of one lot of high K multilayer ceramic capacitors was evaluated using accelerated life testing. The degradation in insulation resistance was characterized as a function of voltage and temperature. The times to failure at a given voltage-temperature stress conformed to a lognormal distribution with a standard deviation of approximately 0.5.
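The lognormal characterisation described above can be sketched numerically: if times to failure are lognormal, their logarithms are normal, so the scale and shape parameters follow from the mean and standard deviation of the log-sample. The data below are simulated with assumed parameters, not the capacitor lot's measurements.

```python
import math
import random
import statistics

# Simulated times to failure (hours) from an assumed lognormal distribution.
random.seed(42)
mu_true, sigma_true = math.log(1000.0), 0.5   # hypothetical scale/shape
failures = [random.lognormvariate(mu_true, sigma_true) for _ in range(500)]

# Recover the lognormal parameters by taking logs, as in life-data analysis.
logs = [math.log(t) for t in failures]
mu_hat = statistics.fmean(logs)
sigma_hat = statistics.stdev(logs)

# exp(mu) is the median time to failure; sigma is the lognormal "shape".
print(f"median t50 ~ {math.exp(mu_hat):.0f} h, sigma ~ {sigma_hat:.2f}")
```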

  8. To the problem of reliability of high-voltage accelerators for industrial purposes

    International Nuclear Information System (INIS)

    Al'bertinskij, B.I.; Svin'in, M.P.; Tsepakin, S.G.

    1979-01-01

    Statistical data characterizing the reliability of ELECTRON and AVRORA-2 type accelerators are presented. The mean time to failure of the main accelerator units was used as the reliability index. The analysis of accelerator failures allowed a number of conclusions to be drawn. The high failure rate is connected with inadequate training of the servicing personnel and a natural period of equipment adjustment. The mathematical analysis of the failure rate showed that the main responsibility for the insufficiently high reliability rests with the selenium diodes employed in the high-voltage power supply. Substitution of selenium diodes by silicon ones increases the time between failures. It is shown that accumulation and processing of operational statistical data will permit more accurate prediction of the reliability of produced high-voltage accelerators, make it possible to cope with the problems of planning optimal, in time, preventive inspections and repair, and to select optimal safety factors and test procedures.

  9. Inter- and intrarater reliability of the Chicago Classification in pediatric high-resolution esophageal manometry recordings

    NARCIS (Netherlands)

    Singendonk, M. M. J.; Smits, M. J.; Heijting, I. E.; van Wijk, M. P.; Nurko, S.; Rosen, R.; Weijenborg, P. W.; Abu-Assi, R.; Hoekman, D. R.; Kuizenga-Wessel, S.; Seiboth, G.; Benninga, M. A.; Omari, T. I.; Kritas, S.

    2015-01-01

    The Chicago Classification (CC) facilitates interpretation of high-resolution manometry (HRM) recordings. The applicability of this adult-based algorithm to the pediatric population is unknown. We therefore assessed the intra- and interrater reliability of software-based CC diagnosis in a pediatric cohort.

  10. Standard high-reliability integrated circuit logic packaging. [for deep space tracking stations

    Science.gov (United States)

    Slaughter, D. W.

    1977-01-01

    A family of standard, high-reliability hardware used for packaging digital integrated circuits is described. The design transition from early prototypes to production hardware is covered and future plans are discussed. Interconnections techniques are described as well as connectors and related hardware available at both the microcircuit packaging and main-frame level. General applications information is also provided.

  11. Technology Improvement for the High Reliability LM-2F Launch Vehicle

    Institute of Scientific and Technical Information of China (English)

    QIN Tong; RONG Yi; ZHENG Liwei; ZHANG Zhi

    2017-01-01

    The Long March 2F (LM-2F) launch vehicle, the only launch vehicle designed for manned space flight in China, successfully launched the Tiangong 2 space laboratory and the Shenzhou 11 manned spaceship into orbit in 2016. This study introduces the technological improvements made to enhance the reliability of the LM-2F launch vehicle in the aspects of general technology, the control system, manufacture, and the ground support system. The LM-2F launch vehicle will continue to contribute to the Chinese Space Station Project with its high reliability and 100% success rate.

  12. Patient safety in anesthesia: learning from the culture of high-reliability organizations.

    Science.gov (United States)

    Wright, Suzanne M

    2015-03-01

    There has been an increased awareness of and interest in patient safety and improved outcomes, as well as a growing body of evidence substantiating medical error as a leading cause of death and injury in the United States. According to The Joint Commission, US hospitals demonstrate improvements in health care quality and patient safety. Although this progress is encouraging, much room for improvement remains. High-reliability organizations, industries that deliver reliable performances in the face of complex working environments, can serve as models of safety for our health care system until plausible explanations for patient harm are better understood. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Designing high availability systems DFSS and classical reliability techniques with practical real life examples

    CERN Document Server

    Taylor, Zachary

    2014-01-01

    A practical, step-by-step guide to designing world-class, high-availability systems using both classical and DFSS reliability techniques. Whether designing telecom, aerospace, automotive, medical, financial, or public safety systems, every engineer aims for the utmost reliability and availability in the systems he or she designs. But between the dream of world-class performance and reality falls the shadow of complexities that can bedevil even the most rigorous design process. While there is an array of robust predictive engineering tools, there has been no single-source guide to understan...

  14. A criterion of the performance of thermometric systems of high metrological reliability

    International Nuclear Information System (INIS)

    Sal'nikov, N.L.; Filimonov, E.V.

    1995-01-01

    Monitoring temperature regimes is an important part of ensuring the operational safety of a nuclear power plant. Therefore, high standards are imposed upon the reliability of the primary information on the heat field of the object obtained from different sensors, and it is urgent to develop methods of evaluating the metrological reliability of these sensors. The main sources of thermometric information at nuclear power plants are contact temperature sensors, the most widely used of these being thermoelectric converters (TEC) and thermal resistance converters (TRC)

  15. High-reliability 4π-scan leakage X-ray dosimeter

    Energy Technology Data Exchange (ETDEWEB)

    Kaneko, T; Iida, H; Yoshida, T; Sugimoto, H [Tokyo Shibaura Electric Co. Ltd., Kawasaki, Kanagawa (Japan). Tamagawa Works

    1978-04-01

    A world-wide movement is growing for the protection of living bodies against leakage radiations. In Japan, detailed regulations have been established for the enforcement of the law in regard to this problem. The substances of the measurement provided in the regulations are extremely diversified, much affecting the reliability and the economic efficiency of the equipment. Now a new 4π-scan X-ray dosimeter with high reliability has been developed and proved to effect qualitative improvement of measurement as well as elevation of productivity.

  16. Instrument reliability for high-level nuclear-waste-repository applications

    International Nuclear Information System (INIS)

    Rogue, F.; Binnall, E.P.; Armantrout, G.A.

    1983-01-01

    Reliable instrumentation will be needed to evaluate the characteristics of proposed high-level nuclear-waste-repository sites and to monitor the performance of selected sites during the operational period and into repository closure. A study has been done to assess the reliability of instruments used in Department of Energy (DOE) waste-repository-related experiments and in other similar geological applications. The study included experiences with geotechnical, hydrological, geochemical, environmental, and radiological instrumentation and associated data acquisition equipment. Though this paper includes some findings on the reliability of instruments in each of these categories, the emphasis is on experiences with geotechnical instrumentation in hostile repository-type environments. We review the failure modes, rates, and mechanisms, along with manufacturers' modifications and design changes to enhance and improve instrument performance, and include recommendations on areas where further improvements are needed

  17. Semiconductor laser engineering, reliability and diagnostics a practical approach to high power and single mode devices

    CERN Document Server

    Epperlein, Peter W

    2013-01-01

    This reference book provides a fully integrated novel approach to the development of high-power, single-transverse mode, edge-emitting diode lasers by addressing the complementary topics of device engineering, reliability engineering and device diagnostics in the same book, and thus closes the gap in the current book literature. Diode laser fundamentals are discussed, followed by an elaborate discussion of problem-oriented design guidelines and techniques, and by a systematic treatment of the origins of laser degradation and a thorough exploration of the engineering means to enhance the optical strength of the laser. Stability criteria of critical laser characteristics and key laser robustness factors are discussed along with clear design considerations in the context of reliability engineering approaches and models, and typical programs for reliability tests and laser product qualifications. Novel, advanced diagnostic methods are reviewed to discuss, for the first time in detail in book literature, performa...

  18. High level issues in reliability quantification of safety-critical software

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2012-01-01

    For the purpose of developing a consensus method for the reliability assessment of safety-critical digital instrumentation and control systems in nuclear power plants, several high level issues in reliability assessment of the safety-critical software based on Bayesian belief network modeling and statistical testing are discussed. Related to the Bayesian belief network modeling, the relation between the assessment approach and the sources of evidence, the relation between qualitative evidence and quantitative evidence, how to consider qualitative evidence, and the cause-consequence relation are discussed. Related to the statistical testing, the need of the consideration of context-specific software failure probabilities and the inability to perform a huge number of tests in the real world are discussed. The discussions in this paper are expected to provide a common basis for future discussions on the reliability assessment of safety-critical software. (author)

  19. Triggering Artefacts

    DEFF Research Database (Denmark)

    Mogensen, Preben Holst; Robinson, Mike

    1995-01-01

    and adapting them to specific situations need not be ad hoc. Triggering artefacts are a way of systematically challenging both designers' preunderstandings and the conservatism of work practice. Experiences from the Great Belt tunnel and bridge project are used to illustrate how triggering artefacts change...

  20. A mixed signal multi-chip module with high speed serial output links for the ATLAS Level-1 trigger

    CERN Document Server

    Pfeiffer, U

    2000-01-01

    We have built and tested a mixed-signal multi-chip module (MCM) to be used in the Level-1 Pre-Processor system for the Calorimeter Trigger of the ATLAS experiment at CERN. The MCM performs high-speed digital signal processing on four analogue input signals. Results are transmitted serially at a data rate of 800 MBd. Nine chips of different technologies are mounted on a four-layer Cu substrate. The ADCs and serialiser chips are the major consumers of electrical power on the MCM, which amounts to 9 W for all dies. Special cut-out areas are used to dissipate heat directly to the copper substrate. In this paper we report on design criteria, the chosen MCM technology for substrate and die mounting, experiences with MCM operation, and measurement results. (4 refs).

  1. TrigDB for improving the reliability of the epicenter locations by considering the neighborhood station's trigger and cutting out of outliers in operation of Earthquake Early Warning System.

    Science.gov (United States)

    Chi, H. C.; Park, J. H.; Lim, I. S.; Seong, Y. J.

    2016-12-01

    TrigDB was initially developed to discriminate teleseismic-origin false alarms in cases where unreasonably associated triggers produce mis-located epicenters. We have applied TrigDB to the current EEWS (Earthquake Early Warning System) since 2014. During the early testing stage of the EEWS from 2011, we adapted ElarmS from US Berkeley BSL to the Korean seismic network and applied it for more than 5 years. The real-time testing results of the EEWS in Korea showed that all events inside the seismic network with magnitudes greater than 3.0 were well detected. However, two events located at sea gave false location results with magnitudes over 4.0 due to the long-period and relatively high-amplitude signals related to teleseismic waves or regional deep sources. These teleseismic-relevant false events were caused by logical co-relation during the association procedure, and the corresponding geometric distribution of associated stations is crescent-shaped. Seismic stations are not deployed uniformly, so the expected bias ratio varies with the evaluated epicentral location. This ratio is calculated in advance and stored in a database, called TrigDB, for the discrimination of teleseismic-origin false alarms. We upgraded this method, so-called 'TrigDB back filling', which updates the location with supplementary association of stations, comparing trigger times between sandwiched stations that were not associated previously, based on predefined criteria such as travel time. We have also tested a module that rejects outlier trigger times by comparing statistical values (sigma) to the trigger times. The outlier cut-off criterion works somewhat slowly until the number of stations exceeds 8; however, the resulting locations are much improved.
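A minimal sketch of the sigma-based outlier cut described above, assuming a simple mean/standard-deviation rule and an 8-station threshold; the cut factor k and the exact rule are illustrative assumptions, not the operational EEWS values.

```python
import statistics

def reject_outlier_triggers(trigger_times, k=2.0, min_stations=8):
    """Drop station trigger times more than k sigma from the mean.

    The abstract notes the cut only works well once at least ~8 stations
    have triggered; below that threshold all picks are kept unchanged.
    """
    if len(trigger_times) < min_stations:
        return list(trigger_times)
    mean = statistics.fmean(trigger_times)
    sigma = statistics.stdev(trigger_times)
    return [t for t in trigger_times if abs(t - mean) <= k * sigma]

# Eight consistent local picks plus one late (e.g. teleseismic-contaminated) pick.
picks = [10.1, 10.3, 10.2, 10.4, 10.2, 10.3, 10.1, 10.2, 25.0]
print(reject_outlier_triggers(picks))
```

With fewer than eight stations the function returns the picks untouched, mirroring the paper's observation that the criterion is unreliable for small station counts.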

  2. Physics performances with the new ATLAS Level-1 Topological trigger in the LHC High-Luminosity Era

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00414333; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger system aims at reducing the 40 MHz proton collision event rate to a manageable event storage rate of 1 kHz, preserving events with valuable physics content. The Level-1 trigger is the first rate-reducing step in the ATLAS trigger system, with an output rate of 100 kHz and a decision latency of less than 2.5 microseconds. It is composed of the calorimeter trigger, the muon trigger, and the central trigger processor. During the last upgrade, a new electronics element was introduced at Level-1: L1Topo, the Topological Processor System. It makes it possible to use detailed real-time information from the Level-1 calorimeter and muon triggers, processed in individual state-of-the-art FPGA processors, to determine angles between jets and/or leptons and to calculate kinematic variables based on lists of selected/sorted objects. Over a hundred VHDL algorithms produce trigger outputs to be incorporated into the central trigger processor. Such information will be essential to improve background rejection and ...

  3. BTeV Trigger

    International Nuclear Information System (INIS)

    Gottschalk, Erik E.

    2006-01-01

    BTeV was designed to conduct precision studies of CP violation in BB-bar events using a forward-geometry detector in a hadron collider. The detector was optimized for high-rate detection of beauty and charm particles produced in collisions between protons and antiprotons. The trigger was designed to take advantage of the main difference between events with beauty and charm particles and more typical hadronic events: the presence of detached beauty and charm decay vertices. The first stage of the BTeV trigger was to receive data from a pixel vertex detector, reconstruct tracks and vertices for every beam crossing, reject at least 98% of beam crossings in which neither beauty nor charm particles were produced, and trigger on beauty events with high efficiency. An overview of the trigger design and its evolution to include commodity networking and computing components is presented

  4. Design and reliability, availability, maintainability, and safety analysis of a high availability quadruple vital computer system

    Institute of Scientific and Technical Information of China (English)

    Ping TAN; Wei-ting HE; Jia LIN; Hong-ming ZHAO; Jian CHU

    2011-01-01

    With the development of high-speed railways in China, more than 2000 high-speed trains will be put into use. Safety and efficiency of railway transportation are increasingly important. We have designed a high availability quadruple vital computer (HAQVC) system based on an analysis of the architectures of the traditional double 2-out-of-2 system and the 2-out-of-3 system. The HAQVC system is a system with high availability and safety, with prominent characteristics such as a brand-new internal architecture, high efficiency, a reliable data interaction mechanism, and an operation state change mechanism. The hardware of the vital CPU is based on ARM7 with a real-time embedded safe operating system (ES-OS). The Markov modeling method is used to evaluate the reliability, availability, maintainability, and safety (RAMS) of the system. In this paper, we demonstrate that the HAQVC system is more reliable than the all voting triple modular redundancy (AVTMR) system and the double 2-out-of-2 system. Thus, the design can be used for specific application systems, such as airplane or high-speed railway systems.
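The availability comparison can be sketched with a simplified model: treat each channel as a two-state Markov process (failure rate λ, repair rate μ, steady-state availability μ/(λ+μ)) and combine independent channels binomially. This is a rough stand-in for the paper's full Markov model, and the rates below are assumed, not taken from the HAQVC design.

```python
from math import comb

# Two-state Markov channel: up -> down at rate lam, down -> up at rate mu.
# Steady-state availability of one channel is A = mu / (lam + mu).
lam, mu = 1e-4, 1e-1           # per-hour failure / repair rates (hypothetical)
A = mu / (lam + mu)

def k_out_of_n(k, n, a):
    """P(at least k of n independent channels are up), each with availability a."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

print(f"single channel A        = {A:.6f}")
print(f"2-out-of-3 (TMR)   A    = {k_out_of_n(2, 3, A):.10f}")
print(f"2-out-of-4 (quad)  A    = {k_out_of_n(2, 4, A):.10f}")
```

Even this crude model shows the ordering the paper argues for: the quadruple arrangement tolerates more channel losses than triple modular redundancy at the same channel availability.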

  5. A centre-triggered magnesium fuelled cathodic arc thruster uses sublimation to deliver a record high specific impulse

    Science.gov (United States)

    Neumann, Patrick R. C.; Bilek, Marcela; McKenzie, David R.

    2016-08-01

    The cathodic arc is a high current, low voltage discharge that operates in vacuum and provides a stream of highly ionised plasma from a solid conducting cathode. The high ion velocities, together with the high ionisation fraction and the quasineutrality of the exhaust stream, make the cathodic arc an attractive plasma source for spacecraft propulsion applications. The specific impulse of the cathodic arc thruster is substantially increased when the emission of neutral species is reduced. Here, we demonstrate a reduction of neutral emission by exploiting sublimation in cathode spots and enhanced ionisation of the plasma in short, high-current pulses. This, combined with the enhanced directionality due to the efficient erosion profiles created by centre-triggering, substantially increases the specific impulse. We present experimentally measured specific impulses and jet power efficiencies for titanium and magnesium fuels. Our Mg fuelled source provides the highest reported specific impulse for a gridless ion thruster and is competitive with all flight rated ion thrusters. We present a model based on cathode sublimation and melting at the cathodic arc spot explaining the outstanding performance of the Mg fuelled source. A further significant advantage of an Mg-fuelled thruster is the abundance of Mg in asteroidal material and in space junk, providing an opportunity for utilising these resources in space.
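For reference, specific impulse relates to effective exhaust velocity by Isp = v_e / g0. A quick sketch follows; the velocity used is an order-of-magnitude illustration of cathodic-arc ion speeds, not the paper's measured value.

```python
G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse(exhaust_velocity_m_s: float) -> float:
    """Specific impulse in seconds from effective exhaust velocity: Isp = v_e / g0."""
    return exhaust_velocity_m_s / G0

# Cathodic-arc ion velocities are of order tens of km/s (illustrative value).
print(round(specific_impulse(30_000.0)), "s")
```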

  6. Detector tests in a high magnetic field and muon spectrometer triggering studies on a small prototype for an LHC experiment

    CERN Document Server

    Ambrosi, G; Basile, M; Battiston, R; Bergsma, F; Castro, H; Cifarelli, Luisa; Cindolo, F; Contin, A; De Pasquale, S; Gálvez, J; Gentile, S; Giusti, P; Laurent, G; Levi, G; Lin, Q; Maccarrone, G D; Mattern, D; Nania, R; Rivera, F; Schioppa, M; Sharma, A; CERN. Geneva. Detector Research and Development Committee

    1990-01-01

    The "Large Area Devices" group of the LAA project is working on R&D for muon detection at a future super-collider. New detectors are under development and the design of a muon spectrometer for an LHC experiment is under study. Our present choice is for a compact, high field, air-core toroidal muon spectrometer. Good momentum resolution is achievable in this compact solution, with at least one plane of detection elements inside the high field region. A new detector, the Blade Chamber, making use of blades instead of wires, has been developed for the forward and backward regions of the spectrometer, where polar coordinate readings are desirable.The assembling of a CERN high energy beam line, equipped with high resolution drift chambers and a strong field magnet could give us the opportunity to test our chambers in a high magnetic field and to study the muon trigger capabilities of a spectrometer, like the one proposed, on a small prototype.

  7. High Stakes Trigger the Use of Multiple Memories to Enhance the Control of Attention

    Science.gov (United States)

    Reinhart, Robert M.G.; Woodman, Geoffrey F.

    2014-01-01

    We can more precisely tune attention to highly rewarding objects than other objects in our environment, but how our brains do this is unknown. After a few trials of searching for the same object, subjects' electrical brain activity indicated that they handed off the memory representations used to control attention from working memory to long-term memory. However, when a large reward was possible, the neural signature of working memory returned as subjects recruited working memory to supplement the cognitive control afforded by the representations accumulated in long-term memory. The amplitude of this neural signature of working memory predicted the magnitude of the subsequent behavioral reward-based attention effects across tasks and individuals, showing the ubiquity of this cognitive reaction to high-stakes situations. PMID:23448876

  8. Intelligent trigger by massively parallel processors for high energy physics experiments

    International Nuclear Information System (INIS)

    Rohrbach, F.; Vesztergombi, G.

    1992-01-01

    The CERN-MPPC collaboration concentrates its effort on the development of machines based on massive parallelism with thousands of integrated processing elements, arranged in a string. Seven applications are under detailed studies within the collaboration: three for LHC, one for SSC, two for fixed target high energy physics at CERN and one for HDTV. Preliminary results are presented. They show that the objectives should be reached with the use of the ASP architecture. (author)

  9. Design and Test of a Thermal Triggered Persistent Current System using High Temperature Superconducting Tapes

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dong Keun [Department of Electrical and Electronic Engineering, Yonsei University, Shinchon-Dong 134, Seodaemun-Gu, Seoul 120-749 (Korea, Republic of); Kang, Hyoungku [Electro-Mechanical Research Institute, Hyundai Heavy Industries, Yongin (Korea, Republic of); Ahn, Min Cheol [Department of Electrical and Electronic Engineering, Yonsei University, Shinchon-Dong 134, Seodaemun-Gu, Seoul 120-749 (Korea, Republic of); Yang, Seong Eun [Department of Electrical and Electronic Engineering, Yonsei University, Shinchon-Dong 134, Seodaemun-Gu, Seoul 120-749 (Korea, Republic of); Yoon, Yong Soo [Department of Electrical Engineering, Ansan College of Technology, 671 Choji-Dong, Danwon-Gu, Ansan, 425-792 (Korea, Republic of); Lee, Sang Jin [Department of Electrical and Electronic Engineering, Yonsei University, Shinchon-Dong 134, Seodaemun-Gu, Seoul 120-749 (Korea, Republic of); Ko, Tae Kuk [Department of Electrical and Electronic Engineering, Yonsei University, Shinchon-Dong 134, Seodaemun-Gu, Seoul 120-749 (Korea, Republic of)

    2006-06-01

    A superconducting magnet operated in persistent current mode in SMES, NMR, MRI and MAGLEV systems has many advantages, such as high uniformity of the magnetic field and reduced thermal loss. A high temperature superconducting (HTS) persistent current switch (PCS) system was designed and tested in this research. The HTS PCS was optimally designed using two different HTS tapes, second-generation coated conductor (CC) HTS tape and Bi-2223 HTS tape, by the finite element method (FEM) from the viewpoint of thermal quench characteristics. The CC tape is currently the more promising wire for applications, owing to its high n-value and the independence of its critical current from external magnetic fields, compared to Bi-2223 tape. A prototype PCS system using Bi-2223 tape was also manufactured and tested. The PCS system consists of a PCS part, a heater which induces the PCS to quench, and a superconducting magnet. The test was performed under various transport current conditions. An initial current decay that appeared when the superconducting magnet was energized in the PCS system was analyzed. This paper provides a foundation for HTS PCS research.

  10. The 2006 Pingtung Earthquake Doublet Triggered Seafloor Liquefaction: Revisiting the Evidence with Ultra-High-Resolution Seafloor Mapping

    Science.gov (United States)

    Su, C. C.; Chen, T. T.; Paull, C. K.; Gwiazda, R.; Chen, Y. H.; Lundsten, E. M.; Caress, D. W.; Hsu, H. H.; Liu, C. S.

    2017-12-01

    Since Heezen and Ewing's (1952) classic work on the 1929 Grand Banks earthquake, damage to submarine cables has provided critical information on the nature of seafloor mass movements and sediment density flows. However, the understanding of the local conditions that lead to the particular seafloor failures that earthquakes trigger is still unclear. The December 26, 2006 Pingtung earthquake doublet, which occurred offshore of Fangliao Township, southwestern Taiwan, damaged 14 submarine cables between the Gaoping slope and the northern terminus of the Manila Trench. Local fishermen reported disturbed waters at the head of the Fangliao submarine canyon, which led to conjectures of eruptions of the mud volcanoes that are common off southwestern Taiwan. Geophysical surveys were conducted to evaluate this area, revealing a series of faults, liquefied strata, pockmarks and acoustically transparent sediments with doming structures that may be related to submarine groundwater discharge. Moreover, a shipboard multi-beam bathymetric survey conducted east of the Fangliao submarine canyon head shows an area of over 10 km2, with a maximum depth of around 40 m, of seafloor subsidence after the Pingtung earthquake. The north end of the subsidence is connected to the Fangliao submarine canyon, where the first cable failed after the Pingtung earthquake. The evidence suggests the earthquake triggered widespread liquefaction and generated debris flows within the Fangliao submarine canyon. In May 2017, an IONTU-MBARI Joint Survey Cruise (OR1-1163) was conducted using the MBARI Mapping AUV and miniROV to revisit the area where the cables were damaged after the Pingtung earthquake. In the newly collected ultra-high-resolution (1-m lateral resolution) bathymetry data, a stair-stepped morphology is observed at the edge of the canyon. Comet-shaped depressions are located along the main headwall of the seafloor failure. The new detailed bathymetry reveals details which suggest the Fangliao submarine canyon head is

  11. High temperature triggers latent variation among individuals: oviposition rate and probability for outbreaks.

    Directory of Open Access Journals (Sweden)

    Christer Björkman

    2011-01-01

    Full Text Available It is anticipated that extreme population events, such as extinctions and outbreaks, will become more frequent as a consequence of climate change. To evaluate the increased probability of such events, it is crucial to understand the mechanisms involved. Variation between individuals in their response to climatic factors is an important consideration, especially if microevolution is expected to change the composition of populations. Here we present data on a willow leaf beetle species, showing high variation among individuals in oviposition rate at a high temperature (20 °C). It is particularly noteworthy that not all individuals responded to changes in temperature; individuals laying few eggs at 20 °C continued to do so when transferred to 12 °C, whereas individuals that laid many eggs at 20 °C reduced their oviposition and laid the same number of eggs as the others when transferred to 12 °C. When transferred back to 20 °C, most individuals reverted to their original oviposition rate. Thus, high variation among individuals was only observed at the higher temperature. Using a simple population model and based on regional climate change scenarios, we show that the probability of outbreaks increases if there is a realistic increase in the number of warm summers. The probability of outbreaks also increased with increasing heritability of the ability to respond to increased temperature. If the climate becomes warmer and there is latent variation among individuals in their temperature response, the probability of outbreaks may increase. However, the likelihood of microevolution playing a role may be low. This conclusion is based on the fact that it has been difficult to show that microevolution affects the probability of extinctions. Our results highlight the need for caution when predicting future probabilities of extreme population events.

  12. The reliability of structural systems operating at high temperature: Replacing engineering judgement with operational experience

    International Nuclear Information System (INIS)

    Chevalier, M.J.; Smith, D.J.; Dean, D.W.

    2012-01-01

    Deterministic assessments are used to assess the integrity of structural systems operating at high temperature by providing a lower bound lifetime prediction, which requires considerable engineering judgement. However, such a result may not satisfy the purpose of the structural integrity assessment if the results are overly conservative, or conversely plant observations (such as failures) could undermine the assessment result if observed before the lower bound lifetime. This paper develops a reliability methodology for high temperature assessments and illustrates the impact and importance of managing the uncertainties within such an analysis. This is done by separating uncertainties into three classifications: aleatory uncertainty, quantifiable epistemic uncertainty and unquantifiable epistemic uncertainty. The result is a reliability model that can predict the behaviour of a structural system based upon plant observations, including failure and survival data. This can be used to reduce the over-reliance upon engineering judgement which is prevalent in deterministic assessments. Highlights: ► Deterministic assessments are shown to be heavily reliant upon engineering judgment. ► Based upon the R5 procedure, a reliability model for a structural system is developed. ► Variables must be classified as either aleatory or epistemic to model their impact on reliability. ► Operational experience is then used to reduce reliance upon engineering judgment. ► This results in a model which can predict system behaviour and learn from operational experience.

  13. High reliable and Real-time Data Communication Network Technology for Nuclear Power Plant

    International Nuclear Information System (INIS)

    Jeong, K. I.; Lee, J. K.; Choi, Y. R.; Lee, J. C.; Choi, Y. S.; Cho, J. W.; Hong, S. B.; Jung, J. E.; Koo, I. S.

    2008-03-01

    As advanced digital Instrumentation and Control (I and C) systems of NPPs (Nuclear Power Plants) are being introduced to replace analog systems, the Data Communication Network (DCN) is becoming an important system for transmitting the data generated by I and C systems in an NPP. In order to apply DCNs to NPP I and C design, DCNs should conform to applicable acceptance criteria and meet the reliability and safety goals of the system. As response time is affected by the selected protocol, network topology, network performance, and the network configuration of the I and C system, DCNs should transmit data within the time constraints and response times required by I and C systems. To meet these requirements, the DCNs of NPP I and C should form a highly reliable, real-time system. With respect to highly reliable, real-time systems, several reports and techniques bearing on the reliability and real-time requirements of DCNs are surveyed and analyzed

  14. Implementing eco friendly highly reliable upload feature using multi 3G service

    Science.gov (United States)

    Tanutama, Lukas; Wijaya, Rico

    2017-12-01

    The current trend favours eco-friendly Internet access; in this research, eco-friendly is understood as minimum power consumption. The selected devices have low power consumption in operation and effectively none when idle, as they hibernate. For reliability, a router with an internal load-balancing feature provides the improvement over previous research on multi-3G services for broadband lines. Previous studies emphasized accessing and downloading information files from Web servers residing in the public cloud. The demand is not only for speed but for high reliability of access as well, since high reliability mitigates both the direct and indirect costs of repeated attempts to upload and download large files. Nomadic and mobile computer users need a viable solution. A solution for downloading information has previously been proposed and tested, with promising results. That result is now extended to providing a reliable access line by means of redundancy and automatic reconfiguration for uploading and downloading large information files to a Web server in the cloud. The technique takes advantage of the internal load-balancing feature to provision a redundant line acting as a backup. A router that can load-balance across several WAN lines is chosen; the WAN lines are constructed from multiple 3G lines. The router supports Internet access over more than one 3G line, which increases the reliability and availability of the Internet access, as the second line immediately takes over if the first line is disturbed.
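The failover behaviour described above, where a backup 3G line takes over as soon as the active line is disturbed, can be sketched as a simple priority selection. The class and function names here are illustrative, not a real router API.

```python
# Illustrative sketch of primary/backup WAN line selection: the first
# healthy line in priority order is used; when it goes down, the next
# available line takes over automatically.
class WanLine:
    def __init__(self, name, up=True):
        self.name = name
        self.up = up      # link health as reported by the router

def select_active_line(lines):
    """Return the first healthy line in priority order, or None if all are down."""
    for line in lines:
        if line.up:
            return line
    return None

lines = [WanLine("3g-primary"), WanLine("3g-backup")]
assert select_active_line(lines).name == "3g-primary"
lines[0].up = False                                   # primary line disturbed
assert select_active_line(lines).name == "3g-backup"  # backup takes over
```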

  15. High fat diet triggers cell cycle arrest and excessive apoptosis of granulosa cells during the follicular development

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Yanqing; Zhang, Zhenghong; Liao, Xinghui; Wang, Zhengchao, E-mail: zcwang@fjnu.edu.cn

    2015-10-23

    The regulatory mechanism of granulosa cell (GC) proliferation during follicular development is complicated and multifactorial, and is essential for oocyte growth and normal ovarian function. To investigate the role of a high fat diet (HFD) on GC proliferation, 4-week-old female mice were fed an HFD or a normal control diet (NC) for 15 or 20 weeks, and the expression levels of regulatory molecules of the cell cycle and apoptosis were then measured. Abnormal ovarian morphology was observed at 20 weeks. Further mechanistic studies indicated that HFD-induced obesity caused elevated apoptotic levels in GCs of the ovaries in a time-dependent manner. Moreover, cell cycle progression was also affected after HFD feeding. The cell cycle inhibitors p27Kip1 and p21Cip1 were significantly induced in the ovaries of mice in the HFD group compared with the ovaries of mice in the NC group. Subsequently, the expression levels of Cyclin D1, D3 and CDK4 were also significantly influenced in the ovaries of HFD-fed mice in a time-dependent manner. The present results suggest that HFD-induced obesity may trigger cell cycle arrest and excessive apoptosis of GCs, causing abnormal follicular development and ovarian function failure. - Highlights: • HFD-induced obesity leads to abnormal ovarian morphology. • HFD-induced obesity triggers excessive apoptosis in the ovary. • HFD-induced obesity up-regulates the cell cycle inhibitors p21Cip1 and p27Kip1 in the ovary. • HFD-induced obesity causes cell cycle arrest in the ovary.

  16. Light-Triggered CO2 Breathing Foam via Nonsurfactant High Internal Phase Emulsion.

    Science.gov (United States)

    Zhang, Shiming; Wang, Dingguan; Pan, Qianhao; Gui, Qinyuan; Liao, Shenglong; Wang, Yapei

    2017-10-04

    Solid materials for CO2 capture and storage have attracted enormous attention for gaseous separation, environmental protection, and climate governance. However, their preparation and recovery face the problems of high energy and financial cost. Herein, a controllable CO2 capture and storage process is accomplished in an emulsion-templated polymer foam, in which CO2 is breathed in in the dark and breathed out under light illumination. Such a process is likely to become a relay of natural CO2 capture by plants, which on the contrary breathe out CO2 at night. Recyclable CO2 capture at room temperature and release under light irradiation guarantee convenient and cost-effective regeneration in industry. Furthermore, CO2 mixed with CH4 is successfully separated through this reversible breathing system, which offers great promise for CO2 enrichment and practical methane purification.

  17. High doses of the histone deacetylase inhibitor sodium butyrate trigger a stress-like response.

    Science.gov (United States)

    Gagliano, Humberto; Delgado-Morales, Raul; Sanz-Garcia, Ancor; Armario, Antonio

    2014-04-01

    The hypothalamic-pituitary-adrenal (HPA) axis is activated by a wide range of stimuli, including drugs. Here we report that in male rats, a dose of sodium butyrate (NaBu) typically used to inhibit histone deacetylation (1200 mg/kg) increased the peripheral levels of HPA hormones and glucose. In a further experiment, we compared the effects of two doses of NaBu (200 and 1200 mg/kg) and equimolar saline solutions on peripheral neuroendocrine markers and brain c-Fos expression, to demonstrate a specific stress-like effect of NaBu not related to hypertonicity and to localise the brain areas putatively involved. Only the high dose of NaBu increased the plasma levels of stress markers. The equimolar (hypertonic) saline solution also activated the HPA axis and c-Fos expression in the paraventricular nucleus of the hypothalamus (PVN), a key area for the control of the HPA axis, but the effects were of lower magnitude than those of NaBu. Regarding other brain areas, group differences in c-Fos expression were not observed in the medial prefrontal cortex or the medial amygdala, but they were observed in the central amygdala and the lateral ventral septum; however, only in the latter area did the NaBu group show c-Fos expression significantly higher than that after hypertonic saline. The present data indicate that high doses of NaBu act as a pharmacological stressor, and this should be taken into account when using the drug to study the role of epigenetic processes in learning and emotional behaviour.

  18. ELLERMAN BOMBS AT HIGH RESOLUTION. II. TRIGGERING, VISIBILITY, AND EFFECT ON UPPER ATMOSPHERE

    Energy Technology Data Exchange (ETDEWEB)

    Vissers, Gregal J. M.; Rouppe van der Voort, Luc H. M.; Rutten, Robert J., E-mail: g.j.m.vissers@astro.uio.no [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, NO-0315 Oslo (Norway)

    2013-09-01

    We use high-resolution imaging spectroscopy with the Swedish 1-m Solar Telescope (SST) to study the transient brightenings of the wings of the Balmer Hα line in emerging active regions that are called Ellerman bombs. Simultaneous sampling of Ca II 8542 Å with the SST confirms that most Ellerman bombs also occur in the wings of this line, but with markedly different morphology. Simultaneous images from the Solar Dynamics Observatory (SDO) show that Ellerman bombs are also detectable in the photospheric 1700 Å continuum, again with differing morphology. They are also observable in 1600 Å SDO images, but with much contamination from C IV emission in transition-region features. Simultaneous SST spectropolarimetry in Fe I 6301 Å shows that Ellerman bombs occur at sites of strong-field magnetic flux cancellation between small bipolar strong-field patches that rapidly move together over the solar surface. Simultaneous SDO images in He II 304 Å, Fe IX 171 Å, and Fe XIV 211 Å show no clear effect of the Ellerman bombs on the overlying transition region and corona. These results strengthen our earlier suggestion, based on Hα morphology alone, that the Ellerman bomb phenomenon is a purely photospheric reconnection phenomenon.

  19. High hydrostatic pressure leads to free radicals accumulation in yeast cells triggering oxidative stress.

    Science.gov (United States)

    Bravim, Fernanda; Mota, Mainã M; Fernandes, A Alberto R; Fernandes, Patricia M B

    2016-08-01

    Saccharomyces cerevisiae is a unicellular organism that is exposed to a variable environment during the fermentative process; hence, resistance to multiple stress conditions is a desirable trait. The stress caused by high hydrostatic pressure (HHP) in S. cerevisiae resembles the injuries generated by other industrial stresses. In this study, it was confirmed that the gene expression pattern in response to HHP displays an oxidative stress response profile, which is expanded upon hydrostatic pressure release. Indeed, reactive oxygen species (ROS) concentrations increased in yeast cells exposed to HHP treatment, and an incubation period at room pressure led to a decrease in intracellular ROS concentration. On the other hand, ethylic, thermic and osmotic stresses did not result in any ROS accumulation in yeast cells. Microarray analysis revealed an upregulation of genes related to methionine metabolism, which appears to be a cellular response specific to HHP and not related to other stresses, such as heat and osmotic stress. Next, we investigated whether enhanced oxidative stress tolerance leads to enhanced tolerance to HHP stress. Overexpression of STF2 is known to enhance tolerance to oxidative stress, and we show that it also leads to enhanced tolerance to HHP stress.

  20. Transferring Aviation Practices into Clinical Medicine for the Promotion of High Reliability.

    Science.gov (United States)

    Powell-Dunford, Nicole; McPherson, Mark K; Pina, Joseph S; Gaydos, Steven J

    2017-05-01

    Aviation is a classic example of a high reliability organization (HRO): an organization in which catastrophic events would be expected to occur in the absence of control measures. As health care systems transition toward high reliability, aviation practices are increasingly transferred for clinical implementation. A PubMed search using the terms aviation, crew resource management, and patient safety was undertaken. Manuscripts authored by physician pilots, together with accident investigation regulations, were analyzed. Subject matter experts involved in the adoption of aviation practices into the medical field were interviewed. The PubMed search yielded 621 results, with 22 relevant for inclusion. Improved clinical outcomes were noted in five research trials in which aviation practices were adopted, particularly with regard to checklist usage and crew resource management training. The effectiveness of interventions was influenced by the intensity of application, leadership involvement, and provision of staff training. The usefulness of incorporating mishap investigation techniques has not been established: whereas aviation accident investigation is highly standardized, the investigation of medical error is characterized by variation. The adoption of aviation practices into clinical medicine facilitates an evolution toward high reliability. Evidence for the efficacy of the checklist and crew resource management training is robust; transference of aviation accident investigation practices is preliminary. A standardized, independent investigation process could facilitate the development of a safety culture commensurate with that achieved in the aviation industry. Powell-Dunford N, McPherson MK, Pina JS, Gaydos SJ. Transferring aviation practices into clinical medicine for the promotion of high reliability. Aerosp Med Hum Perform. 2017; 88(5):487-491.

  1. Design of piezoelectric transducer layer with electromagnetic shielding and high connection reliability

    International Nuclear Information System (INIS)

    Qiu, Lei; Yuan, Shenfang; Shi, Xiaoling; Huang, Tianxiang

    2012-01-01

    Piezoelectric transducer (PZT) and Lamb wave based structural health monitoring (SHM) methods have been widely studied for on-line SHM of high-performance structures. To monitor large-scale structures, a dense PZT array is required. In order to improve the placement efficiency and reduce the wiring burden of the PZT array, the concept of the piezoelectric transducers layer (PSL) was proposed. The PSL consists of PZTs, a flexible interlayer with printed wires, and a signal input/output interface. For on-line SHM on real aircraft structures, there are two main issues: electromagnetic interference and the connection reliability of the PSL. To address these issues, an electromagnetic shielding design method for the PSL that reduces spatial electromagnetic noise and crosstalk is proposed, and a connection reliability design method based on a combined welding–cementation process is proposed to enhance the connection reliability between the PZTs and the flexible interlayer. Two experiments on electromagnetic interference suppression are performed to validate the shielding design of the PSL. The experimental results show that the amplitudes of the spatial electromagnetic noise and crosstalk output from the shielded PSL developed in this paper are −15 dB and −25 dB lower, respectively, than those of the ordinary PSL. Two further experiments, on temperature durability (−55 °C to 80 °C) and strength durability (160–1600 με, one million load cycles), are applied to the PSL to validate the connection reliability. The low repeatability errors (less than 3% and less than 5%, respectively) indicate that the developed PSL has high connection reliability and a long fatigue life. (paper)

  2. Design of piezoelectric transducer layer with electromagnetic shielding and high connection reliability

    Science.gov (United States)

    Qiu, Lei; Yuan, Shenfang; Shi, Xiaoling; Huang, Tianxiang

    2012-07-01

    Piezoelectric transducer (PZT) and Lamb wave based structural health monitoring (SHM) methods have been widely studied for on-line SHM of high-performance structures. To monitor large-scale structures, a dense PZT array is required. In order to improve the placement efficiency and reduce the wiring burden of the PZT array, the concept of the piezoelectric transducers layer (PSL) was proposed. The PSL consists of PZTs, a flexible interlayer with printed wires, and a signal input/output interface. For on-line SHM on real aircraft structures, there are two main issues: electromagnetic interference and the connection reliability of the PSL. To address these issues, an electromagnetic shielding design method for the PSL that reduces spatial electromagnetic noise and crosstalk is proposed, and a connection reliability design method based on a combined welding-cementation process is proposed to enhance the connection reliability between the PZTs and the flexible interlayer. Two experiments on electromagnetic interference suppression are performed to validate the shielding design of the PSL. The experimental results show that the amplitudes of the spatial electromagnetic noise and crosstalk output from the shielded PSL developed in this paper are −15 dB and −25 dB lower, respectively, than those of the ordinary PSL. Two further experiments, on temperature durability (−55 °C to 80 °C) and strength durability (160–1600 με, one million load cycles), are applied to the PSL to validate the connection reliability. The low repeatability errors (less than 3% and less than 5%, respectively) indicate that the developed PSL has high connection reliability and a long fatigue life.

  3. Heating-Rate-Triggered Carbon-Nanotube-based 3-Dimensional Conducting Networks for a Highly Sensitive Noncontact Sensing Device

    KAUST Repository

    Tai, Yanlong

    2016-01-28

    Recently, flexible and transparent conductive films (TCFs) have drawn increasing attention for their central role in future applications of flexible electronics. Here, we report the controllable fabrication of TCFs for moisture-sensing applications based on heating-rate-triggered, 3-dimensional porous conducting networks formed through drop-casting lithography of single-walled carbon nanotube (SWCNT)/poly(3,4-ethylenedioxythiophene)-polystyrene sulfonate (PEDOT:PSS) ink. How the ink formulation and baking conditions influence the self-assembled microstructure of the TCFs is discussed. When the formulation parameters are well optimized (SWCNT to PEDOT:PSS weight ratio of 1:0.5, SWCNT concentration of 0.3 mg/ml, and heating rate of 36 °C/minute), the sensor presents high performance, including a reasonable sheet resistance (2.1 kΩ/sq), high visible-range transmittance (>69%, versus 90% for PET), and good stability under cyclic loading (>1000 cycles, better than indium tin oxide film). Moreover, the benefits of this kind of TCF were verified through a fully transparent, highly sensitive, rapid-response, noncontact moisture-sensing device (5 × 5 sensing pixels).

  4. Surviving the Lead Reliability Engineer Role in High Unit Value Projects

    Science.gov (United States)

    Perez, Reinaldo J.

    2011-01-01

    A project with a very high unit value within a company is defined as one where a) the project constitutes a one-of-a-kind (or two-of-a-kind) national-asset type of project, b) the cost is very large, and c) a mission failure would be a very public event that would hurt the company's image. The lead reliability engineer in a high-visibility project is by default involved in all phases of the project, from conceptual design to manufacture and testing. This paper explores a series of lessons learned over ten years of practical industrial experience by a lead reliability engineer, and expands on the concepts outlined by these lessons via examples. The lessons learned are applicable to all industries.

  5. Reliability and validity of the academic motivation scale for sports high school students

    Directory of Open Access Journals (Sweden)

    Haslofça Fehime

    2016-01-01

    This study was designed to test the validity and reliability of the Academic Motivation Scale (AMS) for sports high school students. The research was conducted with 357 volunteers, girls (n=117) and boys (n=240). Confirmatory factor analysis showed that the chi-square (χ2), degrees of freedom (df) and χ2/df ratio were 1102.90, 341 and 3.234, respectively. The Goodness of Fit Index, Comparative Fit Index, Non-normed Fit Index and Incremental Fit Index were between 0.92 and 0.95. Additionally, the Adjusted Goodness of Fit Index, the root mean square residual and the Root Mean Square Error of Approximation were 0.88, 0.070 and 0.079, respectively. Subscale reliability coefficients were between 0.77 and 0.86. Test-retest correlations of the AMS were found to be between 0.79 and 0.91. The results showed that the scale is suitable for determining sports high school students' academic motivation levels.
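As a quick arithmetic check, the reported χ2/df ratio follows directly from the chi-square statistic and degrees of freedom given above:

```python
# Reproduce the reported relative chi-square (chi-square / df).
chi_square = 1102.90
df = 341
ratio = chi_square / df
assert round(ratio, 3) == 3.234  # matches the value reported in the abstract
```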

  6. A single lithium-ion battery protection circuit with high reliability and low power consumption

    International Nuclear Information System (INIS)

    Jiang Jinguang; Li Sen

    2014-01-01

    A single lithium-ion battery protection circuit with high reliability and low power consumption is proposed. The protection circuit achieves high reliability because the voltage and current of the battery are kept within a safe range: the circuit immediately activates a protective function when the voltage or current of the battery goes beyond the safe range. In order to reduce the circuit's power consumption, a sleep-state control circuit is developed. Additionally, the output frequency of the ring oscillator can be adjusted continuously and precisely via the charging capacitors and the constant-current source. The proposed protection circuit is fabricated in a 0.5 μm mixed-signal CMOS process. The measured reference voltage is 1.19 V, the overvoltage threshold is 4.2 V and the undervoltage threshold is 2.2 V. The total power consumption is about 9 μW. (semiconductor integrated circuits)
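The voltage-window behaviour described above can be sketched as a simple classifier using the thresholds reported in the abstract (4.2 V overvoltage, 2.2 V undervoltage). The function name and return values are illustrative; the actual circuit implements this logic in mixed-signal hardware, not software.

```python
# Sketch of the voltage-window protection logic: the battery is cut off
# when its cell voltage leaves the safe range. Thresholds are the values
# reported in the abstract.
OVERVOLTAGE_V = 4.2
UNDERVOLTAGE_V = 2.2

def protection_state(cell_voltage):
    """Classify a cell voltage against the protection thresholds."""
    if cell_voltage > OVERVOLTAGE_V:
        return "overvoltage-cutoff"
    if cell_voltage < UNDERVOLTAGE_V:
        return "undervoltage-cutoff"
    return "normal"

assert protection_state(3.7) == "normal"
assert protection_state(4.25) == "overvoltage-cutoff"
assert protection_state(2.0) == "undervoltage-cutoff"
```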

  7. Toward reliable and repeatable automated STEM-EDS metrology with high throughput

    Science.gov (United States)

    Zhong, Zhenxin; Donald, Jason; Dutrow, Gavin; Roller, Justin; Ugurlu, Ozan; Verheijen, Martin; Bidiuk, Oleksii

    2018-03-01

    New materials and designs with complex 3D architectures in logic and memory devices have increased the complexity of S/TEM metrology. In this paper, we report on a newly developed, automated, scanning transmission electron microscopy (STEM) based, energy dispersive X-ray spectroscopy (STEM-EDS) metrology method that addresses these challenges. Different methodologies toward repeatable and efficient automated STEM-EDS metrology with high throughput are presented: we introduce the best known auto-EDS acquisition and quantification methods for robust and reliable metrology, and present how the electron exposure dose impacts EDS metrology reproducibility, either through poor signal-to-noise ratio (SNR) at low dose or through sample modification at high dose. Finally, we discuss the limitations of the STEM-EDS metrology technique and propose strategies to optimize the process both in terms of throughput and metrology reliability.

  8. Highly-reliable operation of 638-nm broad stripe laser diode with high wall-plug efficiency for display applications

    Science.gov (United States)

    Yagi, Tetsuya; Shimada, Naoyuki; Nishida, Takehiro; Mitsuyama, Hiroshi; Miyashita, Motoharu

    2013-03-01

    Laser-based displays, from pico to cinema laser projectors, have gathered much attention because of their wide colour gamut, low power consumption, and other advantages. Laser light sources for displays are operated mainly in CW mode, and heat management is one of the big issues; therefore, highly efficient operation is required, and the light sources must also be highly reliable. A 638 nm broad stripe laser diode (LD) was newly developed for highly efficient and highly reliable operation. An AlGaInP/GaAs red LD suffers from low wall-plug efficiency (WPE) due to electron overflow from the active layer to the p-cladding layer. A design with a large optical confinement factor (Γ) and AlInP cladding layers is adopted to improve the WPE. This design has a disadvantage for reliable operation, because the large Γ causes high optical density and brings about catastrophic optical degradation (COD) at the front facet. To overcome this disadvantage, a window-mirror structure is also adopted in the LD. The LD shows a WPE of 35% at 25°C, the highest reported in the world, and highly stable operation at 35°C and 550 mW up to 8,000 hours without any catastrophic optical degradation.

  9. Applying the High Reliability Health Care Maturity Model to Assess Hospital Performance: A VA Case Study.

    Science.gov (United States)

    Sullivan, Jennifer L; Rivard, Peter E; Shin, Marlena H; Rosen, Amy K

    2016-09-01

    The lack of a tool for categorizing and differentiating hospitals according to their high reliability organization (HRO)-related characteristics has hindered progress toward implementing and sustaining evidence-based HRO practices. Hospitals would benefit both from an understanding of the organizational characteristics that support HRO practices and from knowledge about the steps necessary to achieve HRO status to reduce the risk of harm and improve outcomes. The High Reliability Health Care Maturity (HRHCM) model, a model for health care organizations' achievement of high reliability with zero patient harm, incorporates three major domains critical for promoting HROs: Leadership, Safety Culture, and Robust Process Improvement®. A study was conducted to examine the content validity of the HRHCM model and to evaluate whether it can differentiate hospitals' maturity levels for each of the model's components. Staff perceptions of patient safety at six US Department of Veterans Affairs (VA) hospitals were examined to determine whether all 14 HRHCM components were present and to characterize each hospital's level of organizational maturity. Twelve of the 14 components from the HRHCM model were detected; two additional characteristics emerged that are present in the HRO literature but not represented in the model: teamwork culture and system-focused tools for learning and improvement. Each hospital's level of organizational maturity could be characterized for 9 of the 14 components. The findings suggest the HRHCM model has good content validity and that there is differentiation between hospitals on model components. Additional research is needed to understand how these components can be used to build the infrastructure necessary for reaching high reliability.

  10. Study on highly reliable digital communication technology of reactor nuclear measuring equipment

    International Nuclear Information System (INIS)

    Gu Pengfei; Huang Xiaojin

    2007-01-01

    To meet the need for highly reliable reactor nuclear measuring equipment, and in view of the specific requirements of such equipment, current technical developments and industrial applications, we design a redundant communication network based on PROFIBUS, together with a communication interface module based on redundant PROFIBUS communication, which links the nuclear measuring equipment to the PROFIBUS network and also lays a foundation for further research. (authors)

  11. Gearbox Reliability Collaborative Investigation of High-Speed-Shaft Bearing Loads

    Energy Technology Data Exchange (ETDEWEB)

    Keller, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Guo, Yi [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-06-01

    The loads and contact stresses in the bearings of the high-speed-shaft section of the Gearbox Reliability Collaborative gearbox are examined in this paper. The loads were measured through strain gauges installed on the bearing outer races during dynamometer testing of the gearbox. Loads and stresses were also predicted with a simple analytical model and with higher-fidelity commercial models. The experimental data compared favorably to each model, and bearing stresses were below the thresholds for contact fatigue and axial cracking.

  12. DJ-1 is a reliable serum biomarker for discriminating high-risk endometrial cancer.

    Science.gov (United States)

    Di Cello, Annalisa; Di Sanzo, Maddalena; Perrone, Francesca Marta; Santamaria, Gianluca; Rania, Erika; Angotti, Elvira; Venturella, Roberta; Mancuso, Serafina; Zullo, Fulvio; Cuda, Giovanni; Costanzo, Francesco

    2017-06-01

    New reliable approaches to stratify patients with endometrial cancer into risk categories are highly needed. We have recently demonstrated that DJ-1 is overexpressed in endometrial cancer, showing significantly higher levels both in serum and tissue of patients with high-risk endometrial cancer compared with low-risk endometrial cancer. In this experimental study, we further extended our observation, evaluating the role of DJ-1 as an accurate serum biomarker for high-risk endometrial cancer. A total of 101 endometrial cancer patients and 44 healthy subjects were prospectively recruited. DJ-1 serum levels were evaluated comparing cases and controls and, among endometrial cancer patients, between high- and low-risk patients. The results demonstrate that DJ-1 levels are significantly higher in cases versus controls and in high- versus low-risk patients. The receiver operating characteristic curve analysis shows that DJ-1 has a very good diagnostic accuracy in discriminating endometrial cancer patients versus controls and an excellent accuracy in distinguishing, among endometrial cancer patients, low- from high-risk cases. DJ-1 sensitivity and specificity are the highest when high- and low-risk patients are compared, reaching the value of 95% and 99%, respectively. Moreover, DJ-1 serum levels seem to be correlated with worsening of the endometrial cancer grade and histotype, making it a reliable tool in the preoperative decision-making process.
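The 95% sensitivity and 99% specificity figures quoted above follow the standard confusion-matrix definitions; the counts in this sketch are made up purely to illustrate the arithmetic, not taken from the study.

```python
# Sensitivity and specificity as used for diagnostic accuracy figures.
# The counts below are hypothetical, chosen only to reproduce 95%/99%.
def sensitivity(tp, fn):
    """True-positive rate: fraction of diseased cases correctly flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of healthy cases correctly cleared."""
    return tn / (tn + fp)

# e.g. 19 of 20 high-risk patients flagged, 99 of 100 low-risk cleared
assert abs(sensitivity(19, 1) - 0.95) < 1e-12
assert abs(specificity(99, 1) - 0.99) < 1e-12
```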

  13. Sintered tantalum carbide coatings on graphite substrates: Highly reliable protective coatings for bulk and epitaxial growth

    International Nuclear Information System (INIS)

    Nakamura, Daisuke; Suzumura, Akitoshi; Shigetoh, Keisuke

    2015-01-01

    Highly reliable low-cost protective coatings have been sought after for use in crucibles and susceptors for bulk and epitaxial film growth processes involving wide bandgap materials. Here, we propose a production technique for ultra-thick (50–200 μm) tantalum carbide (TaC) protective coatings on graphite substrates, which consists of TaC slurry application and subsequent sintering processes, i.e., a wet ceramic process. Structural analysis of the sintered TaC layers indicated that they have a dense granular structure containing coarse grains with sizes of 10–50 μm. Furthermore, no cracks or pinholes penetrated through the layers, i.e., the TaC layers are highly reliable protective coatings. The analysis also indicated that no plastic deformation occurred during the production process, and that the non-textured crystalline orientation of the TaC layers is the origin of their high reliability and durability. The TaC-coated graphite crucibles were tested in an aluminum nitride (AlN) sublimation growth process, which involves extremely corrosive conditions, and demonstrated practical reliability and durability as TaC-coated graphite in the AlN growth process. The application of the TaC-coated graphite materials to crucibles and susceptors for use in bulk AlN single crystal growth, bulk silicon carbide (SiC) single crystal growth, chemical vapor deposition of epitaxial SiC films, and metal-organic vapor phase epitaxy of group-III nitrides will lead to further improvements in crystal quality and reduced processing costs

  14. Performance and reliability of the Y-Balance Test™ in high school athletes.

    Science.gov (United States)

    Smith, Laura J; Creps, James R; Bean, Ryan; Rodda, Becky; Alsalaheen, Bara

    2017-11-07

    Lower extremity injuries account for 32.9% of the overall injuries in high school athletes. Previous research has suggested that an asymmetry greater than 4 cm in the anterior direction of the Y-Balance Test™ Lower Quarter (YBT-LQ) is predictive of non-contact injuries in adults and collegiate athletes. The prevalence of asymmetries or abnormal YBT-LQ performance is not well documented for adolescents. The primary purposes of this study are: 1) to characterize the prevalence of YBT-LQ asymmetries and performance in a cross-sectional sample of adolescents, 2) to examine possible differences in YBT-LQ performance between male and female adolescents, and 3) to describe the test-retest reliability of the YBT-LQ in a subsample of adolescents. This was an observational cross-sectional study in which 51 male and 59 female high-school athletes completed the YBT-LQ as the main outcome measure. Asymmetries greater than 4 cm in the posteromedial (PM) reach direction were most prevalent for male (54.9%) and female (50.8%) participants. Females presented with slightly higher composite scores. Good reliability (ICC = 0.89) was found for the anterior (ANT) direction, and moderate reliability for the posterolateral (PL, ICC = 0.76) and PM (ICC = 0.63) directions. The MDC95 for the ANT direction was 6%, and 12% for both the PL and PM directions. YBT-LQ performance can be useful for assessing recovery in an injured extremity compared with the other limb; however, due to the large MDC95 noted in the PM and PL directions, differences between sequential tests cannot be attributed to true change in balance unless they exceed the MDC95. In this study, 79% of the athletes presented with at least one asymmetry in YBT-LQ reach distances. The moderate reliability in the PL and PM directions warrants re-examination of the definition of asymmetry in these directions.
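The asymmetry criterion used above, a left/right reach difference greater than 4 cm in a given direction, reduces to a one-line comparison. The measurement values in this sketch are hypothetical examples in centimetres.

```python
# Sketch of the YBT-LQ asymmetry flag: a between-limb reach difference
# greater than 4 cm in a reach direction is counted as an asymmetry.
ASYMMETRY_THRESHOLD_CM = 4.0

def is_asymmetric(left_reach_cm, right_reach_cm):
    """Flag a reach direction whose left/right difference exceeds 4 cm."""
    return abs(left_reach_cm - right_reach_cm) > ASYMMETRY_THRESHOLD_CM

assert is_asymmetric(62.0, 57.0)       # 5 cm difference: flagged
assert not is_asymmetric(62.0, 59.0)   # 3 cm difference: within threshold
```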

  15. Sintered tantalum carbide coatings on graphite substrates: Highly reliable protective coatings for bulk and epitaxial growth

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Daisuke; Suzumura, Akitoshi; Shigetoh, Keisuke [Toyota Central R and D Labs., Inc., Nagakute, Aichi 480-1192 (Japan)

    2015-02-23

    Highly reliable, low-cost protective coatings have been sought for use in crucibles and susceptors for bulk and epitaxial film growth processes involving wide bandgap materials. Here, we propose a production technique for ultra-thick (50–200 μm) tantalum carbide (TaC) protective coatings on graphite substrates, which consists of TaC slurry application and subsequent sintering, i.e., a wet ceramic process. Structural analysis of the sintered TaC layers indicated that they have a dense granular structure containing coarse grains with sizes of 10–50 μm. Furthermore, no cracks or pinholes penetrated through the layers, i.e., the TaC layers are highly reliable protective coatings. The analysis also indicated that no plastic deformation occurred during the production process, and that the non-textured crystalline orientation of the TaC layers is the origin of their high reliability and durability. The TaC-coated graphite crucibles were tested in an aluminum nitride (AlN) sublimation growth process, which involves extremely corrosive conditions, demonstrating the practical reliability and durability of TaC-coated graphite in the AlN growth process. The application of TaC-coated graphite materials to crucibles and susceptors for use in bulk AlN single crystal growth, bulk silicon carbide (SiC) single crystal growth, chemical vapor deposition of epitaxial SiC films, and metal-organic vapor phase epitaxy of group-III nitrides will lead to further improvements in crystal quality and reduced processing costs.

  16. Development of high speed and reliable data transmission system for industrial CT

    International Nuclear Information System (INIS)

    Gao Fuqiang; Dong Yanli; Liu Guohua

    2010-01-01

    In order to meet the requirements of large capacity, high speed and high reliability of data transmission for industrial CT, a data transmission system based on USB 2.0 was designed. In the process of data transmission, an FPGA was the main controller, and the USB 2.0 chip CY7C68013A worked in slave FIFO mode. The system sent the data acquired by the data acquisition system to the host computer for image reconstruction. The testing results show that the transmission rate can reach 33 MB/s and the precision is 100%. The system satisfies the requirements of data transmission for industrial CT. (authors)

  17. An FPGA based track finder for the L1 trigger of the CMS experiment at the High Luminosity LHC

    CERN Document Server

    Tomalin, Ian; Ball, Fionn Amhairghen; Balzer, Matthias Norbert; Boudoul, Gaelle; Brooke, James John; Caselle, Michele; Calligaris, Luigi; Cieri, Davide; Clement, Emyr John; Dutta, Suchandra; Hall, Geoffrey; Harder, Kristian; Hobson, Peter; Iles, Gregory Michiel; James, Thomas Owen; Manolopoulos, Konstantinos; Matsushita, Takashi; Morton, Alexander; Newbold, David; Paramesvaran, Sudarshan; Pesaresi, Mark Franco; Pozzobon, Nicola; Reid, Ivan; Rose, A. W; Sander, Oliver; Shepherd-Themistocleous, Claire; Shtipliyski, Antoni; Schuh, Thomas; Skinnari, Louise; Summers, Sioni Paris; Tapper, Alexander; Thea, Alessandro; Uchida, Kirika; Vichoudis, Paschalis; Viret, Sebastien; Weber, M; Aggleton, Robin Cameron

    2017-12-14

    A new tracking detector is under development for use by the CMS experiment at the High-Luminosity LHC (HL-LHC). A crucial requirement of this upgrade is to provide the ability to reconstruct all charged particle tracks with transverse momentum above 2-3 GeV within 4 μs so they can be used in the Level-1 trigger decision. A concept for an FPGA-based track finder using a fully time-multiplexed architecture is presented, where track candidates are reconstructed using a projective binning algorithm based on the Hough Transform, followed by a combinatorial Kalman Filter. A hardware demonstrator using MP7 processing boards has been assembled to prove the entire system functionality, from the output of the tracker readout boards to the reconstruction of tracks with fitted helix parameters. It successfully operates on one eighth of the tracker solid angle acceptance at a time, processing events taken at 40 MHz, each with up to 200 superimposed proton-proton interactions, whilst satisfying the latency requirement. ...
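The projective-binning idea behind the Hough-transform stage can be illustrated with a toy r–φ version: each stub votes for every (curvature, φ0) bin consistent with a helix through it, and heavily populated bins become track candidates. The binning granularity, threshold, and stub coordinates below are illustrative assumptions, not the CMS implementation:

```python
import math
from collections import defaultdict

def hough_track_candidates(stubs, n_phi=32, n_curv=16, curv_max=0.005, min_stubs=4):
    """Toy r-phi Hough transform ("projective binning"): each stub (r, phi)
    votes, for every curvature hypothesis, in the (curvature, phi0) bin that
    satisfies phi = phi0 + curvature * r. Bins collecting at least min_stubs
    votes become track candidates."""
    two_pi = 2.0 * math.pi
    acc = defaultdict(list)
    for r, phi in stubs:
        for ic in range(n_curv):
            curv = -curv_max + 2.0 * curv_max * (ic + 0.5) / n_curv  # bin centre
            phi0 = (phi - curv * r) % two_pi
            iphi = int(phi0 / two_pi * n_phi)
            acc[(ic, iphi)].append((r, phi))
    return {k: v for k, v in acc.items() if len(v) >= min_stubs}

# Toy event: eight stubs lying on a single track (curvature 0.002, phi0 = 1.1)
stubs = [(r, (1.1 + 0.002 * r) % (2.0 * math.pi)) for r in range(30, 110, 10)]
candidates = hough_track_candidates(stubs)
```

On hardware the accumulator becomes an array of counters filled in parallel, which is what makes the approach attractive for a fixed-latency FPGA pipeline; the candidate lists would then seed the Kalman Filter fit.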

  18. Improvements of the ALICE high level trigger for LHC Run 2 to facilitate online reconstruction, QA, and calibration

    Energy Technology Data Exchange (ETDEWEB)

    Rohr, David [Frankfurt Institute for Advanced Studies, Frankfurt (Germany); Collaboration: ALICE-Collaboration

    2016-07-01

    ALICE is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. Its main goal is the study of matter under extreme pressure and temperature as produced in heavy ion collisions at LHC. The ALICE High Level Trigger (HLT) is an online compute farm of around 200 nodes that performs a real time event reconstruction of the data delivered by the ALICE detectors. The HLT employs a fast FPGA based cluster finder algorithm as well as a GPU based track reconstruction algorithm and it is designed to process the maximum data rate expected from the ALICE detectors in real time. We present new features of the HLT for LHC Run 2 that started in 2015. A new fast standalone track reconstruction algorithm for the Inner Tracking System (ITS) enables the HLT to compute and report to LHC the luminous region of the interactions in real time. We employ a new dynamically reconfigurable histogram component that allows the visualization of characteristics of the online reconstruction using the full set of events measured by the detectors. This improves our monitoring and QA capabilities. During Run 2, we plan to deploy online calibration, starting with the calibration of the TPC (Time Projection Chamber) detector's drift time. First proof of concept tests were successfully performed using data-replay on our development cluster and during the heavy ion period at the end of 2015.

  19. Hadron correlation in jets on the near and away sides of high-pT triggers in heavy-ion collisions

    International Nuclear Information System (INIS)

    Hwa, Rudolph C.; Yang, C. B.

    2009-01-01

    The correlation between the trigger and associated particles in jets produced on the near and away sides of high-pT triggers in heavy-ion collisions is studied. Hadronization of jets on both sides is treated by thermal-shower and shower-shower recombination. The energy loss of semihard and hard partons traversing the nuclear medium is parametrized in a way that renders a good fit of the single-particle inclusive distributions at all centralities. The associated hadron distribution in the near-side jet can be determined, showing weak dependence on system size because of trigger bias. The inverse slope increases with trigger momentum in agreement with data. The distribution of associated particles in the away-side jet is also studied, with careful attention given to the antitrigger bias that is due to the longer path length that the away-side jet recoiling against the trigger jet must propagate in the medium to reach the opposite side. Centrality dependence is taken into account after determining a realistic probability distribution of the dynamical path length of the parton trajectory within each class of centrality. For symmetric dijets with pT(trig) = pT(assoc, away), it is shown that the per-trigger yield is dominated by tangential jets. For unequal pT(trig), pT(assoc, near) and pT(assoc, away), the yields are calculated for various centralities, showing an intricate relationship among them. The near-side yield agrees with data both in centrality dependence and in pT(assoc, near) distribution. The average parton momentum for the recoil jet is shown to be always larger than that of the trigger jet for fixed pT(trig) and centrality and for any measurable pT(assoc, away). With the comprehensive treatment of dijet production described here, it is possible to answer many questions regarding the behavior of partons in the medium under conditions that can be specified by measurable hadron momenta.

  20. Studies on the reliability of high-field intra-operative MRI in brain glioma resection

    Directory of Open Access Journals (Sweden)

    Zhi-jun SONG

    2011-07-01

    Objective: To evaluate the reliability of high-field intra-operative magnetic resonance imaging (iMRI) in detecting residual tumors during glioma resection. Method: One hundred and thirty-one cases of brain glioma (69 males and 62 females, aged from 7 to 79 years with a mean of 39.6 years), hospitalized from Nov. 2009 to Aug. 2010, were involved in the present study. All the patients were evaluated using magnetic resonance imaging (MRI) before the operation. The tumors were resected under a conventional navigation microscope, and high-field iMRI was used for all the patients when the operators considered the tumor to be satisfactorily resected; residual tumor that was difficult to detect under the microscope was resected after being revealed by high-field iMRI. Histopathological examination was performed. The patients without residual tumors received a high-field MRI scan on day 4 or 5 after the operation to evaluate the accuracy of high-field iMRI during the operation. Results: High-quality intra-operative images were obtained by using high-field iMRI. Twenty-eight cases were excluded because their residual tumors were not resected, owing to their location too close to functional areas. Combined with the results of intra-operative histopathological examination and post-operative MRI at the early recovery stage, the sensitivity of high-field iMRI in residual tumor diagnosis was 98.0% (49/50), the specificity was 94.3% (50/53), and the accuracy was 96.1% (99/103). Conclusion: High-quality intra-operative imaging could be acquired by high-field iMRI, which may be used as a safe and reliable method for detecting residual tumors during glioma resection.
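The sensitivity, specificity, and accuracy figures quoted above follow directly from the 2×2 confusion counts reported in the abstract (49/50 true positives, 50/53 true negatives); a minimal sketch of that computation:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and overall accuracy from 2x2 confusion counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Counts as reported for iMRI residual-tumor detection in this study
sens, spec, acc = diagnostic_metrics(tp=49, fn=1, tn=50, fp=3)
```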

  1. High reliability - low noise radionuclide signature identification algorithms for border security applications

    Science.gov (United States)

    Lee, Sangkyu

    Illicit trafficking and smuggling of radioactive materials and special nuclear materials (SNM) are considered among the most important recent global nuclear threats. Monitoring the transport and safety of radioisotopes and SNM is challenging due to their weak signals and easy shielding. Great efforts worldwide are focused on developing and improving detection technologies and algorithms for accurate and reliable detection of radioisotopes of interest, thus better securing the borders against nuclear threats. In general, radiation portal monitors enable detection of gamma and neutron emitting radioisotopes. Passive and active interrogation techniques, present and/or under development, are all aimed at increasing accuracy and reliability, and at shortening the time of interrogation as well as reducing the cost of the equipment. Equally important efforts are aimed at advancing algorithms to process the imaging data in an efficient manner, providing reliable "readings" of the interiors of examined volumes of various sizes, ranging from cargos to suitcases. The main objective of this thesis is to develop two synergistic algorithms with the goal of providing highly reliable, low-noise identification of radioisotope signatures. These algorithms combine analysis of a passive radioactive detection technique with active interrogation imaging techniques such as gamma radiography or muon tomography. One algorithm consists of gamma spectroscopy and cosmic muon tomography, and the other algorithm is based on gamma spectroscopy and gamma radiography. The purpose of fusing two detection methodologies per algorithm is to find both heavy-Z radioisotopes and shielding materials, since radionuclides can be identified with gamma spectroscopy, and shielding materials can be detected using muon tomography or gamma radiography. These combined algorithms are created and analyzed based on numerically generated images of various cargo sizes and materials. 
In summary, the three detection

  2. Coronary calcium screening with dual-source CT: reliability of ungated, high-pitch chest CT in comparison with dedicated calcium-scoring CT

    Energy Technology Data Exchange (ETDEWEB)

    Hutt, Antoine; Faivre, Jean-Baptiste; Remy, Jacques; Remy-Jardin, Martine [CHRU et Universite de Lille, Department of Thoracic Imaging, Hospital Calmette (EA 2694), Lille (France); Duhamel, Alain; Deken, Valerie [CHRU et Universite de Lille, Department of Biostatistics (EA 2694), Lille (France); Molinari, Francesco [Centre Hospitalier General de Tourcoing, Department of Radiology, Tourcoing (France)

    2016-06-15

    To investigate the reliability of ungated, high-pitch dual-source CT for coronary artery calcium (CAC) screening. One hundred and eighty-five smokers underwent a dual-source CT examination with acquisition of two sets of images during the same session: (a) ungated, high-pitch and high-temporal resolution acquisition over the entire thorax (i.e., chest CT); (b) prospectively ECG-triggered acquisition over the cardiac cavities (i.e., cardiac CT). Sensitivity and specificity of chest CT for detecting positive CAC scores were 96.4 % and 100 %, respectively. There was excellent inter-technique agreement for determining the quantitative CAC score (ICC = 0.986). The mean difference between the two techniques was 11.27, representing 1.81 % of the average of the two techniques. The inter-technique agreement for categorizing patients into the four ranks of severity was excellent (weighted kappa = 0.95; 95 % CI 0.93-0.98). The inter-technique differences for quantitative CAC scores did not correlate with BMI (r = 0.05, p = 0.575) or heart rate (r = -0.06, p = 0.95); 87.2 % of them were explained by differences at the level of the right coronary artery (RCA: 0.8718; LAD: 0.1008; LCx: 0.0139; LM: 0.0136). Ungated, high-pitch dual-source CT is a reliable imaging mode for CAC screening in the conditions of routine chest CT examinations. (orig.)
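The inter-technique mean difference quoted above (11.27, i.e. 1.81% of the average of the two techniques) is a Bland-Altman-style agreement summary; a minimal sketch of that computation, where the paired calcium scores are hypothetical illustrations, not data from the study:

```python
def mean_difference(scores_a, scores_b):
    """Bland-Altman-style agreement summary: mean paired difference, and that
    difference expressed as a percentage of the average of the two techniques."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    means = [(a + b) / 2.0 for a, b in zip(scores_a, scores_b)]
    mean_diff = sum(diffs) / len(diffs)
    grand_mean = sum(means) / len(means)
    return mean_diff, 100.0 * mean_diff / grand_mean

# Hypothetical paired CAC scores for five patients (not data from the study)
chest_ct = [0.0, 12.0, 110.0, 420.0, 900.0]
cardiac_ct = [0.0, 10.0, 104.0, 405.0, 880.0]
md, pct = mean_difference(chest_ct, cardiac_ct)
```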

  3. Design for High Performance, Low Power, and Reliable 3D Integrated Circuits

    CERN Document Server

    Lim, Sung Kyu

    2013-01-01

    This book describes the design of through-silicon-via (TSV) based three-dimensional integrated circuits.  It includes details of numerous “manufacturing-ready” GDSII-level layouts of TSV-based 3D ICs, developed with tools covered in the book. Readers will benefit from the sign-off level analysis of timing, power, signal integrity, and thermo-mechanical reliability for 3D IC designs.  Coverage also includes various design-for-manufacturability (DFM), design-for-reliability (DFR), and design-for-testability (DFT) techniques that are considered critical to the 3D IC design process. Describes design issues and solutions for high performance and low power 3D ICs, such as the pros/cons of regular and irregular placement of TSVs, Steiner routing, buffer insertion, low power 3D clock routing, power delivery network design and clock design for pre-bond testability. Discusses topics in design-for-electrical-reliability for 3D ICs, such as TSV-to-TSV coupling, current crowding at the wire-to-TSV junction and the e...

  4. Standard semiconductor packaging for high-reliability low-cost MEMS applications

    Science.gov (United States)

    Harney, Kieran P.

    2005-01-01

    Microelectronic packaging technology has evolved over the years in response to the needs of IC technology. The fundamental purpose of the package is to provide protection for the silicon chip and to provide electrical connection to the circuit board. Major change has been witnessed in packaging and today wafer level packaging technology has further revolutionized the industry. MEMS (Micro Electro Mechanical Systems) technology has created new challenges for packaging that do not exist in standard ICs. However, the fundamental objective of MEMS packaging is the same as traditional ICs, the low cost and reliable presentation of the MEMS chip to the next level interconnect. Inertial MEMS is one of the best examples of the successful commercialization of MEMS technology. The adoption of MEMS accelerometers for automotive airbag applications has created a high volume market that demands the highest reliability at low cost. The suppliers to these markets have responded by exploiting standard semiconductor packaging infrastructures. However, there are special packaging needs for MEMS that cannot be ignored. New applications for inertial MEMS devices are emerging in the consumer space that adds the imperative of small size to the need for reliability and low cost. These trends are not unique to MEMS accelerometers. For any MEMS technology to be successful the packaging must provide the basic reliability and interconnection functions, adding the least possible cost to the product. This paper will discuss the evolution of MEMS packaging in the accelerometer industry and identify the main issues that needed to be addressed to enable the successful commercialization of the technology in the automotive and consumer markets.

  5. Reliability of BOD POD Measurements Remains High After a Short-Duration Low-Carbohydrate Diet.

    Science.gov (United States)

    Greer, Beau Kjerulf; Edsall, Kathleen M; Greer, Anna E

    2016-04-01

    The purpose of the current study was to determine whether expected changes in body weight via a 3-day low-carbohydrate (LC) diet would disrupt the reliability of air displacement plethysmography measurements via BOD POD. Twenty-four subjects recorded their typical diets for 3 days before BOD POD and 7-site skinfold analyses. Subjects were matched for lean body mass and divided into LC and control (CON) groups. The LC group was given instruction intended to prevent more than 50 grams/day of carbohydrate consumption for 3 consecutive days, and the CON group replicated their previously recorded diet. Body composition measurements were repeated after the dietary intervention. Test-retest reliability measures were significant for body fat percentage in both the LC and the CON groups (rs = .993 and .965, respectively). Likewise, skinfold analysis for body fat percentage reliability was high in both groups (rs = .996 and .997, respectively). There were significant differences between 1st and 2nd BOD POD measurements for body mass (72.9 ± 13.3 vs. 72.1 ± 13.0 kg [M ± SD]) and body volume (69.0 ± 12.7 vs. 68.1 ± 12.2 L) in the LC group (p < .05), but no significant differences (p > .05) in BOD POD-determined body fat percentage, lean body mass, or fat mass between the 1st and 2nd trial in either group. Body composition measures via BOD POD and 7-site skinfolds remain reliable after 3 days of an LC diet despite significant decreases in body mass.

  6. The ATLAS Tau Trigger

    CERN Document Server

    Rados, PK; The ATLAS collaboration

    2014-01-01

    Physics processes involving tau leptons play a crucial role in understanding particle physics at the high energy frontier. The ability to efficiently trigger on events containing hadronic tau decays is therefore of particular importance to the ATLAS experiment. During the 2012 run, the Large Hadron Collider (LHC) reached instantaneous luminosities of nearly $10^{34} cm^{-2}s^{-1}$ with bunch crossings occurring every 50 ns. This resulted in a huge event rate and a high probability of overlapping interactions per bunch crossing (pile-up). With this in mind, it was necessary to design an ATLAS tau trigger system that could reduce the event rate to a manageable level, while efficiently extracting the most interesting physics events in a pile-up robust manner. In this poster the ATLAS tau trigger is described, its performance during 2012 is presented, and the outlook for LHC Run II is briefly summarized.

  7. CMS Trigger Performance

    CERN Document Server

    Donato, Silvio

    2017-01-01

    During its second run of operation (Run 2), which started in 2015, the LHC will deliver a peak instantaneous luminosity that may reach $2 \cdot 10^{34}$ cm$^{-2}$s$^{-1}$ with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realized by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has been through a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT have gone through major improvements; in particular, new appr...

  8. The Effects of a Positive Mindset Trigger Word Pre-Performance Routine on the Expressive Performance of Junior High Age Singers

    Science.gov (United States)

    Broomhead, Paul; Skidmore, Jon B.; Eggett, Dennis L.; Mills, Melissa M.

    2012-01-01

    The effects of a positive mindset trigger word intervention on the expressive performance of individual junior high singers were tested in this study. Participants (N = 155) were assigned randomly to a control group or an experimental group. Members of the experimental group participated in a 40-min intervention while members of the control group…

  9. Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system

    International Nuclear Information System (INIS)

    Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo

    2000-01-01

    Based on analog Monte Carlo simulation, statistical estimation Monte Carlo methods for unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basal element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced-transitions Monte Carlo method, direct statistical estimation Monte Carlo, and weighted statistical estimation Monte Carlo are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest computational efficiency.
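The advantage of weighted sampling for rare failures can be illustrated with a toy one-component system: failures are drawn from a biased distribution in which they are common, and each failing sample is re-weighted by the likelihood ratio so the estimator stays unbiased. The failure rate, biasing rate, and sample count below are illustrative assumptions, not values from the paper:

```python
import math
import random

def unreliability_is(lam, t, lam_bias, n, seed=1):
    """Weighted (importance-sampling) Monte Carlo estimate of the probability
    that an exponential lifetime with rate lam fails before mission time t.
    Samples are drawn with a biased rate lam_bias so failures are no longer
    rare; each failing sample is weighted by the likelihood ratio f(x)/g(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam_bias)  # draw from the biased density g
        if x < t:
            total += (lam * math.exp(-lam * x)) / (lam_bias * math.exp(-lam_bias * x))
    return total / n

# Toy system: true unreliability ~ 1e-5, far too rare for plain analog sampling
lam, t = 1e-3, 1e-2
p_true = 1.0 - math.exp(-lam * t)
p_hat = unreliability_is(lam, t, lam_bias=200.0, n=100_000)
```

With analog sampling at this failure probability, 100,000 histories would typically contain about one failure, giving an estimate with near-100% relative error; the weighted estimator concentrates every sample in the failure region instead.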

  10. Performance and Reliability of Bonded Interfaces for High-temperature Packaging: Annual Progress Report

    Energy Technology Data Exchange (ETDEWEB)

    DeVoto, Douglas J. [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-19

    As maximum device temperatures approach 200 °C in continuous operation, sintered silver materials promise to maintain bonds at these high temperatures without excessive degradation rates. A detailed characterization of the thermal performance and reliability of sintered silver materials and processes has been initiated for the next year. Future steps in crack modeling include efforts to simulate crack propagation directly using the extended finite element method (X-FEM), a numerical technique that uses the partition of unity method for modeling discontinuities such as cracks in a system.

  11. Mechanical Integrity Issues at MCM-Cs for High Reliability Applications

    International Nuclear Information System (INIS)

    Morgenstern, H.A.; Tarbutton, T.J.; Becka, G.A.; Uribe, F.; Monroe, S.; Burchett, S.

    1998-01-01

    During the qualification of a new high reliability low-temperature cofired ceramic (LTCC) multichip module (MCM), two issues relating to the electrical and mechanical integrity of the LTCC network were encountered while performing qualification testing. One was electrical opens after aging tests that were caused by cracks in the solder joints. The other was fracturing of the LTCC networks during mechanical testing. Through failure analysis, computer modeling, bend testing, and test samples, changes were identified. Upon implementation of all these changes, the modules passed testing, and the MCM was placed into production

  12. High-Speed Shaft Bearing Loads Testing and Modeling in the NREL Gearbox Reliability Collaborative: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    McNiff, B.; Guo, Y.; Keller, J.; Sethuraman, L.

    2014-12-01

    Bearing failures in the high speed output stage of the gearbox are plaguing the wind turbine industry. Accordingly, the National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) has performed an experimental and theoretical investigation of loads within these bearings. The purpose of this paper is to describe the instrumentation, calibrations, data post-processing and initial results from this testing and modeling effort. Measured HSS torque, bending, and bearing loads are related to model predictions. Of additional interest is examining if the shaft measurements can be simply related to bearing load measurements, eliminating the need for invasive modifications of the bearing races for such instrumentation.

  13. Design of power auto-regulating system's high reliability controller for 200 MW nuclear heating reactor

    International Nuclear Information System (INIS)

    An Zhencai; Liu Longzhi; Chen Yuan

    1996-01-01

    The paper mainly introduces the power auto-regulating system's high-reliability controller for the 200 MW Nuclear Heating Reactor. The controller is implemented with the high-performance 16-bit single-chip microcomputer 8097. The master controller and 10 digital samplers are organized as blocks, and the hardware of every block is identical. These blocks communicate with each other through an 8-bit bus, operate synchronously using a unified clock and reset signal, and are designed with triple redundancy. The identity-comparison principle based on two-out-of-three voting is also introduced. Testing proves that the design scheme is feasible.
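The two-out-of-three identity comparison amounts to a simple majority voter over three redundant channels; a minimal illustration of the voting logic, not the 8097 implementation:

```python
def two_out_of_three(a, b, c):
    """Two-out-of-three voter: return any value on which at least two of the
    three redundant channels agree; None signals total disagreement (fault)."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    return None
```

A single faulty channel is outvoted by the other two, so the system tolerates one arbitrary failure; total disagreement is itself a detectable fault condition.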

  14. The human factor in operation and maintenance of complex high-reliability systems

    International Nuclear Information System (INIS)

    Ryan, T.G.

    1989-01-01

    Human factors issues in probabilistic risk assessments (PRAs) of complex high-reliability systems are addressed. These PRAs influence system operation and technical support programs such as maintainability, test, and surveillance. Using the U.S. commercial nuclear power industry as the setting, the paper addresses the manner in which PRAs currently treat human performance, the state of quantification methods and source data for analyzing human performance, and the role of human factors specialists in the analysis. The paper concludes with a presentation of TALENT, an emerging concept for fully integrating broad-based human factors expertise into the PRA process. 47 refs

  15. Creating High Reliability Teams in Healthcare through In situ Simulation Training

    Directory of Open Access Journals (Sweden)

    Kristi Miller RN

    2011-07-01

    The importance of teamwork for patient safety in healthcare has been well established. However, the theory and research of healthcare teams are seriously lacking in clinical application. While conventional team theory assumes that teams are stable and leadership is constant, a growing body of evidence indicates that most healthcare teams are unstable and lack constant leadership. For healthcare organizations to reduce error and ensure patient safety, the true nature of healthcare teams must be better understood. This study presents a taxonomy of healthcare teams and the determinants of high reliability in healthcare teams based on a series of studies undertaken over a five-year period (2005–2010).

  16. Highly uniform and reliable resistive switching characteristics of a Ni/WOx/p+-Si memory device

    Science.gov (United States)

    Kim, Tae-Hyeon; Kim, Sungjun; Kim, Hyungjin; Kim, Min-Hwi; Bang, Suhyun; Cho, Seongjae; Park, Byung-Gook

    2018-02-01

    In this paper, we investigate the resistive switching behavior of a bipolar resistive random-access memory (RRAM) in a Ni/WOx/p+-Si RRAM with CMOS compatibility. Highly uniform and reliable bipolar resistive switching characteristics are observed under DC voltage sweeping, and the switching mechanism can be explained by the SCLC model. As a result, the possibility of applying the metal-insulator-silicon (MIS) structural WOx-based RRAM to Si-based 1D (diode)-1R (RRAM) or 1T (transistor)-1R (RRAM) structures is demonstrated.

  17. Electrocardiography-triggered high-resolution CT for reducing cardiac motion artifact. Evaluation of the extent of ground-glass attenuation in patients with idiopathic pulmonary fibrosis

    International Nuclear Information System (INIS)

    Nishiura, Motoko; Johkoh, Takeshi; Yamamoto, Shuji

    2007-01-01

    The aim of this study was to evaluate the reduction of cardiac motion artifact and whether the extent of ground-glass attenuation in idiopathic pulmonary fibrosis (IPF) can be accurately assessed by electrocardiography (ECG)-triggered high-resolution computed tomography (HRCT) with 0.5-s/rotation multidetector-row CT (MDCT). ECG-triggered HRCT images were acquired at the end-diastolic phase by an MDCT scanner with the following scan parameters: axial four-slice mode, 0.5 mm collimation, 0.5 s/rotation, 120 kVp, 200 mA/rotation, high-frequency algorithm, and half reconstruction. In 42 patients with IPF, both conventional HRCT (no ECG gating, full reconstruction) and ECG-triggered HRCT were performed at the same levels (10-mm intervals) with the above scan parameters. The correlation between the percent diffusing capacity of the lung for carbon monoxide (%DLCO) and the mean extent of ground-glass attenuation on both conventional HRCT and ECG-triggered HRCT was evaluated with the Spearman rank correlation coefficient test. The correlation between %DLCO and the mean extent of ground-glass attenuation on ECG-triggered HRCT (observer A: r=-0.790, P<0.0001; observer B: r=-0.710, P<0.0001) was superior to that on conventional HRCT (observer A: r=-0.395, P<0.05; observer B: r=-0.577, P=0.002) for both observers. ECG-triggered HRCT by 0.5 s/rotation MDCT can reduce cardiac motion artifact and is useful for evaluating the extent of ground-glass attenuation in IPF. (author)
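The Spearman rank correlations reported above can be computed with the textbook closed form, which is valid when there are no tied values; a minimal sketch with made-up data, not the study's measurements:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the closed form
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), valid when there are no ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

Because only the ranks enter, the coefficient captures any monotone relationship (such as the decrease of ground-glass extent with rising %DLCO) without assuming linearity.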

  18. Reliable and repeatable bonding technology for high temperature automotive power modules for electrified vehicles

    International Nuclear Information System (INIS)

    Yoon, Sang Won; Shiozaki, Koji; Glover, Michael D; Mantooth, H Alan

    2013-01-01

    This paper presents the feasibility of highly reliable and repeatable copper–tin transient liquid phase (Cu–Sn TLP) bonding as applied to die attachment in high temperature operational power modules. Electrified vehicles are attracting particular interest as eco-friendly vehicles, but their power modules are challenged by increasing power densities, which lead to high temperatures. Such high temperature operation underscores the importance of advanced bonding technology that is highly reliable (for high temperature operation) and repeatable (for fabrication of advanced structures). Cu–Sn TLP bonding is employed herein because of its high remelting temperature and desirable thermal and electrical conductivities. The bonding starts with a stack of Cu–Sn–Cu metal layers that eventually transforms to Cu–Sn alloys. As the alloys have melting temperatures (Cu3Sn: >600 °C, Cu6Sn5: >400 °C) significantly higher than the process temperature, the process can be repeated without damaging previously bonded layers. A Cu–Sn TLP bonding process was developed using thin Sn metal sheets inserted between copper layers on silicon die and direct bonded copper substrates, emulating the process used to construct automotive power modules. Bond quality is characterized using (1) proof-of-concept fabrication, (2) material identification using scanning electron microscopy and energy-dispersive x-ray spectroscopy analysis, and (3) optical analysis using optical microscopy and scanning acoustic microscopy. The feasibility of multiple-sided Cu–Sn TLP bonding is demonstrated by the absence of bondline damage in multiple test samples fabricated with double- or four-sided bonding using the TLP bonding process. (paper)

  19. Remote Sensing Applications with High Reliability in Changjiang Water Resource Management

    Science.gov (United States)

    Ma, L.; Gao, S.; Yang, A.

    2018-04-01

    Remote sensing technology has been widely used in many fields, but most applications cannot extract information with high reliability and high accuracy at large scale, especially those relying on automatic interpretation methods. We have designed an application-oriented technology system (PIR) composed of a series of accurate interpretation techniques that achieve over 85% correctness in water resource management from the viewpoints of photogrammetry and expert knowledge. The techniques comprise spatial positioning techniques from the viewpoint of photogrammetry, feature interpretation techniques from the viewpoint of expert knowledge, and rationality analysis techniques from the viewpoint of data mining. Each interpreted polygon is accurate enough to be applied to accuracy-sensitive projects, such as the Three Gorges Project and the South-to-North Water Diversion Project. In this paper, we present several remote sensing applications with high reliability in Changjiang water resource management, including water pollution investigation, illegal construction inspection, and water conservation monitoring.

  20. A Step Toward High Reliability: Implementation of a Daily Safety Brief in a Children's Hospital.

    Science.gov (United States)

    Saysana, Michele; McCaskey, Marjorie; Cox, Elaine; Thompson, Rachel; Tuttle, Lora K; Haut, Paul R

    2017-09-01

    Health care is a high-risk industry. To improve communication about daily events and begin the journey toward a high reliability organization, the Riley Hospital for Children at Indiana University Health implemented a daily safety brief. Various departments in our children's hospital were asked to participate in a daily safety brief, reporting daily events and unexpected outcomes within their scope of responsibility. Participants were surveyed before and after implementation of the safety brief about communication and awareness of events in the hospital. The length of the brief and percentage of departments reporting unexpected outcomes were measured. The analysis of the presurvey and the postsurvey showed a statistically significant improvement in the questions related to the awareness of daily events as well as communication and relationships between departments. The monthly mean length of time for the brief was 15 minutes or less. Unexpected outcomes were reported by 50% of the departments for 8 months. A daily safety brief can be successfully implemented in a children's hospital. Communication between departments and awareness of daily events were improved. Implementation of a daily safety brief is a step toward becoming a high reliability organization.

  1. REMOTE SENSING APPLICATIONS WITH HIGH RELIABILITY IN CHANGJIANG WATER RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    L. Ma

    2018-04-01

    Remote sensing technology has been widely used in many fields, but most applications cannot extract information with high reliability and high accuracy at large scale, especially those relying on automatic interpretation methods. We have designed an application-oriented technology system (PIR) composed of a series of accurate interpretation techniques that achieve over 85% correctness in water resource management from the viewpoints of photogrammetry and expert knowledge. The techniques comprise spatial positioning techniques from the viewpoint of photogrammetry, feature interpretation techniques from the viewpoint of expert knowledge, and rationality analysis techniques from the viewpoint of data mining. Each interpreted polygon is accurate enough to be applied to accuracy-sensitive projects, such as the Three Gorges Project and the South-to-North Water Diversion Project. In this paper, we present several remote sensing applications with high reliability in Changjiang water resource management, including water pollution investigation, illegal construction inspection, and water conservation monitoring.

  2. Improving Reliability of High Power Quasi-CW Laser Diode Arrays for Pumping Solid State Lasers

    Science.gov (United States)

    Amzajerdian, Farzin; Meadows, Byron L.; Baker, Nathaniel R.; Barnes, Bruce W.; Baggott, Renee S.; Lockard, George E.; Singh, Upendra N.; Kavaya, Michael J.

    2005-01-01

    Most lidar applications rely on moderate- to high-power solid state lasers to generate the required transmitted pulses. However, the reliability of solid state lasers that must operate autonomously over long periods is constrained by their laser diode pump arrays. Thermal cycling of the active regions is considered the primary reason for rapid degradation of quasi-CW high-power laser diode arrays, and excessive temperature rise is the leading suspect in premature failure. The thermal issues of laser diode arrays are even more severe for 2-micron solid state lasers, which require considerably longer pump pulses than the more commonly used pump arrays for 1-micron lasers. This paper describes several advanced packaging techniques being employed for more efficient heat removal from the active regions of the laser diode bars. Experimental results for several high-power laser diode array devices will be reported and their performance when operated at long pulse widths of about 1 ms will be described.

  3. Impact of Loss Synchronization on Reliable High Speed Networks: A Model Based Simulation

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2014-01-01

    The contemporary nature of network evolution demands simulation models that are flexible, scalable, and easily implementable. In this paper, we propose a fluid-based model for performance analysis of reliable high speed networks. In particular, this paper aims to study the dynamic relationship between congestion control algorithms and queue management schemes, in order to develop a better understanding of the causal linkages between the two. We propose a loss synchronization module which is user configurable. We validate our model through simulations under controlled settings. Also, we present a performance analysis to provide insights into two important issues concerning 10 Gbps high speed networks: (i) the impact of bottleneck buffer size on the performance of a 10 Gbps high speed network and (ii) the impact of the level of loss synchronization on link utilization-fairness tradeoffs. The practical impact of the proposed work is to provide design guidelines along with a powerful simulation tool to protocol designers and network developers.
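    The utilization effect of loss synchronization can be illustrated with a toy additive-increase/multiplicative-decrease model. This is a simplified sketch, not the fluid model of the paper; `sync_prob` is a hypothetical knob, where 1.0 means every flow backs off at each overflow (full synchronization) and small values mean only a few flows do:

    ```python
    import random

    def mean_utilization(n_flows=10, capacity=1000.0, sync_prob=1.0,
                         steps=20000, seed=2):
        """Toy AIMD model of flows sharing one bottleneck link.

        Each step every flow adds 1 rate unit; when the aggregate rate
        exceeds capacity, each flow halves with probability sync_prob.
        Returns the mean link utilization over the run."""
        rng = random.Random(seed)
        rates = [capacity / (2.0 * n_flows)] * n_flows
        used = 0.0
        for _ in range(steps):
            rates = [r + 1.0 for r in rates]          # additive increase
            if sum(rates) > capacity:                 # congestion event
                cut = [i for i in range(n_flows) if rng.random() < sync_prob]
                if not cut:                           # at least one flow backs off
                    cut = [rng.randrange(n_flows)]
                for i in cut:
                    rates[i] /= 2.0                   # multiplicative decrease
            used += min(sum(rates), capacity)
        return used / (steps * capacity)
    ```

    Running this with `sync_prob=0.1` versus `sync_prob=1.0` shows the qualitative tradeoff the paper studies: desynchronized losses keep the bottleneck fuller than fully synchronized ones.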

  4. Dynamic imaging with a triggered and intensified CCD camera system in a high-intensity neutron beam

    International Nuclear Information System (INIS)

    Vontobel, P.; Frei, G.; Brunner, J.; Gildemeister, A.E.; Engelhardt, M.

    2005-01-01

    When time-dependent processes within metallic structures are to be inspected and visualized, neutrons are well suited due to their high penetration through Al, Ag, Ti, or even steel. It then becomes possible to inspect the propagation, distribution, and evaporation of organic liquids such as lubricants, fuel, or water. The basic set-up of a suitable real-time system was implemented and tested at the radiography facility NEUTRA of PSI. The highest beam intensity there is 2×10^7 cm^-2 s^-1, which enables observation of sequences in a reasonable time and quality. The heart of the detection system is the MCP-intensified CCD camera PI-Max with a Peltier-cooled chip (1300×1340 pixels). The intensifier was used for both gating and image enhancement, whereas the information was accumulated over many single frames on the chip before readout. Although a 16-bit dynamic range is advertised by the camera manufacturer, the effective range must be less due to the inherent noise level of the intensifier. The obtained results should be seen as a starting point for meeting the requirements of car producers with respect to fuel injection, lubricant distribution, mechanical stability, and operation control. Similar inspections will be possible for all devices with a repetitive operating principle. Here, we report on two measurements dealing with the lubricant distribution in a running motorcycle motor turning at 1200 rpm. We monitored the periodic stationary movements of the piston, valves, and camshaft with a micro-channel-plate-intensified CCD camera system (PI-Max 1300RB, Princeton Instruments) triggered at exactly chosen time points

  5. Trigger and decision processors

    International Nuclear Information System (INIS)

    Franke, G.

    1980-11-01

    In recent years there have been many attempts in high energy physics to make trigger and decision processes faster and more sophisticated. This became necessary due to the steady increase in the number of sensitive detector elements in wire chambers and calorimeters, and it was made possible by the rapid development of integrated circuit technology. In this paper the present situation is reviewed. The discussion focuses on event filtering by pure software methods and, on the more hardware-related side, on microprogrammable processors as well as random-access-memory triggers. (orig.)

  6. 650-nm-band high-power and highly reliable laser diodes with a window-mirror structure

    Science.gov (United States)

    Shima, Akihiro; Hironaka, Misao; Ono, Ken-ichi; Takemi, Masayoshi; Sakamoto, Yoshifumi; Kunitsugu, Yasuhiro; Yamashita, Koji

    1998-05-01

    An active layer structure with 658 nm-emission at 25 degrees Celsius has been optimized in order to reduce the operating current of the laser diodes (LD) under high temperature condition. For improvement of the maximum output power and the reliability limited by mirror degradation, we have applied a zinc-diffused-type window-mirror structure which prevents the optical absorption at the mirror facet. As a result, the CW output power of 50 mW is obtained even at 80 degrees Celsius for a 650 micrometer-long window-mirror LD. In addition, the maximum light output power over 150 mW at 25 degrees Celsius has been realized without any optical mirror damage. In the aging tests, the LDs have been operating for over 2,500 - 5,000 hours under the CW condition of 30 - 50 mW at 60 degrees Celsius. The window-mirror structure also enables reliable 60 degree Celsius, 30 mW, CW operation of the LDs with 651 nm- emission at 25 degrees Celsius. Moreover, the maximum output power of around 100 mW even at 80 degrees Celsius and reliable 2,000-hour operation at 60 degrees Celsius, 70 mW have been realized for the first time by 659 nm LDs with a long cavity length of 900 micrometers.

  7. Reliability high cycle fatigue design of gas turbine blading system using probabilistic goodman diagram

    Energy Technology Data Exchange (ETDEWEB)

    Herman Shen, M.-H. [Ohio State Univ., Columbus, OH (United States). Dept. of Aerospace Engineering and Aviation; Nicholas, T. [MLLN, Wright-Patterson AFB, OH (United States). Air Force Research Lab.

    2001-07-01

    A framework for the probabilistic analysis of high cycle fatigue is developed. The framework will be useful to the U.S. Air Force and aeroengine manufacturers in the design against high cycle fatigue of disk or compressor components fabricated from Ti-6Al-4V under the range of loading conditions that might be encountered during service. The main idea of the framework is to characterize vibratory stresses from random input variables due to uncertainties such as crack location, loading, material properties, and manufacturing variability. The characteristics of such vibratory stresses are portrayed graphically as histograms, or probability density functions (PDF). The probability that a random variable exceeds the material capability is obtained through a failure function g(X), defined as the difference between the vibratory stress and the Goodman line or surface, such that the probability of HCF failure is P_f = P(g(X) < 0). Design can then be based on a go/no-go criterion with an assumed risk. The framework can be used to facilitate the development of design tools for the prediction of inspection schedules and reliability in aeroengine components. Such tools could lead ultimately to improved life extension schemes in aging aircraft, and more reliable methods for the design and inspection of critical components. (orig.)
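    The failure-function approach can be sketched as a small Monte Carlo calculation. All numbers below (endurance limit, ultimate strength, stress distributions) are hypothetical placeholders chosen for illustration, not values from the study:

    ```python
    import random

    def hcf_failure_probability(n_trials=100_000, seed=1):
        """Monte Carlo estimate of P_f = P(g(X) < 0), where g(X) is the
        allowable vibratory stress from the Goodman line minus the actual
        vibratory stress. All material and stress numbers are hypothetical."""
        rng = random.Random(seed)
        s_e = 500.0   # endurance limit at zero mean stress, MPa (assumed)
        s_u = 950.0   # ultimate tensile strength, MPa (assumed)
        failures = 0
        for _ in range(n_trials):
            mean_stress = rng.gauss(300.0, 30.0)   # uncertain steady stress
            vib_stress = rng.gauss(250.0, 60.0)    # uncertain vibratory stress
            # Goodman line: allowable alternating stress shrinks linearly
            # with mean stress.
            allowable = s_e * (1.0 - mean_stress / s_u)
            if allowable - vib_stress < 0.0:       # g(X) < 0 -> HCF failure
                failures += 1
        return failures / n_trials
    ```

    A design could then be accepted or rejected (go/no-go) by comparing the estimated P_f against the assumed acceptable risk.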

  8. Reliability of spring interconnects for high channel-count polyimide electrode arrays

    Science.gov (United States)

    Khan, Sharif; Ordonez, Juan Sebastian; Stieglitz, Thomas

    2018-05-01

    Active neural implants with a high channel count need robust and reliable operational assembly for the targeted environment in order to be classified as viable fully implantable systems. The discrete functionality of the electrode array and the implant electronics is vital for an intact assembly. A critical interface exists at the interconnection sites between the electrode array and the implant electronics, especially in hybrid assemblies (e.g. retinal implants) where electrodes and electronics are not on the same substrate. Since the interconnects in such assemblies cannot be hermetically sealed, reliable protection against the physiological environment is essential for delivering high insulation resistance and low diffusibility of salt ions, goals that are limited by the complexity of current assembly techniques. This work reports on a combination of spring-type interconnects on a polyimide array with silicone rubber gasket insulation for chronically active implantable systems. The spring design of the interconnects on the back end of the electrode array compensates for the uniform thickness of the sandwiched gasket during bonding in assembly and relieves the propagation of extrinsic stresses to the bulk polyimide substrate. The contact resistance of the microflex-bonded spring interconnects with the underlying metallized ceramic test vehicles, and the insulation through the gasket between adjacent contacts, were investigated against the MIL-STD-883 standard. The contact and insulation resistances remained stable under the demanding environmental test conditions.

  9. An Embedded System for Safe, Secure and Reliable Execution of High Consequence Software

    Energy Technology Data Exchange (ETDEWEB)

    MCCOY,JAMES A.

    2000-08-29

    As more complex and functionally diverse requirements are placed on high consequence embedded applications, ensuring safe and secure operation requires an execution environment that is ultra reliable from a system viewpoint. In many cases the safety and security of the system depends upon the reliable cooperation between the hardware and the software to meet real-time system throughput requirements. The selection of a microprocessor and its associated development environment for an embedded application has the most far-reaching effects on the development and production of the system than any other element in the design. The effects of this choice ripple through the remainder of the hardware design and profoundly affect the entire software development process. While state-of-the-art software engineering principles indicate that an object oriented (OO) methodology provides a superior development environment, traditional programming languages available for microprocessors targeted for deeply embedded applications do not directly support OO techniques. Furthermore, the microprocessors themselves do not typically support nor do they enforce an OO environment. This paper describes a system level approach for the design of a microprocessor intended for use in deeply embedded high consequence applications that both supports and enforces an OO execution environment.

  10. High reliability solid refractive index matching materials for field installable connections in FTTH network

    Science.gov (United States)

    Saito, Kotaro; Kihara, Mitsuru; Shimizu, Tomoya; Yoneda, Keisuke; Kurashima, Toshio

    2015-06-01

    We performed environmental and accelerated aging tests to ensure the long-term reliability of a solid-type refractive index matching material at a splice point. Stable optical characteristics were confirmed in environmental tests based on an IEC standard. In an accelerated aging test at 140 °C, much higher than the specified test temperature, the index matching material itself and spliced fibers passing through it retained steady optical characteristics. We then performed an accelerated aging test on an index matching material attached to a built-in fiber before splicing it in the worst-case condition, which differs from the normal use configuration. As a result, we confirmed that repeated insertion and removal of fiber for splicing resulted in failure. We consider that the repeated adhesion between the index matching material and the fibers causes the splice to degrade. With this result, we used the Arrhenius model to estimate a median lifetime of about 68 years in a high-temperature environment of 60 °C. Thus, solid-type index matching material at a splice point is highly reliable over long periods under normal conditions of use.
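    The Arrhenius extrapolation from the 140 °C stress test to 60 °C service can be sketched as follows. The activation energy of 0.8 eV is a hypothetical placeholder (the abstract does not quote one), so the resulting factor is illustrative only:

    ```python
    import math

    K_B = 8.617e-5  # Boltzmann constant in eV/K

    def arrhenius_acceleration_factor(ea_ev, t_use_c, t_stress_c):
        """Factor by which life at the use temperature exceeds life at the
        stress temperature, under the Arrhenius model."""
        t_use = t_use_c + 273.15
        t_stress = t_stress_c + 273.15
        return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

    # Hypothetical activation energy of 0.8 eV for the degradation mechanism.
    af = arrhenius_acceleration_factor(0.8, 60.0, 140.0)
    # Median life at 60 C = (median life observed at 140 C) * af.
    ```

    With a known test-to-failure time at 140 °C and a mechanism-specific activation energy, the service-temperature lifetime follows directly from this factor.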

  11. Development of Highly Reliable Power and Communication System for Essential Instruments Under Severe Accidents in NPP

    Directory of Open Access Journals (Sweden)

    Bo Hwan Choi

    2016-10-01

    This article proposes a highly reliable power and communication system that guarantees the protection of essential instruments in a nuclear power plant under a severe accident. Both power and communication lines are established with not only conventional wired channels, but also the proposed wireless channels for emergency reserve. An inductive power transfer system is selected due to its robust power transfer characteristics under high temperature, high pressure, and highly humid environments with a large amount of scattered debris after a severe accident. A thermal insulation box and a glass-fiber reinforced plastic box are proposed to protect the essential instruments, including vulnerable electronic circuits, from extremely high temperatures of up to 627°C and pressure of up to 5 bar. The proposed wireless power and communication system is experimentally verified by an inductive power transfer system prototype having a dipole coil structure and prototype Zigbee modules over a 7-m distance, where both the thermal insulation box and the glass-fiber reinforced plastic box are fabricated and tested using a high-temperature chamber. Moreover, an experiment on the effects of a high radiation environment on various electronic devices is conducted based on the radiation test having a maximum accumulated dose of 27 Mrad.

  12. Development of highly reliable power and communication system for essential instruments under severe accidents in NPP

    International Nuclear Information System (INIS)

    Choi, Bo Hwan; Jang, Gi Chan; Shin, Sung Min; Kang, Hyun Gook; Rim, Chun Taek; Lee, Soo Ill

    2016-01-01

    This article proposes a highly reliable power and communication system that guarantees the protection of essential instruments in a nuclear power plant under a severe accident. Both power and communication lines are established with not only conventional wired channels, but also the proposed wireless channels for emergency reserve. An inductive power transfer system is selected due to its robust power transfer characteristics under high temperature, high pressure, and highly humid environments with a large amount of scattered debris after a severe accident. A thermal insulation box and a glass-fiber reinforced plastic box are proposed to protect the essential instruments, including vulnerable electronic circuits, from extremely high temperatures of up to 627 °C and pressure of up to 5 bar. The proposed wireless power and communication system is experimentally verified by an inductive power transfer system prototype having a dipole coil structure and prototype Zigbee modules over a 7-m distance, where both the thermal insulation box and the glass-fiber reinforced plastic box are fabricated and tested using a high-temperature chamber. Moreover, an experiment on the effects of a high radiation environment on various electronic devices is conducted based on the radiation test having a maximum accumulated dose of 27 Mrad.

  13. Development of highly reliable power and communication system for essential instruments under severe accidents in NPP

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Bo Hwan; Jang, Gi Chan; Shin, Sung Min; Kang, Hyun Gook; Rim, Chun Taek [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Lee, Soo Ill [I and C Group, Korea Hydro and Nuclear Power Co., Ltd, Central Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    This article proposes a highly reliable power and communication system that guarantees the protection of essential instruments in a nuclear power plant under a severe accident. Both power and communication lines are established with not only conventional wired channels, but also the proposed wireless channels for emergency reserve. An inductive power transfer system is selected due to its robust power transfer characteristics under high temperature, high pressure, and highly humid environments with a large amount of scattered debris after a severe accident. A thermal insulation box and a glass-fiber reinforced plastic box are proposed to protect the essential instruments, including vulnerable electronic circuits, from extremely high temperatures of up to 627 °C and pressure of up to 5 bar. The proposed wireless power and communication system is experimentally verified by an inductive power transfer system prototype having a dipole coil structure and prototype Zigbee modules over a 7-m distance, where both the thermal insulation box and the glass-fiber reinforced plastic box are fabricated and tested using a high-temperature chamber. Moreover, an experiment on the effects of a high radiation environment on various electronic devices is conducted based on the radiation test having a maximum accumulated dose of 27 Mrad.

  14. High-power and highly reliable 638-nm band BA-LD for CW operation

    Science.gov (United States)

    Nishida, Takehiro; Kuramoto, Kyosuke; Abe, Shinji; Kusunoki, Masatsugu; Miyashita, Motoharu; Yagi, Tetsuya

    2018-02-01

    High-power laser diodes (LDs) are in strong demand as light sources for display applications. In multiple spatial-light-modulator-type projectors and liquid crystal displays, the light source LDs are operated under CW conditions. A high-power 638-nm band broad-area LD for CW operation was newly developed. The LD consists of two stripes, each 75 μm wide, to reduce both the optical power density at the front facet and the threshold current. Newly improved epitaxial technology was also applied to the LD to suppress electron overflow from the active layer. The LD showed superior output characteristics, such as an output of 1.77 W at a case temperature of 55 °C with a wall-plug efficiency (WPE) of 23%, an improvement of 40% over the current product. The peak WPE at 25 °C reached 40.6% at an output power of 2.37 W CW, the highest reported in the world.

  15. Flexible trigger menu implementation on the Global Trigger for the CMS Level-1 trigger upgrade

    Science.gov (United States)

    MATSUSHITA, Takashi; CMS Collaboration

    2017-10-01

    The CMS experiment at the Large Hadron Collider (LHC) continued to explore physics at the high-energy frontier in 2016. The integrated luminosity delivered by the LHC in 2016 was 41 fb^-1, with a peak luminosity of 1.5 × 10^34 cm^-2 s^-1 and a peak mean pile-up of about 50, all exceeding the initial estimates for 2016. The CMS experiment has upgraded its hardware-based Level-1 trigger system to maintain its performance for new physics searches and precision measurements at high luminosities. The Global Trigger is the final step of the CMS Level-1 trigger and implements a trigger menu, a set of selection requirements applied to the final list of objects from the calorimeter and muon triggers, to reduce the 40 MHz collision rate to 100 kHz. The Global Trigger has been upgraded with state-of-the-art FPGA processors on Advanced Mezzanine Cards with optical links running at 10 Gb/s in a MicroTCA crate. The powerful processing resources of the upgraded system enable the implementation of more algorithms at a time than previously possible, allowing CMS to be more flexible in how it handles the available trigger bandwidth. Algorithms for a trigger menu, including topological requirements on multiple objects, can be realised in the Global Trigger using the newly developed trigger menu specification grammar. Analysis-like trigger algorithms can be represented in an intuitive manner, and the algorithms are translated to corresponding VHDL code blocks to build a firmware. The grammar can be extended in the future as needs arise. The experience of implementing trigger menus on the upgraded Global Trigger system will be presented.
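    The idea of a menu line as a boolean combination of object requirements can be mimicked in a few lines of Python. This is an illustrative sketch of the concept, not the actual trigger menu grammar or its VHDL translation; the object names and thresholds below are hypothetical:

    ```python
    # A Level-1 event is modelled as lists of trigger objects per type; a
    # menu algorithm is a predicate over the event, mirroring how a line
    # such as "MU22 OR (EG30 AND JET100)" becomes a boolean firmware block.

    def single_object(obj_type, pt_cut):
        """Seed requirement: at least one object of obj_type above pt_cut."""
        def algo(event):
            return any(obj["pt"] >= pt_cut for obj in event.get(obj_type, []))
        return algo

    def AND(a, b):
        return lambda event: a(event) and b(event)

    def OR(a, b):
        return lambda event: a(event) or b(event)

    # Hypothetical menu line: MU22 OR (EG30 AND JET100).
    mu22 = single_object("muon", 22.0)
    eg30 = single_object("egamma", 30.0)
    jet100 = single_object("jet", 100.0)
    menu_line = OR(mu22, AND(eg30, jet100))

    event = {"muon": [{"pt": 5.0}],
             "egamma": [{"pt": 34.0}],
             "jet": [{"pt": 120.0}]}
    # menu_line(event) -> True, via the EG30 AND JET100 branch
    ```

    In the real system the parsed menu expression is compiled to VHDL rather than evaluated in software, but the boolean structure is the same.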

  16. High-throughput screening of animal urine samples: It is fast but is it also reliable?

    Science.gov (United States)

    Kaufmann, Anton

    2016-05-01

    Advanced analytical technologies like ultra-high-performance liquid chromatography coupled to high-resolution mass spectrometry can be used for veterinary drug screening of animal urine. The technique is sufficiently robust and reliable to detect veterinary drugs in urine samples of animals where the maximum residue limit of these compounds in organs such as muscle, kidney, or liver has been exceeded. The limitations and possibilities of the technique are discussed. The most critical point is the variability of the drug concentration ratio between tissue and urine. Ways to manage false positives and false negatives are discussed. The capability to confirm findings and the possibility of semi-targeted analysis are also addressed. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Test results of reliable and very high capillary multi-evaporators / condenser loop

    Energy Technology Data Exchange (ETDEWEB)

    Van Oost, S; Dubois, M; Bekaert, G [Societe Anonyme Belge de Construction Aeronautique - SABCA (Belgium)

    1997-12-31

    The paper presents the results of various SABCA activities in the field of two-phase heat transport systems. These results are based on a critical review and analysis of existing two-phase loops and of future loop needs in space applications. The research and development of a high-capillary wick (capillary pressure up to 38 000 Pa) are described. These activities have led to the development of a reliable high-performance capillary loop concept (HPCPL), which is discussed in detail. Several loop configurations with mono- or multi-evaporators have been ground tested. The presented results of various tests clearly show the viability of this concept for future applications. Proposed flight demonstrations as well as potential applications conclude this paper. (authors) 7 refs.

  18. Test results of reliable and very high capillary multi-evaporators / condenser loop

    Energy Technology Data Exchange (ETDEWEB)

    Van Oost, S.; Dubois, M.; Bekaert, G. [Societe Anonyme Belge de Construction Aeronautique - SABCA (Belgium)

    1996-12-31

    The paper presents the results of various SABCA activities in the field of two-phase heat transport systems. These results are based on a critical review and analysis of existing two-phase loops and of future loop needs in space applications. The research and development of a high-capillary wick (capillary pressure up to 38 000 Pa) are described. These activities have led to the development of a reliable high-performance capillary loop concept (HPCPL), which is discussed in detail. Several loop configurations with mono- or multi-evaporators have been ground tested. The presented results of various tests clearly show the viability of this concept for future applications. Proposed flight demonstrations as well as potential applications conclude this paper. (authors) 7 refs.

  19. Reliability studies of a high-power proton accelerator for accelerator-driven system applications for nuclear waste transmutation

    International Nuclear Information System (INIS)

    Burgazzi, Luciano; Pierini, Paolo

    2007-01-01

    The main effort of the present study is to analyze the availability and reliability of a high-performance linac (linear accelerator) conceived for Accelerator-Driven System (ADS) purposes and to suggest recommendations, in order both to meet the high operability goals and to satisfy the safety requirements dictated by the reactor system. A Reliability Block Diagram (RBD) approach has been adopted for system modelling, according to the present level of definition of the design: component failure modes are assessed in terms of Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR), and reliability and availability figures are derived by applying current reliability algorithms. The lack of a well-established component database has been identified as the main issue in the accelerator reliability assessment. The results, which reflect the conservative character of the study, show a large margin for improvement in the predicted accelerator reliability and availability figures. The paper outlines a viable path towards enhancing accelerator reliability and availability and delineates the most appropriate strategies. The improvement in the reliability characteristics along this path is shown as well.
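    The RBD arithmetic behind such availability figures can be sketched compactly. The MTBF/MTTR numbers and the section layout below are invented placeholders, not values from the study:

    ```python
    from functools import reduce

    def availability(mtbf_h, mttr_h):
        """Steady-state availability of a repairable component."""
        return mtbf_h / (mtbf_h + mttr_h)

    def series(*avails):
        """Series RBD: every block must be up."""
        return reduce(lambda acc, a: acc * a, avails, 1.0)

    def parallel(*avails):
        """Parallel RBD: at least one redundant block must be up."""
        return 1.0 - reduce(lambda acc, a: acc * (1.0 - a), avails, 1.0)

    # Hypothetical linac section: a redundant pair of RF sources feeding a
    # cavity, in series with a magnet power supply (all numbers assumed).
    rf = availability(2000.0, 10.0)      # MTBF 2000 h, MTTR 10 h
    cavity = availability(10000.0, 48.0)
    magnet_ps = availability(5000.0, 8.0)
    section = series(parallel(rf, rf), cavity, magnet_ps)
    ```

    Repeating this composition over every subsystem yields the overall accelerator availability; redundancy (the parallel block) is the usual lever for meeting ADS operability goals.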

  20. Reliability studies of a high-power proton accelerator for accelerator-driven system applications for nuclear waste transmutation

    Energy Technology Data Exchange (ETDEWEB)

    Burgazzi, Luciano [ENEA-Centro Ricerche 'Ezio Clementel', Advanced Physics Technology Division, Via Martiri di Monte Sole, 4, 40129 Bologna (Italy)]. E-mail: burgazzi@bologna.enea.it; Pierini, Paolo [INFN-Sezione di Milano, Laboratorio Acceleratori e Superconduttivita Applicata, Via Fratelli Cervi 201, I-20090 Segrate (MI) (Italy)

    2007-04-15

    The main effort of the present study is to analyze the availability and reliability of a high-performance linac (linear accelerator) conceived for Accelerator-Driven System (ADS) purposes and to suggest recommendations, in order both to meet the high operability goals and to satisfy the safety requirements dictated by the reactor system. A Reliability Block Diagram (RBD) approach has been adopted for system modelling, according to the present level of definition of the design: component failure modes are assessed in terms of Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR), and reliability and availability figures are derived by applying current reliability algorithms. The lack of a well-established component database has been identified as the main issue in the accelerator reliability assessment. The results, which reflect the conservative character of the study, show a large margin for improvement in the predicted accelerator reliability and availability figures. The paper outlines a viable path towards enhancing accelerator reliability and availability and delineates the most appropriate strategies. The improvement in the reliability characteristics along this path is shown as well.

  1. The ATLAS hadronic tau trigger

    CERN Document Server

    Black, C; The ATLAS collaboration

    2012-01-01

    With the high luminosities of proton-proton collisions achieved at the LHC, triggering strategies have become more important than ever for physics analysis. The naive inclusive single-tau-lepton triggers now suffer from severe rate limitations. To allow for a large program of physics analyses with taus, the development of topological triggers that combine tau signatures with other measured quantities in the event is required. These combined triggers open many opportunities to study new physics beyond the Standard Model and to search for the Standard Model Higgs. We present the status and performance of the hadronic tau trigger in ATLAS. We demonstrate that the ATLAS tau trigger ran remarkably well over 2011, and how the lessons learned from 2011 led to numerous improvements in preparation for the 2012 run. These improvements include the introduction of tau selection criteria that are robust against varying pileup scenarios, and the implementation of multivariate selection techniques in the tau trigger.

  2. The ATLAS hadronic tau trigger

    CERN Document Server

    Black, C; The ATLAS collaboration

    2012-01-01

    With the high luminosities of proton-proton collisions achieved at the LHC, the strategies for triggering have become more important than ever for physics analysis. The naïve inclusive single tau lepton triggers now suffer from severe rate limitations. To allow for a large program of physics analyses with taus, the development of topological triggers that combine tau signatures with other measured quantities in the event is required. These combined triggers open many opportunities to study new physics beyond the Standard Model and to search for the Standard Model Higgs. We present the status and performance of the hadronic tau trigger in ATLAS. We demonstrate that the ATLAS tau trigger ran remarkably well over 2011, and how the lessons learned from 2011 led to numerous improvements in the preparation of the 2012 run. These improvements include the introduction of tau selection criteria that are robust against varying pileup scenarios, and the implementation of multivariate selection techniques in the tau tri...

  3. Utility and reliability of non-invasive muscle function tests in high-fat-fed mice.

    Science.gov (United States)

    Martinez-Huenchullan, Sergio F; McLennan, Susan V; Ban, Linda A; Morsch, Marco; Twigg, Stephen M; Tam, Charmaine S

    2017-07-01

    What is the central question of this study? Non-invasive muscle function tests have not been validated for use in the study of muscle performance in high-fat-fed mice. What is the main finding and its importance? This study shows that grip strength, hang wire and four-limb hanging tests are able to discriminate the muscle performance between chow-fed and high-fat-fed mice at different time points, with grip strength being reliable after 5, 10 and 20 weeks of dietary intervention. Non-invasive tests are commonly used for assessing muscle function in animal models. The value of these tests in obesity, a condition where muscle strength is reduced, is unclear. We investigated the utility of three non-invasive muscle function tests, namely grip strength (GS), hang wire (HW) and four-limb hanging (FLH), in C57BL/6 mice fed chow (chow group, n = 48) or a high-fat diet (HFD group, n = 48) for 20 weeks. Muscle function tests were performed at 5, 10 and 20 weeks. After 10 and 20 weeks, HFD mice had significantly reduced GS (in newtons; mean ± SD: 10 weeks chow, 1.89 ± 0.1 and HFD, 1.79 ± 0.1; 20 weeks chow, 1.99 ± 0.1 and HFD, 1.75 ± 0.1), FLH [in seconds per gram body weight; median (interquartile range): 10 weeks chow, 2552 (1337-4964) and HFD, 1230 (749-1994); 20 weeks chow, 2048 (765-3864) and HFD, 1036 (717-1855)] and HW reaches [n; median (interquartile range): 10 weeks chow, 4 (2-5) and HFD, 2 (1-3); 20 weeks chow, 3 (1-5) and HFD, 1 (0-2)] and higher falls [n; median (interquartile range): 10 weeks chow, 0 (0-2) and HFD, 3 (1-7); 20 weeks chow, 1 (0-4) and HFD, 8 (5-10)]. Grip strength was reliable in both dietary groups [intraclass correlation coefficient (ICC) = 0.5-0.8]. These tests are valuable and reliable tools for assessment of muscle strength and function in high-fat-fed mice. © 2017 The Authors. Experimental Physiology © 2017 The Physiological Society.

  4. High resolution MR imaging of the fetal heart with cardiac triggering: a feasibility study in the sheep fetus

    Energy Technology Data Exchange (ETDEWEB)

    Yamamura, Jin; Frisch, Michael; Adam, Gerhard; Wedegaertner, Ulrike [University Hospital Hamburg-Eppendorf, Department of Diagnostic and Interventional Radiology, Hamburg (Germany); Schnackenburg, Bernhard; Kooijmann, Hendrik [Philips Medical Systems, Hamburg (Germany); Hecher, Kurt [University Hospital Hamburg-Eppendorf, Department of Obstetrics and Fetal Medicine, Hamburg (Germany)

    2009-10-15

    The aim of this study was to perform fetal cardiac magnetic resonance imaging (MRI) with triggering of the fetal heart beat in utero in a sheep model. All experimental protocols were reviewed, and the use of ewes and fetuses was approved by the local animal protection authorities. Images of the hearts of six pregnant ewes were obtained using a 1.5-T MR system (Philips Medical Systems, Best, Netherlands). The fetuses were chronically instrumented with a carotid catheter to measure the fetal heart frequency for cardiac triggering. Pulse-wave-triggered, breath-hold cine MRI with steady-state free precession (SSFP) was achieved in short-axis, two-, four- and three-chamber views. The left ventricular volume, and thus the function, was measured from the short axis. The fetal heart frequencies ranged between 130 and 160 bpm. The mitral, tricuspid, aortic and pulmonary valves could be clearly observed. The foramen ovale could be visualized. Myocardial contraction was shown in cine sequences. The average blood volume at end systole was 3.4±0.2 ml (±SD). The average volume at end diastole was 5.2±0.2 ml; thus, the stroke volumes of the left ventricle in systole were between 1.7 and 1.9 ml, with ejection fractions of 38.6% and 39%, respectively. Pulse-wave-triggered cardiac MRI of the fetal heart allowed evaluation of anatomical structures and functional information. This feasibility study demonstrates the applicability of MRI for future evaluation of fetuses with complex congenital heart defects, once a noninvasive method has been developed to perform fetal cardiac triggering. (orig.)

  5. Seven Reliability Indices for High-Stakes Decision Making: Description, Selection, and Simple Calculation

    Science.gov (United States)

    Smith, Stacey L.; Vannest, Kimberly J.; Davis, John L.

    2011-01-01

    The reliability of data is a critical issue in decision-making for practitioners in the school. Percent Agreement and Cohen's kappa are the two most widely reported indices of inter-rater reliability; however, a recent Monte Carlo study on the reliability of multi-category scales found other indices to be more trustworthy given the type of data…

  6. Reliable high-power diode lasers: thermo-mechanical fatigue aspects

    Science.gov (United States)

    Klumel, Genady; Gridish, Yaakov; Szafranek, Igor; Karni, Yoram

    2006-02-01

    High power water-cooled diode lasers are finding increasing demand in biomedical, cosmetic and industrial applications, where repetitive cw (continuous wave) and pulsed cw operation modes are required. When operating in such modes, the lasers experience numerous complete thermal cycles between the "cold" heat sink temperature and the "hot" temperature typical of thermally equilibrated cw operation. It is clearly demonstrated that the main failure mechanism directly linked to repetitive cw operation is thermo-mechanical fatigue of the solder joints adjacent to the laser bars, especially when "soft" solders are used. Analyses of the bonding interfaces were carried out using scanning electron microscopy. It was observed that intermetallic compounds, formed already during the bonding process, lead to solder fatigue on both the p- and n-side of the laser bar. Fatigue failure of solder joints in repetitive cw operation reduces the useful lifetime of the stacks to hundreds of hours, compared with the more than 10,000-hour lifetimes typically demonstrated in commonly adopted non-stop cw reliability testing programs. It is shown that proper selection of package materials and solders, careful design of fatigue-sensitive parts and burn-in screening in the hard pulse operation mode allow a considerable increase in lifetime and reliability, without compromising the device efficiency, optical power density and compactness.

  7. A Highly Reliable and Cost-Efficient Multi-Sensor System for Land Vehicle Positioning

    Directory of Open Access Journals (Sweden)

    Xu Li

    2016-05-01

    Full Text Available In this paper, we propose a novel positioning solution for land vehicles which is highly reliable and cost-efficient. The proposed positioning system fuses information from a MEMS-based reduced inertial sensor system (RISS) consisting of one vertical gyroscope and two horizontal accelerometers, a low-cost GPS, and supplementary sensors and sources. First, the pitch and roll angles are accurately estimated based on a vehicle kinematic model. Meanwhile, the negative effect of the uncertain nonlinear drift of MEMS inertial sensors is eliminated by an H∞ filter. Further, a distributed dual H∞ filtering (DDHF) mechanism is adopted to address the uncertain nonlinear drift of the MEMS-RISS and make full use of the supplementary sensors and sources. The DDHF is composed of a main H∞ filter (MHF) and an auxiliary H∞ filter (AHF). Finally, a generalized regression neural network (GRNN) module with good approximation capability is specially designed for the MEMS-RISS. A hybrid methodology combining the GRNN module and the AHF is utilized to compensate for RISS position errors during GPS outages. To verify the effectiveness of the proposed solution, road-test experiments with various scenarios were performed. The experimental results illustrate that the proposed system can achieve accurate and reliable positioning for land vehicles.
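    As a rough illustration of the GRNN module described above, a generalized regression neural network is essentially a Gaussian-kernel weighted average of training targets. The sketch below is generic; the training points, smoothing factor sigma and the toy speed-to-position-error mapping are invented, not taken from the paper:

    ```python
    import math

    def grnn_predict(train_x, train_y, x, sigma=0.5):
        """GRNN (Nadaraya-Watson) estimate: targets weighted by a
        Gaussian kernel of the distance to each training input."""
        weights = [math.exp(-sum((xi - xj) ** 2 for xi, xj in zip(p, x))
                            / (2.0 * sigma ** 2))
                   for p in train_x]
        return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

    # Toy 1-D mapping: predict a position-error-like quantity from one
    # input feature during a simulated GPS outage.
    train_x = [(0.0,), (1.0,), (2.0,), (3.0,)]
    train_y = [0.0, 0.8, 1.9, 3.1]
    print(grnn_predict(train_x, train_y, (1.5,)))  # smooth interpolation near 1.35
    ```

    A GRNN needs no iterative training, which is one reason it suits on-line error compensation: adding a sample just appends to the training lists.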

  8. Highly Reliable Organizations in the Onshore Natural Gas Sector: An Assessment of Current Practices, Regulatory Frameworks, and Select Case Studies

    Energy Technology Data Exchange (ETDEWEB)

    Logan, Jeffrey S. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Paranhos, Elizabeth [Energy Innovation Partners, Seattle, WA (United States); Kozak, Tracy G. [Energy Innovation Partners, Seattle, WA (United States); Boyd, William [Univ. of Colorado, Boulder, CO (United States)

    2017-07-31

    This study focuses on onshore natural gas operations and examines the extent to which oil and gas firms have embraced certain organizational characteristics that lead to 'high reliability' - understood here as strong safety and reliability records over extended periods of operation. The key questions that motivated this study include whether onshore oil and gas firms engaged in exploration and production (E&P) and midstream (i.e., natural gas transmission and storage) are implementing practices characteristic of high reliability organizations (HROs) and the extent to which any such practices are being driven by industry innovations and standards and/or regulatory requirements.

  9. Reliability of supply of switchgear for auxiliary low voltage in substations extra high voltage to high voltage

    Directory of Open Access Journals (Sweden)

    Perić Dragoslav M.

    2015-01-01

    Full Text Available Switchgear for auxiliary low voltage in substations (SS) stepping down extra high voltage (EHV) to high voltage (HV) - SS EHV/HV - is of special interest for the functioning of these important SS, as it supplies the protection system and other vital functions of the SS. The article addresses several characteristic examples involving MV lines with varying degrees of independence of supply, and the possible application of direct EHV/LV transformation through special voltage transformers. Auxiliary sources such as inverters and diesel generators, which have limited power and expensive energy, are also used to supply the auxiliary low-voltage switchgear. Corresponding reliability indices are calculated for all examples, including the mean expected annual engagement of diesel generators. The applicability of particular auxiliary low-voltage switchgear solutions for SS EHV/HV is analyzed as well, taking into account their reliability, feasibility and cost-effectiveness, with particular attention to the application of direct EHV/LV transformation for supplying auxiliary low voltage in both new and existing SS EHV/HV.

  10. Quantification of the occurrence of common-mode faults in highly reliable protective systems

    International Nuclear Information System (INIS)

    Aitken, A.

    1978-10-01

    The report first covers the investigation, definition and classification of common-mode failure (CMF), based on an extensive study of the nature of CMF. A new classification of CMF is proposed, based on possible causes of failure. This is used as a basis for analysing data from reported failures of reactor safety systems and aircraft systems. Design and maintenance errors are shown to be the predominant causes of CMF. The estimated CMF rates for the highly reliable nuclear power plant automatic protection system (APS) and for the emergency core cooling system (ECCS) are 2.8×10⁻² CMF/sub-system-year and 3.3×10⁻² CMF/sub-system-year, respectively. For comparison, the data from aircraft accident records show a CMF rate for the total flight control system (FCS) of 2.1×10⁻⁵ CMF/sub-system-year. The analysis has laid the groundwork for work relating CMF modelling to defences.
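    One common way to quantify how common-mode failure defeats redundancy is the beta-factor model, in which a fraction beta of each channel's failures is assumed to strike all channels at once. The sketch below is a generic illustration with invented numbers; it does not reproduce the report's own model or data:

    ```python
    # Beta-factor common-mode failure (CMF) sketch for an n-fold
    # redundant protection channel.

    def redundant_unavailability(q_channel: float, n: int, beta: float) -> float:
        """Per-channel failure probability q splits into an independent
        part q_ind = (1 - beta) * q, which redundancy suppresses as
        q_ind**n, and a common-mode part q_cmf = beta * q, which
        defeats all n channels together."""
        q_ind = (1.0 - beta) * q_channel
        q_cmf = beta * q_channel
        return q_ind ** n + q_cmf

    q = 1e-3  # illustrative per-channel failure probability on demand
    print(redundant_unavailability(q, n=2, beta=0.1))  # CMF term dominates
    ```

    Even a modest beta makes the common-mode term dominate the independent term, which is why the report's focus on design and maintenance errors (classic CMF causes) matters more than adding further redundancy.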

  11. Application of high efficiency and reliable 3D-designed integral shrouded blades to nuclear turbines

    International Nuclear Information System (INIS)

    Watanabe, Eiichiro; Ohyama, Hiroharu; Tashiro, Hikaru; Sugitani, Toshiro; Kurosawa, Masaru

    1998-01-01

    Mitsubishi Heavy Industries, Ltd. has recently developed new blades for nuclear turbines in order to achieve higher efficiency and higher reliability. The 3D aerodynamic design for 41-inch and 46-inch blades, their one-piece structural design (integral shrouded blades: ISB), and the verification test results using a model steam turbine are described in this paper. The predicted efficiency and lower vibratory stress have been verified. Based on these 60 Hz ISB, the 50 Hz ISB series is under development using 'the law of similarity', without changing their thermodynamic performance and mechanical stress levels. Our 3D-designed reaction blades, which are used for the high-pressure and low-pressure upstream stages, are also briefly mentioned. (author)

  12. Enertech 2-kW high-reliability wind system. Phase II. Fabrication and testing

    Energy Technology Data Exchange (ETDEWEB)

    Cordes, J A; Johnson, B A

    1981-06-01

    A high-reliability wind machine rated for 2 kW in a 9 m/s wind has been developed. This report summarizes activities centered on the fabrication and testing of prototypes of the wind machine. The test results verified that the wind machine met the power output specification and that the variable-pitch rotor effectively controlled the rotor speed for wind speeds up to 50 mph. Three prototypes of the wind machine were shipped to the Rocky Flats test center between September and November of 1979. Work was also performed to reduce the start-up wind speed; the start-up wind speed at the Enertech facility has been reduced to 4.5 m/s.

  13. Functional components for a design strategy: Hot cell shielding in the high reliability safeguards methodology

    Energy Technology Data Exchange (ETDEWEB)

    Borrelli, R.A., E-mail: rborrelli@uidaho.edu

    2016-08-15

    The high reliability safeguards (HRS) methodology has been established for the safeguardability of advanced nuclear energy systems (NESs). HRS is being developed in order to integrate safety, security, and safeguards concerns, while also optimizing these with operational goals for facilities that handle special nuclear material (SNM). Currently, a commercial pyroprocessing facility is used as an example system. One of the goals in the HRS methodology is to apply intrinsic features of the system to a design strategy. This current study investigates the thickness of the hot cell walls that could adequately shield processed materials. This is an important design consideration that carries implications regarding the formation of material balance areas, the location of key measurement points, and material flow in the facility.

  14. Reliability and Maintainability Analysis of a High Air Pressure Compressor Facility

    Science.gov (United States)

    Safie, Fayssal M.; Ring, Robert W.; Cole, Stuart K.

    2013-01-01

    This paper discusses a Reliability, Availability, and Maintainability (RAM) independent assessment conducted to support the refurbishment of the Compressor Station at the NASA Langley Research Center (LaRC). The paper discusses the methodologies used by the assessment team to derive the repair-by-replacement (RR) strategies to improve the reliability and availability of the Compressor Station (Ref. 1). This includes a RAPTOR simulation model that was used to generate the statistical data analysis needed to derive a 15-year investment plan to support the refurbishment of the facility. To summarize, the study results clearly indicate that the air compressors are well past their design life. The major failures of the compressors indicate that significant latent failure causes are present. Given the occurrence of these high-cost failures following compressor overhauls, future major failures should be anticipated if the compressors are not replaced. Given the results from the RR analysis, the study team recommended a compressor replacement strategy. Based on the data analysis, the RR strategy will lead to sustainable operations through significant improvements in reliability, availability, and the probability of meeting the air demand, with acceptable investment cost that should translate, in the long run, into major cost savings. For example, the probability of meeting air demand improved from 79.7 percent for the Base Case to 97.3 percent. Expressed in terms of a reduction in the probability of failing to meet demand (1 in 5 days to 1 in 37 days), the improvement is about 700 percent. Similarly, compressor replacement improved the operational availability of the facility from 97.5 percent to 99.8 percent. Expressed in terms of a reduction in system unavailability (1 in 40 to 1 in 500), the improvement is better than 1000 percent (an order of magnitude improvement). It is worth noting that the methodologies, tools, and techniques used in the LaRC study can be used to evaluate
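    The "1 in N" figures quoted in the abstract follow directly from the stated probabilities; a quick arithmetic check:

    ```python
    # Convert a success probability p into "fails roughly 1 in N" form:
    # N = 1 / (1 - p).  Values are the ones quoted in the abstract.

    def one_in(p_success: float) -> float:
        return 1.0 / (1.0 - p_success)

    # Probability of meeting air demand
    print(one_in(0.797))  # Base Case: fails about 1 in 5 days
    print(one_in(0.973))  # after replacement: about 1 in 37 days

    # Operational availability
    print(one_in(0.975))  # unavailability about 1 in 40
    print(one_in(0.998))  # unavailability about 1 in 500
    ```

    The "about 700 percent" and "order of magnitude" improvements are the ratios of these N values (37/5 ≈ 7.4 and 500/40 = 12.5).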

  15. Triggering and guiding high-voltage large-scale leader discharges with sub-joule ultrashort laser pulses

    International Nuclear Information System (INIS)

    Pepin, H.; Comtois, D.; Vidal, F.; Chien, C.Y.; Desparois, A.; Johnston, T.W.; Kieffer, J.C.; La Fontaine, B.; Martin, F.; Rizk, F.A.M.; Potvin, C.; Couture, P.; Mercure, H.P.; Bondiou-Clergerie, A.; Lalande, P.; Gallimberti, I.

    2001-01-01

    The triggering and guiding of leader discharges using a plasma channel created by a sub-joule ultrashort laser pulse have been studied in a megavolt large-scale electrode configuration (3-7 m rod-plane air gap). By focusing the laser close to the positive rod electrode it has been possible, with a 400 mJ pulse, to trigger and guide leaders over distances of 3 m, to lower the leader inception voltage by 50%, and to increase the leader velocity by a factor of 10. The dynamics of the breakdown discharges with and without the laser pulse have been analyzed by means of a streak camera and of electric field and current probes. Numerical simulations have successfully reproduced many of the experimental results obtained with and without the presence of the laser plasma channel

  16. Development of high-reliable real-time communication network protocol for SMART

    Energy Technology Data Exchange (ETDEWEB)

    Song, Ki Sang; Kim, Young Sik [Korea National University of Education, Chongwon (Korea); No, Hee Chon [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-04-01

    In this research, we first define protocol subsets for the SMART (System-integrated Modular Advanced Reactor) communication network, based on the SMART MMIS transmission delay and traffic requirements and the network protocol functions of the seven OSI (Open System Interconnection) layers. Current industrial LAN protocols are also analyzed and the applicability of commercialized protocols is checked. For the suitability test, we applied the approximate SMART data traffic and the maximum allowable transmission delay requirement. From the simulation results, we conclude that IEEE 802.5 and FDDI, an ANSI standard, are the most suitable for SMART. We further analyzed the FDDI and token ring protocols for the SMART and nuclear plant network environment, including IEEE 802.4, IEEE 802.5, and ARCnet. The most suitable protocol for SMART is FDDI, and the FDDI MAC and RMT protocol specifications have been verified with LOTOS; the verification results show that FDDI MAC and RMT satisfy reachability and liveness and exhibit neither deadlock nor livelock. Therefore, we conclude that FDDI MAC and RMT are highly reliable protocols for the SMART MMIS network. We then consider the stacking fault of the IEEE 802.5 token ring protocol and propose a fault-tolerant MAM (Modified Active Monitor) protocol. The simulation results show that the MAM protocol improves the lower-priority traffic service rate when a stacking fault occurs. Therefore, the proposed MAM protocol can be applied to the SMART communication network for high-reliability and hard real-time communication in the data acquisition and inter-channel networks. (author). 37 refs., 79 figs., 39 tabs.

  17. Short-Term and Medium-Term Reliability Evaluation for Power Systems With High Penetration of Wind Power

    DEFF Research Database (Denmark)

    Ding, Yi; Singh, Chanan; Goel, Lalit

    2014-01-01

    The expanding share of fluctuating and less predictable wind power generation can introduce complexities in power system reliability evaluation and management. This entails a need for the system operator to assess the system status more accurately for securing real-time balancing. The existing reliability evaluation techniques for power systems are well developed. These techniques are more focused on steady-state (time-independent) reliability evaluation and have been successfully applied in power system planning and expansion. In the operational phase, however, they may be too rough an approximation of the time-varying behavior of power systems with high penetration of wind power. This paper proposes a time-varying reliability assessment technique. Time-varying reliability models for wind farms, conventional generating units, and rapid start-up generating units are developed and represented…
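    A basic building block for such time-varying assessment is the instantaneous availability of a two-state repairable unit. The sketch below uses the standard two-state Markov result with illustrative failure and repair rates, not the paper's models:

    ```python
    import math

    def availability_t(lam: float, mu: float, t: float) -> float:
        """Instantaneous availability of a unit with failure rate lam and
        repair rate mu, starting in the 'up' state at t = 0:
            A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu) * t)
        A(t) decays from 1.0 to the steady-state value mu/(lam+mu)."""
        s = lam + mu
        return mu / s + (lam / s) * math.exp(-s * t)

    lam, mu = 0.01, 0.5  # illustrative per-hour failure and repair rates
    print(availability_t(lam, mu, 0.0))   # 1.0 at t = 0
    print(availability_t(lam, mu, 1e6))   # steady state mu/(lam+mu) ~ 0.980
    ```

    A time-varying assessment composes many such A(t) curves (and wind-dependent capacity states) over the operating horizon instead of using only the steady-state limit.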

  18. Highly Conductive and Reliable Copper-Filled Isotropically Conductive Adhesives Using Organic Acids for Oxidation Prevention

    Science.gov (United States)

    Chen, Wenjun; Deng, Dunying; Cheng, Yuanrong; Xiao, Fei

    2015-07-01

    The easy oxidation of copper is one critical obstacle to high-performance copper-filled isotropically conductive adhesives (ICAs). In this paper, a facile method to prepare highly reliable, highly conductive, and low-cost ICAs is reported. The copper fillers were treated by organic acids for oxidation prevention. Compared with ICA filled with untreated copper flakes, the ICA filled with copper flakes treated by different organic acids exhibited much lower bulk resistivity. The lowest bulk resistivity achieved was 4.5 × 10⁻⁵ Ω·cm, which is comparable to that of commercially available Ag-filled ICA. After 500 h of 85°C/85% relative humidity (RH) aging, the treated ICAs showed quite stable bulk resistivity and relatively stable contact resistance. Through analyzing the results of x-ray diffraction, x-ray photoelectron spectroscopy, and thermogravimetric analysis, we found that, with the assistance of organic acids, the treated copper flakes exhibited resistance to oxidation, thus guaranteeing good performance.

  19. A reliable and consistent production technology for high volume compacted graphite iron castings

    Directory of Open Access Journals (Sweden)

    Liu Jincheng

    2014-07-01

    Full Text Available The demands for improved engine performance, fuel economy, durability, and lower emissions provide a continual challenge for engine designers. The use of Compacted Graphite Iron (CGI) has been established for successful high-volume series production in the passenger vehicle, commercial vehicle and industrial power sectors over the last decade. The increased demand for CGI engine components provides new opportunities for the cast iron foundry industry to establish efficient and robust CGI volume production processes, in China and globally. The production window for stable CGI is narrow and constantly moving; therefore, no single-step addition of magnesium alloy and inoculant can ensure a reliable and consistent production process for complicated CGI engine castings. The present paper introduces the SinterCast thermal analysis process control system, which provides for the consistent production of CGI with low nodularity and reduced porosity, without risking the formation of flake graphite. The technology is currently being used in high-volume Chinese foundry production. With proper process control technology, the Chinese foundry industry can develop complicated, high-demand CGI engine castings.

  20. Dating of zircon from high-grade rocks: Which is the most reliable method?

    Directory of Open Access Journals (Sweden)

    Alfred Kröner

    2014-07-01

    Full Text Available Magmatic zircon in high-grade metamorphic rocks is often characterized by complex textures, as revealed by cathodoluminescence (CL), that result from multiple episodes of recrystallization, overgrowth, Pb-loss and modifications through fluid-induced disturbances of the crystal structure and the original U-Th-Pb isotopic systematics. Many of these features can be recognized in 2-dimensional CL images, and isotopic analysis of such domains using a high-resolution ion microprobe, with only shallow penetration of the zircon surface, may be able to reconstruct much of the magmatic and complex post-magmatic history of such grains. In particular, it is generally possible to find original magmatic domains yielding concordant ages. In contrast, destructive techniques such as LA-ICP-MS consume a large volume, leave a deep crater in the target grain, and often sample heterogeneous domains that are not visible and thus often yield discordant results which are difficult to interpret. We provide examples of complex magmatic zircon from a southern Indian granulite terrane where SHRIMP II and LA-ICP-MS analyses are compared. The SHRIMP data are shown to be more precise and reliable, and we caution against the use of LA-ICP-MS in deciphering the chronology of complex zircons from high-grade terranes.

  1. Highly Reliable Power and Communication System for Essential Instruments under a Severe Accident of NPPs

    International Nuclear Information System (INIS)

    Yoo, S. J.; Choi, B. H.; Jung, S. Y.; Rim, Chun T.

    2013-01-01

    In this paper, three survivable strategies to overcome the problems listed above are proposed for the essential instruments under the severe accident of NPPs. First, wire/wireless multi power systems are adopted to the essential instruments for continuous power supply. Second, wire/wireless communication systems are proposed for reliable transmission of measuring information among instruments and operators. Third, a physical protection system such as a harness and a heat isolation box is introduced to ensure operable conditions for the proposed systems. In this paper, a highly reliable strategy, which consists of wire/wireless multi power and communication systems and a physical protection system, is proposed to ensure the survival of the essential instruments under harsh external conditions. The wire/wireless multi power and communication systems are designed to transfer power and data in spite of the failure of conventional wired systems. The physical protection system provides operable environments to the instruments. Therefore, the proposed system can be considered as a candidate of practical and urgent remedy for NPPs under the severe accident. After the Fukushima nuclear accident, survivability of essential instruments has been emphasized for immediate and accurate response. The essential instruments can measure environment conditions such as temperature, pressure, radioactivity and corium behavior inside nuclear power plants (NPPs) under a severe accident. Human access to the inside of NPPs is restricted because of hazardous conditions such as high radioactivity, high temperature and high pressure. Thus, monitoring the inside of NPPs is necessary for avoiding damage from the severe accident. Even though there were a number of instruments in Fukushima Daiichi NPP, they failed to obtain exact monitoring information. According to the details of the Fukushima nuclear accident, the following problems can be counted as strong candidates for these instrument failures

  2. Highly Reliable Power and Communication System for Essential Instruments under a Severe Accident of NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, S. J.; Choi, B. H.; Jung, S. Y.; Rim, Chun T. [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2013-10-15

    In this paper, three survivable strategies to overcome the problems listed above are proposed for the essential instruments under the severe accident of NPPs. First, wire/wireless multi power systems are adopted to the essential instruments for continuous power supply. Second, wire/wireless communication systems are proposed for reliable transmission of measuring information among instruments and operators. Third, a physical protection system such as a harness and a heat isolation box is introduced to ensure operable conditions for the proposed systems. In this paper, a highly reliable strategy, which consists of wire/wireless multi power and communication systems and a physical protection system, is proposed to ensure the survival of the essential instruments under harsh external conditions. The wire/wireless multi power and communication systems are designed to transfer power and data in spite of the failure of conventional wired systems. The physical protection system provides operable environments to the instruments. Therefore, the proposed system can be considered as a candidate of practical and urgent remedy for NPPs under the severe accident. After the Fukushima nuclear accident, survivability of essential instruments has been emphasized for immediate and accurate response. The essential instruments can measure environment conditions such as temperature, pressure, radioactivity and corium behavior inside nuclear power plants (NPPs) under a severe accident. Human access to the inside of NPPs is restricted because of hazardous conditions such as high radioactivity, high temperature and high pressure. Thus, monitoring the inside of NPPs is necessary for avoiding damage from the severe accident. Even though there were a number of instruments in Fukushima Daiichi NPP, they failed to obtain exact monitoring information. According to the details of the Fukushima nuclear accident, the following problems can be counted as strong candidates for these instrument failures

  3. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book begins with the question "What is reliability?", covering the origin of reliability problems, the definition of reliability, and the uses of reliability. It also deals with probability and the calculation of reliability; the reliability function and failure rate; probability distributions in reliability; estimation of MTBF; stochastic processes; downtime, maintainability and availability; breakdown maintenance and preventive maintenance; design for reliability; reliability prediction and statistics; reliability testing; reliability data; and the design and management of reliability.

  4. The LPS trigger system

    International Nuclear Information System (INIS)

    Benotto, F.; Costa, M.; Staiano, A.; Zampieri, A.; Bollito, M.; Isoardi, P.; Pernigotti, E.; Sacchi, R.; Trapani, P.P.; Larsen, H.; Massam, T.; Nemoz, C.

    1996-03-01

    The Leading Proton Spectrometer (LPS) has been equipped with microstrip silicon detectors specially designed to trigger on events with high values of x_L = |p'_p| / |p_p| ≥ 0.95, where |p'_p| and |p_p| are the momenta of the outgoing and incoming protons, respectively. The LPS First Level Trigger can provide a clear tag for very high momentum protons in a kinematical region never explored before. In the following we discuss the physics motivation for tagging very forward protons and present a detailed description of the detector design, the front-end electronics, the readout electronics, the Monte Carlo simulation and some preliminary results from the 1995 data taking. (orig.)
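    The x_L selection described above is simply a ratio of momentum magnitudes tested against a threshold. A minimal sketch of the cut, with an illustrative beam momentum that is not a parameter quoted in the record:

```python
import math

def magnitude(p):
    """Magnitude of a 3-momentum given as an (px, py, pz) tuple."""
    return math.sqrt(sum(c * c for c in p))

def lps_trigger(p_out, p_in, x_l_min=0.95):
    """Accept the event if x_L = |p'_p| / |p_p| is at or above the threshold."""
    return magnitude(p_out) / magnitude(p_in) >= x_l_min

# Outgoing proton keeping 97% of an (assumed) 820 GeV beam momentum: accepted.
print(lps_trigger((0.0, 0.0, 795.4), (0.0, 0.0, 820.0)))  # True
```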

  5. Development of high reliability dual redundant FADEC. Koshinraisei nijukei FADEC no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    Endo, M [Ishikawajima-Harima Heavy Industries, Co. Ltd., Tokyo (Japan)

    1994-05-01

    The control unit of a gas turbine for an aircraft jet engine must deliver the overall performance demanded by the pilot's thrust command under all flying conditions of the engine, and high reliability is required for flight safety. The present paper describes a developed dual redundant FADEC (full authority digital electronic control) unit and the high-density mounting technology for the electronic devices the FADEC requires. The FADEC unit is composed of two hardware systems of identical structure, each with its own microprocessor. Either system can control the engine on its own, while the two are connected through a digital bypass that exchanges the input/output data and the signals needed for dual-system operation. For operational confirmation of the FADEC unit, its control characteristics were inspected by intentionally injecting faults during engine acceleration/deceleration and other transient operations. The control system could be switched over without loss of engine control characteristics. 9 figs., 2 tabs.

  6. Reliable discrimination of 10 ungulate species using high resolution melting analysis of faecal DNA.

    Directory of Open Access Journals (Sweden)

    Ana Ramón-Laca

    Full Text Available Identifying species occupying an area is essential for many ecological and conservation studies. Faecal DNA is a potentially powerful method for identifying cryptic mammalian species. In New Zealand, 10 species of ungulate (Order: Artiodactyla have established wild populations and are managed as pests because of their impacts on native ecosystems. However, identifying the ungulate species present within a management area based on pellet morphology is unreliable. We present a method that enables reliable identification of 10 ungulate species (red deer, sika deer, rusa deer, fallow deer, sambar deer, white-tailed deer, Himalayan tahr, Alpine chamois, feral sheep, and feral goat from swabs of faecal pellets. A high resolution melting (HRM assay, targeting a fragment of the 12S rRNA gene, was developed. Species-specific primers were designed and combined in a multiplex PCR resulting in fragments of different length and therefore different melting behaviour for each species. The method was developed using tissue from each of the 10 species, and was validated in blind trials. Our protocol enabled species to be determined for 94% of faecal pellet swabs collected during routine monitoring by the New Zealand Department of Conservation. Our HRM method enables high-throughput and cost-effective species identification from low DNA template samples, and could readily be adapted to discriminate other mammalian species from faecal DNA.
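    The species call in an HRM assay of this kind amounts to matching an observed amplicon melting temperature against species-specific references. A toy sketch of that matching step — the Tm values, tolerance window, and species subset below are invented for illustration and are not the published assay values:

```python
# Hypothetical reference melting temperatures (deg C) -- illustrative only.
REFERENCE_TM = {
    "red deer": 78.2,
    "fallow deer": 80.1,
    "feral goat": 82.5,
    "feral sheep": 84.0,
}

def call_species(observed_tm, references=REFERENCE_TM, tolerance=0.5):
    """Assign the sample to the species whose amplicon Tm is closest,
    or return None if no reference lies within the tolerance window."""
    species, tm = min(references.items(), key=lambda kv: abs(kv[1] - observed_tm))
    return species if abs(tm - observed_tm) <= tolerance else None

print(call_species(80.3))   # 'fallow deer'
print(call_species(76.0))   # None: no reference amplicon close enough
```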

  7. Towards high-reliability organising in healthcare: a strategy for building organisational capacity.

    Science.gov (United States)

    Aboumatar, Hanan J; Weaver, Sallie J; Rees, Dianne; Rosen, Michael A; Sawyer, Melinda D; Pronovost, Peter J

    2017-08-01

    In a high-reliability organisation (HRO), safety and quality (SQ) is an organisational priority, and all workforce members are engaged, continuously learning and improving their work. To build organisational capacity for SQ work, we have developed a role-tailored capacity-building framework that we are currently employing at the Johns Hopkins Armstrong Institute for Patient Safety and Quality as part of an organisational strategy towards HRO. This framework considers organisation-wide competencies for SQ that includes all staff and faculty and is integrated into a broader organisation-wide operating management system for improving quality. In this framework, achieving safe, high-quality care is connected to healthcare workforce preparedness. Capacity-building efforts are tailored to the needs of distinct groups within the workforce that fall within three categories: (1) front-line providers and staff, (2) managers and local improvement personnel and (3) SQ leaders and experts. In this paper we describe this framework, our implementation efforts to date, challenges met and lessons learnt.

  8. Design and Analysis of Transport Protocols for Reliable High-Speed Communications

    NARCIS (Netherlands)

    Oláh, A.

    1997-01-01

    The design and analysis of transport protocols for reliable communications constitutes the topic of this dissertation. These transport protocols guarantee the sequenced and complete delivery of user data over networks which may lose, duplicate and reorder packets. Reliable transport services are

  9. High inter-tester reliability of the new mobility score in patients with hip fracture

    DEFF Research Database (Denmark)

    Kristensen, M.T.; Bandholm, T.; Foss, N.B.

    2008-01-01

    OBJECTIVE: To assess the inter-tester reliability of the New Mobility Score in patients with acute hip fracture. DESIGN: An inter-tester reliability study. SUBJECTS: Forty-eight consecutive patients with acute hip fracture at a median age of 84 (interquartile range, 76-89) years; 40 admitted from...

  10. Reliability of high mobility SiGe channel MOSFETs for future CMOS applications

    CERN Document Server

    Franco, Jacopo; Groeseneken, Guido

    2014-01-01

    Due to the ever increasing electric fields in scaled CMOS devices, reliability is becoming a showstopper for further scaled technology nodes. Although several groups have already demonstrated functional Si channel devices with aggressively scaled Equivalent Oxide Thickness (EOT) down to 5Å, a 10 year reliable device operation cannot be guaranteed anymore due to severe Negative Bias Temperature Instability. This book focuses on the reliability of the novel (Si)Ge channel quantum well pMOSFET technology. This technology is being considered for possible implementation in next CMOS technology nodes, thanks to its benefit in terms of carrier mobility and device threshold voltage tuning. We observe that it also opens a degree of freedom for device reliability optimization. By properly tuning the device gate stack, sufficiently reliable ultra-thin EOT devices with a 10 years lifetime at operating conditions are demonstrated. The extensive experimental datasets collected on a variety of processed 300mm wafers and pr...

  11. Design and testing of the high speed signal densely populated ATLAS calorimeter trigger board dedicate to jet identification

    CERN Document Server

    Vieira De Souza, Julio; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment has planned a major upgrade in view of the enhanced luminosity of the beam delivered by the Large Hadron Collider (LHC) in 2021. As part of this, the Level-1 trigger based on calorimeter data will be upgraded to exploit fine-granularity readout using a new system of three Feature Extractors, each of which uses different physics objects for the trigger selection. This contribution focusses on the jet Feature EXtractor (jFEX) prototype. A data volume of up to 2 TB/s has to be processed to provide jet identification (including large-area jets) and measurements of global variables within a latency budget of a few hundred nanoseconds. Such requirements translate into the use of large Field Programmable Gate Arrays (FPGAs) with the largest number of Multi Gigabit Transceivers (MGTs) available on the market. The jFEX board prototype hosts four large FPGAs from the Xilinx Ultrascale family with 120 MGTs each, connected to 24 opto-electrical devices, resulting in a densely populated hi...

  12. TileCal Trigger Tower studies considering additional segmentation on the ATLAS upgrade for high luminosity at LHC

    CERN Document Server

    March, L; The ATLAS collaboration

    2013-01-01

    The Tile Calorimeter (TileCal) is the hadronic calorimeter covering the most central region of the ATLAS experiment at the LHC. The TileCal readout consists of about 10000 channels and provides compact information, called trigger towers (around 2000 signals), to the ATLAS first-level online event selection system. The ATLAS upgrade program is divided into three phases: Phase 0 occurs during 2013-2014 and prepares the LHC to reach peak luminosities of 10^34 cm^-2 s^-1; Phase 1, foreseen for 2018-2019, prepares the LHC for peak luminosities up to 2-3 x 10^34 cm^-2 s^-1, corresponding to 55 to 80 interactions per bunch-crossing with a 25 ns bunch interval; and Phase 2 is foreseen for 2022-2023, whereafter the peak luminosity will reach 5-7 x 10^34 cm^-2 s^-1 (HL-LHC). The ATLAS experiment has been operating very well since 2009, providing a large amount of data for physics analysis. The online event selection system (trigger system) was designed to reject the huge amount of background noise generated at the LHC and is one of the main systems re...

  13. Quasi-Optical Network Analyzers and High-Reliability RF MEMS Switched Capacitors

    Science.gov (United States)

    Grichener, Alexander

    The thesis first presents a 2-port quasi-optical scalar network analyzer consisting of a transmitter and receiver both built in planar technology. The network analyzer is based on a Schottky-diode mixer integrated inside a planar antenna and fed differentially by a CPW transmission line. The antenna is placed on an extended hemispherical high-resistivity silicon substrate lens. The LO signal is swept from 3-5 GHz and high-order harmonic mixing in both up- and down-conversion mode is used to realize the 15-50 GHz RF bandwidth. The network analyzer resulted in a dynamic range of greater than 40 dB and was successfully used to measure a frequency selective surface with a second-order bandpass response. Furthermore, the system was built with circuits and components for easy scaling to millimeter-wave frequencies, which is the primary motivation for this work. The application areas for a millimeter and submillimeter-wave network analyzer include material characterization and art diagnostics. The second project presents several RF MEMS switched capacitors designed for high-reliability operation and suitable for tunable filters and reconfigurable networks. The first switched capacitor resulted in a digital capacitance ratio of 5 and an analog capacitance ratio of 5-9. The analog tuning of the down-state capacitance is enhanced by a positive vertical stress gradient in the beam, making it ideal for applications that require precision tuning. A thick electroplated beam resulted in Q greater than 100 at C- to X-band frequencies, and power handling of 0.6-1.1 W. The design also minimized charging in the dielectric, resulting in excellent reliability performance even under hot-switched and high power (1 W) conditions. The second switched capacitor was designed without any dielectric to minimize charging. The device was hot-switched at 1 W of RF power for greater than 11 billion cycles with virtually no change in the C-V curve. The final project presents a 7-channel

  14. Fast neutrons: Inexpensive and reliable tool to investigate high-LET particle radiobiology

    International Nuclear Information System (INIS)

    Gueulette, J.; Slabbert, J.P.; Bischoff, P.; Denis, J.M.; Wambersie, A.; Jones, D.

    2010-01-01

    Radiation therapy with carbon ions, as well as missions into outer space, has boosted interest in high-LET particle radiobiology. Optimization of treatments in accordance with technical developments, as well as the radioprotection of cosmonauts during long missions, requires that research in these domains continue, and therefore suitable radiation fields are needed. Fast neutrons and carbon ions exhibit comparable LET values and similar radiobiological properties; consequently, the findings obtained with each radiation quality can be shared to benefit knowledge in all concerned domains. The p(66+Be) neutron therapy facility of iThemba LABS (South Africa) and the p(65)+Be neutron facility of Louvain-la-Neuve (Belgium) are in constant use for radiobiological research for clinical applications with fast neutrons. These beams - which comply with all physical and technical requirements for clinical applications - are now fully reliable, easy to use and frequently accessible for radiobiological investigations. These facilities thus provide unique opportunities to undertake radiobiological experimentation, especially for investigations that require long irradiation times and/or fractionated treatments.

  15. Implementation of checklists in health care; learning from high-reliability organisations

    Directory of Open Access Journals (Sweden)

    Lossius Hans

    2011-10-01

    Full Text Available Background: Checklists are common in some medical fields, including surgery, intensive care and emergency medicine. They can be an effective tool to improve care processes and reduce mortality and morbidity. Despite the seemingly rapid acceptance and dissemination of the checklist, there are few studies describing the actual process of developing and implementing such tools in health care. The aim of this study is to explore the experiences from checklist development and implementation in a group of non-medical, high-reliability organisations (HROs). Method: A qualitative study based on key informant interviews and field visits, followed by a Delphi approach. Eight informants, each with 10-30 years of checklist experience, were recruited from six different HROs. Results: The interviews generated 84 assertions and recommendations for checklist implementation. To achieve checklist acceptance and compliance, there must be a predefined need for which a checklist is considered a well-suited solution. The end-users (the "sharp end") are the key stakeholders throughout the development and implementation process. Proximity and ownership must be assured through a thorough and wise process. All informants underlined the importance of short, self-developed, and operationally-suited checklists. Simulation is a valuable and widely used method for training, revision, and validation. Conclusion: Checklists have been a cornerstone of safety management in HROs for nearly a century, and are becoming increasingly popular in medicine. Acceptance and compliance are crucial for checklist implementation in health care. Experiences from HROs may provide valuable input to checklist implementation in healthcare.

  16. Nordic perspectives on safety management in high reliability organizations: Theory and applications

    International Nuclear Information System (INIS)

    Svenson, Ola; Salo, I.; Sjerve, A.B.; Reiman, T.; Oedewald, P.

    2006-04-01

    The chapters in this volume are written on a stand-alone basis, meaning that the chapters can be read in any order. The first 4 chapters focus on theory and method in general, with some applied examples illustrating the methods and theories. Chapters 5 and 6 are about safety management in the aviation industry, with some additional information about incident reporting in the aviation industry and the health care sector. Chapters 7 through 9 cover safety management with applied examples from the nuclear power industry and with considerable validity for safety management in any industry. Chapters 10 through 12 cover generic safety issues with examples from the oil industry, and chapter 13 presents issues related to organizations with different internal organizational structures. Although many of the chapters use a specific industry to illustrate safety management, the messages in all the chapters are of importance for safety management in any high-reliability industry or risky activity. The interested reader is also referred to, e.g., a document by an international NEA group (SEGHOF), which is about to publish a state-of-the-art report on Systematic Approaches to Safety Management (cf., CSNI/NEA/SEGHOF, home page: www.nea.fr). (au)

  17. Nordic perspectives on safety management in high reliability organizations: Theory and applications

    Energy Technology Data Exchange (ETDEWEB)

    Svenson, Ola; Salo, I; Sjerve, A B; Reiman, T; Oedewald, P [Stockholm Univ. (Sweden)

    2006-04-15

    The chapters in this volume are written on a stand-alone basis, meaning that the chapters can be read in any order. The first 4 chapters focus on theory and method in general, with some applied examples illustrating the methods and theories. Chapters 5 and 6 are about safety management in the aviation industry, with some additional information about incident reporting in the aviation industry and the health care sector. Chapters 7 through 9 cover safety management with applied examples from the nuclear power industry and with considerable validity for safety management in any industry. Chapters 10 through 12 cover generic safety issues with examples from the oil industry, and chapter 13 presents issues related to organizations with different internal organizational structures. Although many of the chapters use a specific industry to illustrate safety management, the messages in all the chapters are of importance for safety management in any high-reliability industry or risky activity. The interested reader is also referred to, e.g., a document by an international NEA group (SEGHOF), which is about to publish a state-of-the-art report on Systematic Approaches to Safety Management (cf., CSNI/NEA/SEGHOF, home page: www.nea.fr). (au)

  18. Feasibility assessment of optical technologies for reliable high capacity feeder links

    Science.gov (United States)

    Witternigg, Norbert; Schönhuber, Michael; Leitgeb, Erich; Plank, Thomas

    2013-08-01

    Space telecom scenarios like data relay satellite and broadband/broadcast service providers require reliable feeder links with high bandwidth/data rate for the communication between ground station and satellite. Free space optical communication (FSOC) is an attractive alternative to microwave links, improving performance by offering abundant bandwidth at small apertures of the optical terminals. At the same time Near-Earth communication by FSOC avoids interference with other services and is free of regulatory issues. The drawback however is the impairment by the laser propagation through the atmosphere at optical wavelengths. Also to be considered are questions of eye safety for ground personnel and aviation. In this paper we assess the user requirements for typical space telecom scenarios and compare these requirements with solutions using optical data links through the atmosphere. We suggest a site diversity scheme with a number of ground stations and a switching scheme using two optical terminals on-board the satellite. Considering the technology trade-offs between four different optical wavelengths we recommend the future use of 1.5 μm laser technology and calculate a link budget for an atmospheric condition of light haze on the optical path. By comparing link budgets we show an outlook to the future potential use of 10 μm laser technology.
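    A link budget of the kind mentioned above combines transmit power, terminal gains, free-space loss and atmospheric attenuation in decibels. A hedged sketch under assumed numbers — the function, terminal gains, and haze attenuation are illustrative, not the paper's 1.5 μm budget:

```python
import math

def fso_link_margin(p_tx_dbm, tx_gain_db, rx_gain_db,
                    range_km, wavelength_um, atten_db_per_km, rx_sens_dbm):
    """Link margin in dB: received power minus receiver sensitivity.
    Free-space loss follows 20*log10(4*pi*R/lambda)."""
    r_m = range_km * 1e3
    lam_m = wavelength_um * 1e-6
    fsl_db = 20.0 * math.log10(4.0 * math.pi * r_m / lam_m)
    atmos_db = atten_db_per_km * range_km   # e.g. light haze along the path
    p_rx_dbm = p_tx_dbm + tx_gain_db + rx_gain_db - fsl_db - atmos_db
    return p_rx_dbm - rx_sens_dbm

# Illustrative 5 km test hop: 1 W laser (30 dBm), 1.5 um, assumed 110 dB
# telescope gains, ~3 dB/km light haze, -40 dBm receiver sensitivity.
print(fso_link_margin(30.0, 110.0, 110.0, 5.0, 1.5, 3.0, -40.0))
```

A positive margin means the link closes under the assumed conditions; a real feeder-link budget would also account for pointing loss, scintillation fade margin and slant-path geometry.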

  19. Lifetime validation of high-reliability (>30,000hr) rotary cryocoolers for specific customer profiles

    Science.gov (United States)

    Cauquil, Jean-Marc; Seguineau, Cédric; Vasse, Christophe; Raynal, Gaetan; Benschop, Tonny

    2018-05-01

    Cooler reliability is a major performance requirement from customers, especially for 24h/24h applications, which are a growing market. Thales has built a reliability policy based on accelerated ageing and tests to establish robust knowledge of acceleration factors. The current trend seems to prove that the RM2 mean time to failure is now higher than 30,000 hr. Even with accelerated ageing, reliability growth becomes hard to manage for such large figures. The paper focuses on these figures and comments on the robustness of such a method when projections over 30,000 hr of MTTF are needed.
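    MTTF projections from accelerated ageing of this sort typically rest on an Arrhenius acceleration factor between stress and use temperatures. A sketch under assumed values — the activation energy and temperatures are illustrative, not Thales parameters:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between a stress and a use temperature."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

def projected_mttf(mttf_stress_hr, acceleration_factor):
    """Use-condition MTTF inferred from an accelerated-test MTTF."""
    return mttf_stress_hr * acceleration_factor

af = arrhenius_af(0.7, 40.0, 85.0)   # assumed Ea = 0.7 eV, 40 C use, 85 C stress
print(af)                            # acceleration factor, > 1
print(projected_mttf(2000.0, af))    # projected use-condition MTTF in hours
```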

  20. The challenge of building large area, high precision small-strip Thin Gap Trigger Chambers for the upgrade of the ATLAS experiment

    CERN Document Server

    Maleev, Victor; The ATLAS collaboration

    2015-01-01

    The current innermost stations of the ATLAS muon endcap system must be upgraded in 2018 and 2019 to retain the good precision tracking and trigger capabilities in the high background environment expected with the upcoming luminosity increase of the LHC. Large area small-strip Thin Gap Chambers (sTGC), up to 2 m2 in size and totaling an active area of 1200 m2, will be employed for fast and precise triggering. The precision reconstruction of tracks requires a spatial resolution of about 100 μm to allow the Level-1 trigger track segments to be reconstructed with an angular resolution of 1 mrad. The upgraded detector will consist of eight layers each of Micromegas and sTGC detectors, together forming the ATLAS New Small Wheels. The position of each strip must be known with an accuracy of 30 µm along the precision coordinate and 80 µm along the beam. On such large area detectors, the mechanical precision is a key point and must be controlled and monitored all along the process of construction and integrati...

  1. The Challenge of Building Large Area, High Precision Small-Strip Thin Gap Trigger Chambers for the Upgrade of the ATLAS Experiment

    CERN Document Server

    Maleev, Victor; The ATLAS collaboration

    2015-01-01

    The current innermost stations of the ATLAS muon end-cap system must be upgraded in 2018 and 2019 to retain the good precision tracking and trigger capabilities in the high background environment expected with the upcoming luminosity increase of the LHC. Large area small-strip Thin Gap Chambers (sTGC), up to 2 $m^2$ in size and totaling an active area of 1200 $m^2$, will be employed for fast and precise triggering. The precision reconstruction of tracks requires a spatial resolution of about 100 $\\mu m$, while the Level-1 trigger track segments need to be reconstructed with an angular resolution of 1 mrad. The upgraded detector will consist of eight layers each of Micromegas and sTGC detectors, together forming the ATLAS New Small Wheels. The position of each strip must be known with an accuracy of 40 $\\mu m$ along the precision coordinate and 80 $\\mu m$ along the beam. On such large area detectors, the mechanical precision is a key point and must be controlled and monitored all along the process of cons...

  2. ELM mitigation with pellet ELM triggering and implications for PFCs and plasma performance in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Baylor, Larry R. [ORNL; Lang, P. [EURATOM / UKAEA, Abingdon, UK; Allen, S. L. [Lawrence Livermore National Laboratory (LLNL); Lasnier, C. J. [Lawrence Livermore National Laboratory (LLNL); Meitner, Steven J. [ORNL; Combs, Stephen Kirk [ORNL; Commaux, Nicolas JC [ORNL; Loarte, A. [ITER Organization, Cadarache, France; Jernigan, Thomas C. [ORNL

    2015-08-01

    The triggering of rapid small edge localized modes (ELMs) by high frequency pellet injection has been proposed as a method to prevent large naturally occurring ELMs that can erode the ITER plasma facing components (PFCs). Deuterium pellet injection has been used to successfully demonstrate the on-demand triggering of ELMs at much higher rates and with much smaller intensity than natural ELMs. The proposed hypothesis for the triggering mechanism is that the local pressure perturbation resulting from reheating of the pellet cloud can exceed the local high-n ballooning mode threshold where the pellet is injected. Nonlinear MHD simulations of pellet ELM triggering show destabilization of high-n ballooning modes by such a local pressure perturbation. A review of the recent pellet ELM triggering results from ASDEX Upgrade (AUG), DIII-D, and JET reveals that a number of uncertainties about this ELM mitigation technique still remain. These include the heat flux impact pattern on the divertor and wall from pellet-triggered and natural ELMs, the necessary pellet size and injection location to reliably trigger ELMs, and the level of fueling to be expected from ELM-triggering pellets and synergy with larger fueling pellets. The implications of these issues for pellet ELM mitigation in ITER and its impact on the PFCs are presented along with the design features of the pellet injection system for ITER.

  3. The Design of High Reliability Magnetic Bearing Systems for Helium Cooled Reactor Machinery

    International Nuclear Information System (INIS)

    Swann, M.; Davies, N.; Jayawant, R.; Leung, R.; Shultz, R.; Gao, R.; Guo, Z.

    2014-01-01

    The requirements for magnetic bearing equipped machinery used in high temperature, helium cooled, graphite moderated reactor applications present a set of design considerations that are unlike most other applications of magnetic bearing technology in large industrial rotating equipment, for example as used in the oil and gas or other power generation applications. In particular, the bearings are typically immersed directly in the process gas in order to take advantage of the design simplicity that comes about from the elimination of ancillary lubrication and cooling systems for bearings and seals. Such duty means that the bearings will usually see high temperatures and pressures in service and will also typically be subject to graphite particulate and attendant radioactive contamination over time. In addition, unlike most industrial applications, seismic loading events become of paramount importance for the magnetic bearings system, both for actuators and controls. The auxiliary bearing design requirements, in particular, become especially demanding when one considers that the whole mechanical structure of the magnetic bearing system is located inside an inaccessible pressure vessel that should be rarely, if ever, disassembled over the service life of the power plant. Lastly, many machinery designs for gas cooled nuclear power plants utilize vertical orientation. This circumstance presents its own unique requirements for the machinery dynamics and bearing loads. Based on the authors’ experience with machine design and supply on several helium cooled reactor projects including Ft. St. Vrain (US), GT-MHR (Russia), PBMR (South Africa), GTHTR (Japan), and most recently HTR-PM (China), this paper addresses many of the design considerations for such machinery and how the application of magnetic bearings directly affects machinery reliability and availability, operability, and maintainability. Remote inspection and diagnostics are a key focus of this paper. (author)

  4. Highly efficient and reliable high power LEDs with patterned sapphire substrate and strip-shaped distributed current blocking layer

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shengjun [School of Power and Mechanical Engineering, Wuhan University, Wuhan 430072 (China); State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Yuan, Shu; Liu, Yingce [Quantum Wafer Inc., Foshan 528251 (China); Guo, L. Jay [Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109 (United States); Liu, Sheng, E-mail: victor_liu63@126.com [School of Power and Mechanical Engineering, Wuhan University, Wuhan 430072 (China); Ding, Han [State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2015-11-15

    Highlights: • TEM is used to characterize threading dislocations existing in the GaN epitaxial layer. • The effect of threading dislocations on the optical and electrical performance of LEDs is discussed. • A strip-shaped SiO{sub 2} DCBL is designed to improve the current spreading performance of LEDs. Abstract: We demonstrated that an improvement in the optical and electrical performance of high power LEDs was achieved using a cone-shaped patterned sapphire substrate (PSS) and a strip-shaped SiO{sub 2} distributed current blocking layer (DCBL). Transmission electron microscopy (TEM) observation showed that the densities of both screw and edge dislocations in the GaN epitaxial layer grown on PSS were much lower than in the GaN epitaxial layer grown on a flat sapphire substrate (FSS). Compared to the LED grown on FSS, the LED grown on PSS showed a higher sub-threshold forward-bias voltage and a lower reverse leakage current, resulting in enhanced device reliability. We also designed a strip-shaped SiO{sub 2} DCBL beneath the strip-shaped p-electrode, which prevents the current from being concentrated in regions immediately adjacent to the strip-shaped p-electrode, thereby facilitating uniform current spreading into the active region. By implementing the strip-shaped SiO{sub 2} DCBL, the light output power of the high power PSS-LED chip could be further increased by 13%.

  5. Study for a failsafe trigger generation system for the Large Hadron Collider beam dump kicker magnets

    CERN Document Server

    Rampl, M

    1999-01-01

    The 27 km particle accelerator Large Hadron Collider (LHC), due to be completed at the European Laboratory for Particle Physics (CERN) in 2005, will operate with extremely high beam energies (~334 MJ per beam). Since the equipment, and in particular the superconducting magnets, must be protected from damage caused by these high-energy beams, the beam dump must be able to absorb this energy very reliably at every stage of operation. The kicker magnets that extract the particles from the accelerator are synchronised with the beam by the trigger generation system. This thesis is a first study of this electronic module and its functions. A special synchronisation circuit and a very reliable electronic switch were developed. Most functions were implemented in a gate array to improve reliability and to facilitate modifications during the test stage. The study also comprises the complete concept for the prototype of the trigger generation system. During all project stages reliability was always the main determin...

  6. A Compact, Light-weight, Reliable and Highly Efficient Heat Pump for, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — RTI proposes to develop an efficient, reliable, compact and lightweight heat pump for space applications. The proposed effort is expected to lead to (at the end of...

  7. High-Efficiency Reliable Stirling Generator for Space Exploration Missions, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA needs advanced power-conversion technologies to improve the efficiency and reliability of power conversion for space exploration missions. We propose to develop...

  8. Establishment of quality, reliability and design standards for low, medium, and high power microwave hybrid microcircuits

    Science.gov (United States)

    Robinson, E. A.

    1973-01-01

    Quality, reliability, and design standards for microwave hybrid microcircuits were established. The MSFC Standard 85M03926 for hybrid microcircuits was reviewed and modifications were generated for use with microwave hybrid microcircuits. The results for reliability tests of microwave thin film capacitors, transistors, and microwave circuits are presented. Twenty-two microwave receivers were tested for 13,500 unit hours. The result of 111,121 module burn-in and operating hours for an integrated solid state transceiver module is reported.

  9. The LHCb trigger

    International Nuclear Information System (INIS)

    Korolko, I.

    1998-01-01

    This paper describes progress in the development of the LHCb trigger system since the letter of intent. The trigger philosophy has significantly changed, resulting in an increase of trigger efficiency for signal B events. It is proposed to implement a level-1 vertex topology trigger in specialised hardware. (orig.)

  10. Reliable discrimination of high explosive and chemical/biological artillery using acoustic UGS

    Science.gov (United States)

    Hohil, Myron E.; Desai, Sachi

    2005-10-01

    Reliable discrimination between conventional and simulated chemical/biological artillery rounds is demonstrated using acoustic signals produced during detonation. Distinct characteristics arise within the different airburst signatures because high-explosive warheads emphasize concussive and shrapnel effects, while chemical/biological warheads are designed to disperse their contents over large areas, and therefore employ a slower-burning, less intense explosive to mix and spread their contents. The ensuing blast waves are readily characterized by variations in the corresponding peak pressure and rise time of the blast, differences in the ratio of positive pressure amplitude to negative amplitude, and variations in the overall duration of the resulting waveform. We show that highly reliable discrimination (>98%) between conventional and potentially chemical/biological artillery is achieved at ranges exceeding 3 km using a feedforward neural network classifier trained on a feature space derived from the distribution of wavelet coefficients found within the different levels of the multiresolution decomposition.
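    The record above attributes the >98% discrimination to a classifier trained on the distribution of wavelet coefficients across the levels of a multiresolution decomposition. As an illustrative sketch only (not the authors' code), such a front end can be approximated with a plain Haar decomposition whose per-level detail-energy fractions serve as features; the wavelet choice, level count, and all names below are assumptions:

    ```python
    import numpy as np

    def haar_multires_features(signal, levels=4):
        """Decompose a 1-D signal with an orthonormal Haar wavelet transform
        and return the detail-coefficient energy fraction at each level.
        Illustrative stand-in for the record's multiresolution feature space."""
        x = np.asarray(signal, dtype=float)
        detail_energies = []
        for _ in range(levels):
            if len(x) % 2:                 # pad to even length if needed
                x = np.append(x, x[-1])
            approx = (x[::2] + x[1::2]) / np.sqrt(2)   # low-pass half-band
            detail = (x[::2] - x[1::2]) / np.sqrt(2)   # high-pass half-band
            detail_energies.append(np.sum(detail ** 2))
            x = approx                                  # recurse on coarse scale
        feats = np.array(detail_energies)
        total = feats.sum() + np.sum(x ** 2)            # total energy (Parseval)
        return feats / total                            # energy fraction per level

    # A blast with a fast rise/decay concentrates energy at fine scales,
    # mirroring the peak-pressure/rise-time differences described above.
    t = np.linspace(0.0, 1.0, 256)
    sharp = np.exp(-80.0 * t)    # fast-decaying, high-explosive-like pulse
    slow = np.exp(-5.0 * t)     # slower, less intense burn
    f_sharp = haar_multires_features(sharp)
    f_slow = haar_multires_features(slow)
    print(f_sharp, f_slow)
    ```

    Feature vectors of this kind separate the two signature classes because the sharper waveform places a larger fraction of its energy in the fine-scale detail levels; the record's feedforward neural network would then be trained on such vectors.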

  11. Feasibility of prospectively ECG-triggered high-pitch coronary CT angiography with 30 mL iodinated contrast agent at 70 kVp: initial experience

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Long Jiang; Qi, Li; Tang, Chun Xiang; Zhou, Chang Sheng; Ji, Xue Man; Lu, Guang Ming [Medical School of Nanjing University, Department of Medical Imaging, Jinling Hospital, Nanjing, Jiangsu (China); Wang, Jing [Medical School of Nanjing University, Department of Cardiology, Jinling Hospital, Nanjing, Jiangsu (China); Spearman, James V.; De Cecco, Carlo Nicola; Meinel, Felix G. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Schoepf, U.J. [Medical School of Nanjing University, Department of Medical Imaging, Jinling Hospital, Nanjing, Jiangsu (China); Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States)

    2014-07-15

    To evaluate the feasibility, image quality and radiation dose of prospectively ECG-triggered high-pitch coronary CT angiography (CCTA) with 30 mL contrast agent at 70 kVp. Fifty-eight patients with suspected coronary artery disease, a body mass index (BMI) of less than 25 kg/m{sup 2}, sinus rhythm and a heart rate (HR) of less than 70 beats per minute (bpm) were prospectively enrolled in this study. Thirty mL of 370 mg I/mL iodinated contrast agent was administered at a flow rate of 5 mL/s. All patients underwent prospectively ECG-triggered high-pitch CCTA on a second-generation dual-source CT system at 70 kVp using automated tube current modulation. Fifty-six patients (96.6 %) had diagnostic CCTA images and two patients (3.4 %) each had one vessel with poor image quality rated as non-diagnostic. No significant effects of HR, HR variability and BMI on CCTA image quality were observed (all P > 0.05). Effective dose was 0.17 ± 0.02 mSv and the size-specific dose estimate was 1.03 ± 0.13 mGy. Prospectively ECG-triggered high-pitch CCTA at 70 kVp with 30 mL of contrast agent can provide diagnostic image quality at a radiation dose of less than 0.2 mSv in patients with a BMI of less than 25 kg/m{sup 2} and an HR of less than 70 bpm. (orig.)

  12. New approach for high reliability, low loss splicing between silica and ZBLAN fibers

    Science.gov (United States)

    Carbonnier, Robin; Zheng, Wenxin

    2018-02-01

    In the past decade, ZBLAN (ZrF4-BaF2-LaF3-NaF) fibers have drawn increasing interest for laser operation at wavelengths where fused-silica (SiO2) fibers do not perform well. One limitation to the expansion of ZBLAN fiber lasers today is the difficulty of efficiently injecting and extracting light in/from the guiding medium using SiO2 fibers. Although free-space and butt coupling have provided acceptable results, consistent and long-lasting physical joints between SiO2 and ZBLAN fibers will allow smaller, cheaper, and more robust component manufacturing. While low-loss splices have been reported using a traditional splicing approach, the very low mechanical strength of the joint makes it difficult to scale. Difficulties in achieving a strong bond are mainly due to the large difference in transition temperature between ZBLAN and SiO2 fibers (~260 °C vs ~1175 °C). This paper presents results obtained by using the high thermal expansion coefficient of the ZBLAN fiber to encapsulate a smaller SiO2 fiber. A CO2 laser glass processing system was used to control the expansion and contraction of the ZBLAN material during the splicing process for optimum reliability. This method produced splices between 125 μm ZBLAN and 80 μm SiO2 fibers with an average transmission loss of 0.225 dB (measured at 1550 nm) and an average ultimate tensile strength of 121.4 gf. The resulting splices can be durably packaged without excessive care. Other combinations using 125 μm SiO2 fibers tapered to 80 μm are also discussed.

  13. Highly reliable field electron emitters produced from reproducible damage-free carbon nanotube composite pastes with optimal inorganic fillers

    Science.gov (United States)

    Kim, Jae-Woo; Jeong, Jin-Woo; Kang, Jun-Tae; Choi, Sungyoul; Ahn, Seungjoon; Song, Yoon-Ho

    2014-02-01

    Highly reliable field electron emitters were developed using a formulation for reproducible damage-free carbon nanotube (CNT) composite pastes with optimal inorganic fillers and a ball-milling method. We carefully controlled the ball-milling sequence and time to avoid any damage to the CNTs, which incorporated fillers that were fully dispersed as paste constituents. The field electron emitters fabricated by printing the CNT pastes were found to exhibit almost perfect adhesion of the CNT emitters to the cathode, along with good uniformity and reproducibility. A high field enhancement factor of around 10 000 was achieved from the CNT field emitters developed. By selecting nano-sized metal alloys and oxides and using the same formulation sequence, we also developed reliable field emitters that could survive high-temperature post processing. These field emitters had high durability to post vacuum annealing at 950 °C, guaranteeing survival of the brazing process used in the sealing of field emission x-ray tubes. We evaluated the field emitters in a triode configuration in the harsh environment of a tiny vacuum-sealed vessel and observed very reliable operation for 30 h at a high current density of 350 mA cm⁻². The CNT pastes and related field emitters that were developed could be usefully applied in reliable field emission devices.

  14. Headache triggers in the US military.

    Science.gov (United States)

    Theeler, Brett J; Kenney, Kimbra; Prokhorenko, Olga A; Fideli, Ulgen S; Campbell, William; Erickson, Jay C

    2010-05-01

    Headaches can be triggered by a variety of factors. Military service members have a high prevalence of headache but the factors triggering headaches in military troops have not been identified. The objective of this study is to determine headache triggers in soldiers and military beneficiaries seeking specialty care for headaches. A total of 172 consecutive US Army soldiers and military dependents (civilians) evaluated at the headache clinics of 2 US Army Medical Centers completed a standardized questionnaire about their headache triggers. A total of 150 (87%) patients were active-duty military members and 22 (13%) patients were civilians. In total, 77% of subjects had migraine; 89% of patients reported at least one headache trigger with a mean of 8.3 triggers per patient. A wide variety of headache triggers was seen with the most common categories being environmental factors (74%), stress (67%), consumption-related factors (60%), and fatigue-related factors (57%). The types of headache triggers identified in active-duty service members were similar to those seen in civilians. Stress-related triggers were significantly more common in soldiers. There were no significant differences in trigger types between soldiers with and without a history of head trauma. Headaches in military service members are triggered mostly by the same factors as in civilians with stress being the most common trigger. Knowledge of headache triggers may be useful for developing strategies that reduce headache occurrence in the military.

  15. Material Selection for Cable Gland to Improved Reliability of the High-hazard Industries

    Science.gov (United States)

    Vashchuk, S. P.; Slobodyan, S. M.; Deeva, V. S.; Vashchuk, D. S.

    2018-01-01

    Sealed cable glands (SCG) are used to ensure the safe connection of sheathed single-wire, pilot, control, and radio-frequency cables at high-hazard production facilities such as nuclear power plants. In this paper, we investigate the specifics of material selection for SCGs aimed at hazardous man-made facilities and discuss safe working conditions for cable glands. The research indicates that cables made of sintered powdered metals improve reliability due to their material properties. A number of studies have demonstrated verification of the material selection. Our findings indicate that double-glazed sealed units could further enhance reliability. We evaluated sample reliability under fire conditions, seismic load, and pressure-containment failure, using samples of mineral-insulated thermocouple cable.

  16. SRC: FenixOS - A Research Operating System Focused on High Scalability and Reliability

    DEFF Research Database (Denmark)

    Passas, Stavros; Karlsson, Sven

    2011-01-01

    Computer systems keep increasing in size. Systems scale in the number of processing units, memories and peripheral devices. This creates many and diverse architectural trade-offs that existing operating systems are not able to address. We are designing and implementing FenixOS, a new operating system that aims to improve the state of the art in scalability and reliability. We achieve scalability through limiting data sharing when possible, and through extensive use of lock-free data structures. Reliability is addressed with a careful re-design of the programming interface and structure of the operating system.

  17. Leadership in organizations with high security and reliability requirements; Liderazgo en organizaciones con altos requisitos de seguridad y fiabilidad

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, F.

    2013-07-01

    Developing leadership skills in organizations is key to ensuring the sustainability of excellent results in industries with high safety and reliability requirements. In order to have a leadership development model specific to this type of organization, Tecnatom initiated an internal project in 2011 to find and adapt a competency model to these requirements.

  18. Is Learner Self-Assessment Reliable and Valid in a Web-Based Portfolio Environment for High School Students?

    Science.gov (United States)

    Chang, Chi-Cheng; Liang, Chaoyun; Chen, Yi-Hui

    2013-01-01

    This study explored the reliability and validity of Web-based portfolio self-assessment. Participants were 72 senior high school students enrolled in a computer application course. The students created learning portfolios, viewed peers' work, and performed self-assessment on the Web-based portfolio assessment system. The results indicated: 1)…

  19. Intra- and interrater reliability of the Chicago Classification of achalasia subtypes in pediatric high-resolution esophageal manometry (HRM) recordings

    NARCIS (Netherlands)

    Singendonk, M. M. J.; Rosen, R.; Oors, J.; Rommel, N.; van Wijk, M. P.; Benninga, M. A.; Nurko, S.; Omari, T. I.

    2017-01-01

    Background: Subtyping achalasia by high-resolution manometry (HRM) is clinically relevant, as response to therapy and prognosis have been shown to vary accordingly. The aim of this study was to assess inter- and intrarater reliability of diagnosing achalasia and achalasia subtyping in children using the

  20. Are We Hoping for a Bounce? A Study on Resilience and Human Relations in a High Reliability Organization

    Science.gov (United States)

    2016-03-01

    This March 2016 Naval Postgraduate School thesis by Robert D. Johns studies resilience and human relations in a high reliability organization, examining how workplace stressors can negatively impact an organization's resilience (Gittell, 2008, p. 26), and drawing on the literature on organizational resilience and the challenge for human resource management.